
Answer Key:

2 Marks:

1. Types of computer security

The goal of computer security is protecting valuable assets.

Types of security and privacy

 Access control
 Anti-keyloggers
 Anti-malware
 Anti-spyware
 Anti-subversion software
 Anti-tamper software
 Antivirus software
 Cryptographic software
 Computer-aided dispatch (CAD)
 Firewall
 Intrusion detection system (IDS)
 Intrusion prevention system (IPS)
 Log management software
 Records management
 Sandbox
 Security information management
 Anti-theft
 Parental control
 Software and operating system updating
To study different ways of protection, we use a framework that describes how assets may be
harmed and how to counter or mitigate that harm.

A vulnerability is a weakness in the system, for example, in procedures, design, or
implementation, that might be exploited to cause loss or harm. For instance, a particular
system may be vulnerable to unauthorized data manipulation because the system does not
verify a user’s identity before allowing data access.

A threat to a computing system is a set of circumstances that has the potential to cause loss
or harm.

To see the difference between a threat and a vulnerability, consider the illustration in Figure
1-4. Here, a wall is holding water back. The water to the left of the wall is a threat to the man
on the right of the wall: the water could rise, overflowing onto the man, or the pressure of the
water could cause the wall to collapse. So the threat of harm is the potential for the man to
get wet, get hurt, or be drowned. For now, the wall is intact, so the threat to the man is
unrealized.

Spoofing is the act of masquerading as a valid entity through falsification of data (such as an IP
address or username), in order to gain access to information or resources that one is otherwise
unauthorized to obtain.[10][11] There are several types of spoofing, including:

 Email spoofing, where an attacker forges the sending (From, or source) address of an email.
 IP address spoofing, where an attacker alters the source IP address in a network packet to
hide their identity or impersonate another computing system.
 MAC spoofing, where an attacker modifies the Media Access Control (MAC) address of their
network interface to pose as a valid user on a network.
Biometric spoofing, where an attacker produces a fake biometric sample to pose as another user.

3. Principles of security

Computer security is the protection of the items you value, called the assets of a computer or
computer system. There are many types of assets, involving hardware, software, data, people,
processes, or combinations of these. To determine what to protect, we must first identify what
has value and to whom.
The meaning of computer security
The meaning of the term computer security has evolved in recent years. Before the problem
of data security became widely publicized in the media, most people’s idea of computer
security focused on the physical machine. Traditionally, computer facilities have been
physically protected for three reasons:
• To prevent theft of or damage to the hardware
• To prevent theft of or damage to the information
• To prevent disruption of service
Computer security is security applied to computing devices such as
computers and smartphones, as well as computer networks such as private and public
networks, including the whole Internet. The field covers all the processes and mechanisms by
which digital equipment, information, and services are protected from unintended or
unauthorized access, change, or destruction, and is of growing importance in line with the
increasing reliance on computer systems in most societies worldwide. It includes physical
security to prevent theft of equipment, and information security to protect the data on that
equipment. It is sometimes referred to as "cyber security" or "IT security", though these terms
generally do not refer to physical security (locks and such).


The major technical insight that emerged at this time was that a secure operating system needed a
small, verifiably correct foundation upon which the security of the system can be derived. This
foundation was called a security kernel. A security kernel is defined as the hardware and
software necessary to realize the reference monitor abstraction. A security kernel design includes
hardware mechanisms leveraged by a minimal software trusted computing base (TCB) to achieve
the reference monitor concept's guarantees of tamperproofing, complete mediation, and verifiability.

5. Vulnerability in computer security

A vulnerability is a weakness in the security of the computer system, for example, in
procedures, design, or implementation, that might be exploited to cause loss or harm. Think
of a bank, with an armed guard at the front door, bulletproof glass protecting the tellers, and a
heavy metal vault requiring multiple keys for entry. To rob a bank, you would have to think
of how to exploit a weakness not covered by these defenses. For example, you might bribe a
teller or pose as a maintenance worker.

Part B



Consider what we mean when we say that a program is "secure." We know that security
implies some degree of trust that the program enforces expected confidentiality, integrity,
and availability. From the point of view of a program or a programmer, how can we look at a
software component or code fragment and assess its security? This question is, of course,
similar to the problem of assessing software quality in general. One way to assess security or
quality is to ask people to name the characteristics of software that contribute to its overall
security. However, we are likely to get different answers from different people. This
difference occurs because the importance of the characteristics depends on who is analyzing
the software. For example, one person may decide that code is secure because it takes too
long to break through its security controls. And someone else may decide code is secure if it
has run for a period of time with no apparent failures. But a third person may decide that any
potential fault in meeting security requirements makes code insecure.

Early work in computer security was based on the paradigm of "penetrate and patch," in
which analysts searched for and repaired faults. Often, a top-quality "tiger team" would be
convened to test a system's security by attempting to cause it to fail. The test was considered
to be a "proof" of security; if the system withstood the attacks, it was considered secure.
Unfortunately, far too often the proof became a counterexample, in which not just one but
several serious security problems were uncovered. The problem discovery in turn led to a
rapid effort to "patch" the system to repair or restore the security. However, the patch efforts
were largely useless, making the system less secure rather than more secure because they
frequently introduced new faults. There are at least four reasons why:

1. The pressure to repair a specific problem encouraged a narrow focus on the fault itself and
not on its context. In particular, the analysts paid attention to the immediate cause of the
failure and not to the underlying design or requirements faults.

2. The fault often had nonobvious side effects in places other than the immediate area of the fault.

3. Fixing one problem often caused a failure somewhere else, or the patch addressed the
problem in only one place, not in other related places.

4. The fault could not be fixed properly because system functionality or performance would
suffer as a consequence. The inadequacies of penetrate-and-patch led researchers to seek a
better way to be confident that code meets its security requirements.

One way to do that is to compare the requirements with the behavior. That is, to understand
program security, we can examine programs to see whether they behave as their designers
intended or users expected. We call such unexpected behavior a program security flaw; it is
inappropriate program behavior caused by program vulnerability. Program security flaws can
derive from any kind of software fault. That is, they cover everything from a
misunderstanding of program requirements to a one-character error in coding or even typing.
The flaws can result from problems in a single code component or from the failure of several
programs or program pieces to interact compatibly through a shared interface. The security
flaws can reflect code that was intentionally designed or coded to be malicious or code that
was simply developed in a sloppy or misguided way. Thus, it makes sense to divide program
flaws into two separate logical categories: inadvertent human errors versus malicious,
intentionally induced flaws.

Types of Flaws To aid our understanding of the problems and their prevention or correction,
we can define categories that distinguish one kind of problem from another. For example,
Landwehr et al. present a taxonomy of program flaws, dividing them first into intentional
and inadvertent flaws. They further divide intentional flaws into malicious and
nonmalicious ones.

In the taxonomy, the inadvertent flaws fall into six categories:

 validation error (incomplete or inconsistent): permission checks

 domain error: controlled access to data
 serialization and aliasing: program flow order
 inadequate identification and authentication: basis for authorization
 boundary condition violation: failure on first or last case
 other exploitable logic errors

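The first category above, incomplete validation, can be illustrated with a small filename check. The sketch below is hypothetical (the function names are invented for illustration, and real permission checking involves far more than string inspection): the naive version rejects only paths that begin with "../" and misses traversal sequences embedded later in the path, while the fuller version rejects a ".." component anywhere.

```c
#include <assert.h>
#include <string.h>

/* Incomplete validation: rejects names that start with "../"
   but misses traversal sequences embedded later in the path. */
int naive_check(const char *name) {
    return strncmp(name, "../", 3) != 0;
}

/* More complete validation: rejects a ".." component anywhere. */
int better_check(const char *name) {
    size_t n = strlen(name);
    if (strcmp(name, "..") == 0) return 0;
    if (strncmp(name, "../", 3) == 0) return 0;
    if (strstr(name, "/../") != NULL) return 0;
    if (n >= 3 && strcmp(name + n - 3, "/..") == 0) return 0;
    return 1;
}
```

The naive version accepts "logs/../../etc/passwd", which is exactly the sort of incomplete or inconsistent check the taxonomy's first category describes.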

Being human, programmers and other developers make many mistakes, most of which are
unintentional and nonmalicious. Many such errors cause program malfunctions but do not
lead to more serious security vulnerabilities. However, a few classes of errors have plagued
programmers and security professionals for decades, and there is no reason to believe they
will disappear. In this section we consider three classic error types that have enabled many
recent security breaches. We explain each type, why it is relevant to security, and how it can
be prevented or mitigated.

Buffer Overflows

A buffer overflow is the computing equivalent of trying to pour two liters of water into
a one-liter pitcher: some water is going to spill out and make a mess. And in computing, what
a mess these errors have made!


A buffer (or array or string) is a space in which data can be held. A buffer resides in memory.
Because memory is finite, a buffer's capacity is finite. For this reason, in many programming
languages the programmer must declare the buffer's maximum size so that the compiler can
set aside that amount of space.

Let us look at an example to see how buffer overflows can happen. Suppose a C language
program contains the declaration:

char sample[10];

The compiler sets aside 10 bytes to store this buffer, one byte for each of the ten elements of
the array, sample[0] through sample[9]. Now we execute the statement:

sample[10] = 'A';
The subscript is out of bounds (that is, it does not fall between 0 and 9), so we have a
problem. The nicest outcome (from a security perspective) is for the compiler to detect the
problem and mark the error during compilation. However, if the statement were

sample[i] = 'A';

we could not identify the problem until i was set during execution to a too-big subscript. It
would be useful if, during execution, the system produced an error message warning of a
subscript out of bounds. Unfortunately, in some languages, buffer sizes do not have to be
predefined, so there is no way to detect an out-of-bounds error. More importantly, the code
needed to check each subscript against its potential maximum value takes time and space
during execution, and the resources are applied to catch a problem that occurs relatively
infrequently. Even if the compiler were careful in analyzing the buffer declaration and use,
this same problem can be caused with pointers, for which there is no reasonable way to
define a proper limit. Thus, some compilers do not generate the code to check for exceeding
bounds. Let us examine this problem more closely. It is important to recognize that the
potential overflow causes a serious problem only in some instances. The problem's
occurrence depends on what is adjacent to the array sample. For example, suppose each of
the ten elements of the array sample is filled with the letter A and the erroneous reference
uses the letter B, as follows:

for (i=0; i<=9; i++) sample[i] = 'A';
sample[10] = 'B';

All program and data elements are in memory during execution, sharing space with the
operating system, other code, and resident routines. So there are four cases to consider in
deciding where the 'B' goes. If the extra character overflows into the user's data space, it
simply overwrites an existing variable value (or it may be written into an as-yet unused
location), perhaps affecting the program's result, but affecting no other program or data.
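The subscript check that the text says compilers often omit can be written explicitly by the programmer. The helper below is a hypothetical sketch, not a standard library function; it refuses any write outside buf[0] through buf[len-1].

```c
#include <assert.h>
#include <stddef.h>

/* Explicit bounds check: store value at buf[index] only when the
   index falls inside the declared capacity; otherwise refuse. */
int checked_store(char *buf, size_t len, size_t index, char value) {
    if (index >= len) return 0;   /* out of bounds, as with sample[10] */
    buf[index] = value;
    return 1;
}
```

With this guard, the erroneous sample[10] = 'B' becomes checked_store(sample, 10, 10, 'B'), which is rejected instead of silently overwriting whatever is adjacent to the array.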

7.i Compare Threats and harm

Main Types of Computer Security Threats That Cause Harm


At the other end of every security breach is an individual with malicious intent. Most often,
businesses are targeted by hackers for financial gain. These predators are seeking out
opportunities to capitalize on vulnerabilities, and they are the reason why your organization
needs to be on high alert.

Viruses

Viruses are dangerous and costly, and an infection could be happening right now if you don’t
have the proper protocols in place to ensure prevention. A virus is a piece of software created to
damage a computer. The program replicates and executes itself, interfering with the way a
computer operates. It can steal data, corrupt your files, or delete them altogether, which is a
menacing threat to any business.

A virus may also leverage other programs on the machine, such as email, to infect additional
computers, and it can be transmitted by a user via a network, USB stick or other media.

Spyware


This malicious software does exactly what its name suggests: spy on the user without their
knowledge or permission. If a spyware program is installed on a computer in your organization,
the criminal who executed it has the ability to monitor activity on that device, collecting
information to use against the user or the business (e.g., financial data, login information,
website visits).

Some spyware can detect keystrokes, redirect web browsers, alter computer settings or install
other dangerous programs. Therefore, it is critical to put protections in place -- and update them
consistently -- to thwart spyware attacks.

Adware


When unwanted advertisements start appearing on a computer, it has been victimized by adware.
Your employees may accidentally download adware while trying to access free software, and it
can be used to retrieve information without permission or knowledge as well as redirect your
users’ browsers.

Phishing

A phishing scam tricks an internal user into providing information such as usernames and
passwords that can be used to breach your system. This information is solicited from employees
through email and disguised as legitimate requests (e.g., a vendor or financial institution asking
for login details in order to fix an account or resolve an issue). Once the recipient hands over the
sensitive information, the hacker gains the access they need to lock up, steal or otherwise
compromise your company’s critical data.

Some phishing techniques use keyloggers in combination with sophisticated tracking
components to target specific information and organizations. There are also spear-phishing
emails that result in a small piece of malware being downloaded to the user's computer without
their knowledge, unleashing a network breach that may go undetected for long periods of time.

Ultimately, a single phishing attack can endanger the business’s entire network and leave every
last file exposed.

Worms


Wiggling its way into your network, a worm is deployed to self-replicate from one computer to
another. What makes it different from a virus, however, is that it requires no user interaction in
order to spread.

This software is designed to reproduce in large quantities in a very short period of time, and it can
both wreak havoc on your network performance and be used to launch other malicious attacks
throughout your system.

Spam

You’re probably already familiar with spam, as this junk email tends to clog up business servers
and annoy recipients across the organization.

Spam becomes a computer security threat when it contains harmful links, overloads your mail
server or is harnessed to take over a user’s computer and distribute additional spam.

Botnets


A botnet can be used for anything from targeting attacks on servers to running spam email
campaigns. As botnets typically involve so many computers, many businesses find them difficult
to stop.

Basically, this computer security threat is deployed by a botmaster, who commands a number of
bots, or compromised computers, to run malicious activities over an Internet connection. The
collection of infected computers is often referred to as a “zombie army,” carrying out the ill
intent of the botmaster.

If your organization’s network of computers is overtaken by a botnet, your system could be
subsequently used to assault other networks by the likes of viruses, worms, Trojan horses and
DDoS attacks.

Rootkits


Imagine having a cyber attacker gain complete control over one of your computers or, worse, an
entire network of them. That is what a rootkit, or collection of software implemented to procure
administrator-level access, is designed to accomplish.
A hacker obtains this access through other threats and vulnerabilities, such as phishing scams,
spyware or password weaknesses. The rootkit has the ability to go undetected and enables the
originator to modify existing software -- even the security applications employed to protect your
systems.

DoS Attacks

In a DoS (denial-of-service) attack, your company’s website or web service can be rendered
unavailable to users. Often, these attacks are used against businesses for ransom or blackmail
purposes.

Perhaps the most well-known version is DDoS (distributed denial of service), which involves
bombarding your server with traffic and requests in order to overwhelm and shut down the
system.

With the system and its defenses down, an intruder has the capability to confiscate data or hold
your operation hostage.

Don’t allow your organization to be terrorized by these computer security threats. If you don’t
have one already, formulate a strong plan to safeguard your business’s critical data and protect
your assets.

ii. The browser side attack

Table 3-1. Types of Malicious Code.

Code Type Characteristics

Virus Attaches itself to program and propagates copies of itself to other programs

Trojan horse Contains unexpected, additional functionality

Logic bomb Triggers action when condition occurs

Time bomb Triggers action when specified time occurs

Trapdoor Allows unauthorized access to functionality

Worm Propagates copies of itself through a network

Rabbit Replicates itself without limit to exhaust resources

Because "virus" is the popular name given to all forms of malicious code and because fuzzy lines
exist between different kinds of malicious code, we are not too restrictive in the following
discussion. We want to look at how malicious code spreads, how it is activated, and what effect
it can have. A virus is a convenient term for mobile malicious code, so in the following sections
we use the term "virus" almost exclusively. The points made apply also to other forms of
malicious code.

How Viruses Attach A printed copy of a virus does nothing and threatens no one. Even
executable virus code sitting on a disk does nothing. What triggers a virus to start replicating?
For a virus to do its malicious work and spread itself, it must be activated by being executed.
Fortunately for virus writers but unfortunately for the rest of us, there are many ways to ensure
that programs will be executed on a running computer. For example, recall the SETUP program
that you initiate on your computer. It may call dozens or hundreds of other programs, some on
the distribution medium, some already residing on the computer, some in memory. If any one of
these programs contains a virus, the virus code could be activated. Let us see how. Suppose the
virus code were in a program on the distribution medium, such as a CD; when executed, the
virus could install itself on a permanent storage medium (typically, a hard disk) and also in any
and all executing programs in memory. Human intervention is necessary to start the process; a
human being puts the virus on the distribution medium, and perhaps another initiates the
execution of the program to which the virus is attached.

Appended Viruses A program virus attaches itself to a program; then, whenever the program is
run, the virus is activated. This kind of attachment is usually easy to program. In the simplest
case, a virus inserts a copy of itself into the executable program file before the first executable
instruction. Then, all the virus instructions execute first; after the last virus instruction, control
flows naturally to what used to be the first program instruction.

Viruses That Surround a Program An alternative to the attachment is a virus that runs the
original program but has control before and after its execution. For example, a virus writer might
want to prevent the virus from being detected. If the virus is stored on disk, its presence will be
given away by its file name, or its size will affect the amount of space used on the disk.

Document Viruses Currently, the most popular virus type is what we call the document virus,
which is implemented within a formatted document, such as a written document, a database, a
slide presentation, a picture, or a spreadsheet. These documents are highly structured files that
contain both data (words or numbers) and commands (such as formulas, formatting controls,
links). The commands are part of a rich programming language, including macros, variables and
procedures, file accesses, and even system calls. The writer of a document virus uses any of the
features of the programming language to perform malicious actions.

How Viruses Gain Control The virus (V) has to be invoked instead of the target (T).
Essentially, the virus either has to seem to be T, saying effectively "I am T" or the virus has to
push T out of the way and become a substitute for T, saying effectively "Call me instead of T." A
more blatant virus can simply say "invoke me [you fool]." The virus can assume T's name by
replacing (or joining to) T's code in a file structure; this invocation technique is most appropriate
for ordinary programs. The virus can overwrite T in storage (simply replacing the copy of T in
storage, for example). Alternatively, the virus can change the pointers in the file table so that the
virus is located instead of T whenever T is accessed through the file system. These two cases are
shown in Figure 3-7. Figure 3-7. Virus Completely Replacing a Program.

Homes for Viruses The virus writer may find these qualities appealing in a virus:

It is hard to detect.
It is not easily destroyed or deactivated.
It spreads infection widely.
It can reinfect its home program or other programs.
It is easy to create.
It is machine independent and operating system independent.

Few viruses meet all these criteria. The virus writer chooses from these objectives when
deciding what the virus will do and where it will reside.

Memory-Resident Viruses Some parts of the operating system and most user programs
execute, terminate, and disappear, with their space in memory being available for anything
executed later. For very frequently used parts of the operating system and for a few
specialized user programs, it would take too long to reload the program each time it was
needed. Such code remains in memory and is called "resident" code.

Other Homes for Viruses A virus that does not take up residence in one of these cozy
establishments has to fend more for itself. But that is not to say that the virus will go
homeless. One popular home for a virus is an application program. Many applications, such
as word processors and spreadsheets, have a "macro" feature, by which a user can record a
series of commands and repeat them with one invocation.

Virus Signatures A virus cannot be completely invisible. Code must be stored somewhere,
and the code must be in memory to execute. Moreover, the virus executes in a particular
way, using certain methods to spread. Each of these characteristics yields a telltale pattern,
called a signature, that can be found by a program that looks for it. The virus's signature is
important for creating a program, called a virus scanner, that can detect and, in some cases,
remove viruses. The scanner searches memory and long-term storage, monitoring execution
and watching for the telltale signatures of viruses. For example, a scanner looking for signs
of the Code Red worm can look for a pattern containing the following characters:
NNNNNNNNNNNNNNN %u9090%u6858%ucbd3
%u7801%u9090%u6858%ucdb3%u7801%u9090%u6858 %ucbd3%u7801%u9090
%u9090%u8190%u00c3%u0003%ub00%u531b%u53ff %u0078%u0000%u00=a HTTP/1.0

When the scanner recognizes a known virus's pattern, it can then block the virus, inform the
user, and deactivate or remove the virus.
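The core of such a scanner can be sketched as a search for a known byte pattern inside a buffer. This is only an illustration of the idea (the function name is invented); real scanners add wildcard signatures, heuristics, and on-access hooks into the operating system.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy signature scanner: report whether the byte pattern sig
   occurs anywhere in the data buffer. */
int contains_signature(const unsigned char *data, size_t data_len,
                       const unsigned char *sig, size_t sig_len) {
    if (sig_len == 0 || sig_len > data_len) return 0;
    for (size_t i = 0; i + sig_len <= data_len; i++) {
        if (memcmp(data + i, sig, sig_len) == 0)
            return 1;   /* signature found */
    }
    return 0;
}
```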

Virus Effects and Causes.

Virus Effect: How It Is Caused

Remain in memory: intercept an interrupt by modifying the interrupt handler address table;
load self into a nontransient memory area.

Infect disks: intercept an interrupt; intercept an operating system call (to format a disk, for
example); modify a system file; modify an ordinary executable program.

Conceal self: intercept system calls that would reveal self and falsify the result; classify self
as a "hidden" file.

Spread infection: infect the boot sector; infect a systems program; infect an ordinary program;
infect data an ordinary program reads to control its execution.

Prevent deactivation: activate before the deactivating program and block deactivation; store
a copy to reinfect after deactivation.

Most virus writers seek to avoid detection for themselves and their creations. Because a
disk's boot sector is not visible to normal operations (for example, the contents of the boot
sector do not show on a directory listing), many virus writers hide their code there. A
resident virus can monitor disk accesses and fake the result of a disk operation that would
show the virus hidden in a boot sector by showing the data that should have been in the boot
sector (which the virus has moved elsewhere). There are no limits to the harm a virus can
cause. On the modest end, the virus might do nothing; some writers create viruses just to
show they can do it. Or the virus can be relatively benign, displaying a message on the
screen, sounding the buzzer, or playing music. From there, the problems can escalate. One
virus can erase files, another an entire disk; one virus can prevent a computer from booting,
and another can prevent writing to disk. The damage is bounded only by the creativity of the
virus's author.

Transmission Patterns A virus is effective only if it has some means of transmission from
one location to another. As we have already seen, viruses can travel during the boot process
by attaching to an executable file or traveling within data files. The travel itself occurs
during execution of an already infected program. Since a virus can execute any instructions a
program can, virus travel is not confined to any single medium or execution pattern.

8. The security in OS and rootkit

Security in operating system

The basis of protection is separation: keeping one user's objects separate from other
users. Rushby and Randell noted that separation in an operating system can occur in
several ways:
 physical separation, in which different processes use different physical objects,
such as separate printers for output requiring different levels of security
 temporal separation, in which processes having different security requirements are
executed at different times
 logical separation, in which users operate under the illusion that no other processes
exist, as when an operating system constrains a program's accesses so that the
program cannot access objects outside its permitted domain
cryptographic separation, in which processes conceal their data and computations
in such a way that they are unintelligible to outside processes.

Of course, combinations of two or more of these forms of separation are also possible.
 The categories of separation are listed roughly in increasing order of complexity to
implement, and, for the first three, in decreasing order of the security provided.
However, the first two approaches are very stringent and can lead to poor resource
utilization. Therefore, we would like to shift the burden of protection to the operating
system to allow concurrent execution of processes having different security needs.
But separation is only half the answer. We want to separate users and their objects,
but we also want to be able to provide sharing for some of those objects.
 For example, two users with different security levels may want to invoke the same
search algorithm or function call.
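Logical separation of the kind described above is often enforced with a base-and-bounds style check: a process may touch only addresses inside its own region. The struct and names below are an illustrative toy model, not any real MMU interface.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of confinement: the process owns [base, base + limit)
   and any address outside that region is refused. */
struct region { uint32_t base; uint32_t limit; };

int access_ok(struct region r, uint32_t addr) {
    return addr >= r.base && addr < r.base + r.limit;
}
```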

We would like the users to be able to share the algorithms and functions without
compromising their individual security needs. An operating system can support separation
and sharing in several ways, offering protection at any of several levels.
 Do not protect. Operating systems with no protection are appropriate when sensitive
procedures are being run at separate times.
 Isolate. When an operating system provides isolation, different processes running
concurrently are unaware of the presence of each other.
Each process has its own address space, files, and other objects. The operating
system must confine each process somehow so that the objects of the other processes
are completely concealed.
Share all or share nothing. With this form of protection, the owner of an
object declares it to be public or private. A public object is available to all users,
whereas a private object is available only to its owner.
Share via access limitation. With protection by access limitation, the operating
system checks the allowability of each user's potential access to an object. That is,
access control is implemented for a specific user and a specific object. Lists of
acceptable actions guide the operating system in determining whether a particular
user should have access to a particular object. In some sense, the operating system
acts as a guard between users and objects, ensuring that only authorized accesses occur.
Share by capabilities. An extension of limited access sharing, this form of protection
allows dynamic creation of sharing rights for objects. The degree of sharing can
depend on the owner or the subject, on the context of the computation, or on the
object itself.

Limit use of an object. This form of protection limits not just the
access to an object but the use made of that object after it has been accessed.
 For example, a user may be allowed to view a sensitive document, but not to print a
copy of it. More powerfully, a user may be allowed access to data in a database to
derive statistical summaries (such as average salary at a particular grade level), but
not to determine specific data values (salaries of individuals).
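The "share via access limitation" level can be sketched as a guard function that consults a list of permitted (user, object, action) triples before allowing an access. The structure and names below are illustrative assumptions, not taken from any real operating system; real systems keep access control lists or capability tables in protected kernel storage.

```c
#include <assert.h>
#include <string.h>

/* One permitted (user, object, action) triple. */
struct acl_entry { const char *user; const char *object; const char *action; };

/* Guard: permit the access only if it appears in the list; default deny. */
int access_allowed(const struct acl_entry *acl, int n,
                   const char *user, const char *object, const char *action) {
    for (int i = 0; i < n; i++) {
        if (strcmp(acl[i].user, user) == 0 &&
            strcmp(acl[i].object, object) == 0 &&
            strcmp(acl[i].action, action) == 0)
            return 1;   /* listed: permit */
    }
    return 0;           /* not listed: refuse */
}
```

A default-deny loop like this is the essence of the reference monitor idea mentioned earlier: every access is mediated, and anything not explicitly allowed is refused.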
A rootkit is a collection of computer software, typically malicious, designed to
enable access to a computer or areas of its software that is not otherwise allowed (for
example, to an unauthorized user) and often masks its existence or the existence of
other software.[1] The term rootkit is a concatenation of "root" (the traditional name of
the privileged account on Unix-like operating systems) and the word "kit" (which
refers to the software components that implement the tool). The term "rootkit" has
negative connotations through its association with malware.[1]
 Rootkit installation can be automated, or an attacker can install it after having
obtained root or Administrator access. Obtaining this access is a result of a direct attack
on a system, i.e., exploiting a known vulnerability (such as privilege escalation) or a
password (obtained by cracking or social engineering tactics like "phishing"). Once
installed, it becomes possible to hide the intrusion as well as to maintain privileged
access. The key is the root or administrator access. Full control over a system means
that existing software can be modified, including software that might otherwise be
used to detect or circumvent it.
 Rootkit detection is difficult because a rootkit may be able to subvert the software
that is intended to find it. Detection methods include using an alternative and trusted
operating system, behavioral-based methods, signature scanning, difference scanning,
and memory dump analysis. Removal can be complicated or practically impossible,
especially in cases where the rootkit resides in the kernel; reinstallation of the
operating system may be the only available solution to the problem.[2] When dealing
with firmware rootkits, removal may require hardware replacement or specialized equipment.
 A rootkit provides the attacker root access to the computer on which it has been
installed. This gives the attacker all rights and permissions to act as the administrator
of the computer. A rootkit typically intercepts API (application programming
interface) calls, such as requests to a file manager program like Windows Explorer.
Malware writers use this low-level system manipulation to make their programs
virtually undetectable. Some even create "kernel rootkits," which modify the kernel
component of the targeted operating system, corrupting the OS at such a low level
that the rootkit is difficult to detect and completely remove.
 A rootkit hypervisor is similar to other rootkits in that it gives the attacker control
over the infected machine.

9. Security in the Design of Operating Systems

Good design principles are always good for security, as we have noted above.
But several important design principles are quite particular to security and
essential for building a solid, trusted operating system.

These principles have been articulated well by Saltzer [SAL74] and Saltzer and
Schroeder [SAL75]:

 Least privilege. Each user and each program should operate by using
the fewest privileges possible. In this way, the damage from an
inadvertent or malicious attack is minimized.
 Economy of mechanism. The design of the protection system should
be small, simple, and straightforward. Such a protection system can be
carefully analyzed, exhaustively tested, perhaps verified, and relied on.
 Open design. The protection mechanism must not depend on the
ignorance of potential attackers; the mechanism should be public,
depending on secrecy of relatively few key items, such as a password
table. An open design is also available for extensive public scrutiny,
thereby providing independent confirmation of the design security.
 Complete mediation. Every access attempt must be checked. Both
direct access attempts (requests) and attempts to circumvent the access
checking mechanism should be considered, and the mechanism should
be positioned so that it cannot be circumvented.
 Permission based. The default condition should be denial of access. A
conservative designer identifies the items that should be accessible,
rather than those that should not.
 Separation of privilege. Ideally, access to objects should depend on
more than one condition, such as user authentication plus a
cryptographic key. In this way, someone who defeats one protection
system will not have complete access.
 Least common mechanism. Shared objects provide potential channels
for information flow. Systems employing physical or logical separation
reduce the risk from sharing.
 Ease of use. If a protection mechanism is easy to use, it is unlikely to
be avoided.
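Two of these principles, permission based (deny by default) and complete mediation (every access attempt checked), can be sketched together. The decorator below routes every call to a guarded operation through a single default-deny check; all names (GRANTS, guarded, delete_record) are hypothetical illustrations, not from the text:

```python
from functools import wraps

# Explicitly granted (user, operation) pairs; anything absent is denied.
GRANTS = {("alice", "delete_record")}

def guarded(op_name):
    """Complete mediation: no call path reaches the operation without this check."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if (user, op_name) not in GRANTS:   # permission based: default is denial
                raise PermissionError(f"{user} may not {op_name}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@guarded("delete_record")
def delete_record(user, record_id):
    return f"record {record_id} deleted by {user}"

print(delete_record("alice", 7))   # allowed: explicitly granted
```

An ungranted caller such as "bob" raises PermissionError, illustrating that the conservative designer lists what is accessible rather than what is not.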

Security Features of Ordinary Operating Systems

As described in Chapter 4, a multiprogramming operating system performs several functions that relate to security. To see
how, examine Figure 5-10, which illustrates how an operating system interacts with users,
provides services, and allocates resources.

Figure 5-10. Overview of an Operating System's Functions.

We can see that the system addresses several particular functions that involve computer security:
 User authentication. The operating system must identify each user who requests access
and must ascertain that the user is actually who he or she purports to be. The most
common authentication mechanism is password comparison.
 Memory protection. Each user's program must run in a portion of memory protected
against unauthorized accesses. The protection will certainly prevent outsiders' accesses,
and it may also control a user's own access to restricted parts of the program space.
Differential security, such as read, write, and execute, may be applied to parts of a user's
memory space. Memory protection is usually performed by hardware mechanisms, such
as paging or segmentation.
 File and I/O device access control. The operating system must protect user and system
files from access by unauthorized users. Similarly, I/O device use must be protected. Data
protection is usually achieved by table lookup, as with an access control matrix.
 Allocation and access control to general objects. Users need general objects, such as
constructs to permit concurrency and allow synchronization. However, access to these
objects must be controlled so that one user does not have a negative effect on other users.
Again, table lookup is the common means by which this protection is provided.
 Enforced sharing. Resources should be made available to users as appropriate. Sharing
brings about the need to guarantee integrity and consistency. Table lookup, combined
with integrity controls such as monitors or transaction processors, is often used to support
controlled sharing.
 Guaranteed fair service. All users expect CPU usage and other service to be provided so
that no user is indefinitely starved from receiving service. Hardware clocks combine with
scheduling disciplines to provide fairness. Hardware facilities and data tables combine to
provide control.
 Interprocess communication and synchronization. Executing processes sometimes need
to communicate with other processes or to synchronize their accesses to shared resources.
Operating systems provide these services by acting as a bridge between processes,
responding to process requests for asynchronous communication with other processes or
synchronization. Interprocess communication is mediated by access control tables.
 Protected operating system protection data. The operating system must maintain data
by which it can enforce security. Obviously, if these data are not protected against
unauthorized access (read, modify, and delete), the operating system cannot provide
enforcement. Various techniques, including encryption, hardware control, and isolation,
support isolation of operating system protection data.
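Since the list above names password comparison as the most common user authentication mechanism, here is a minimal sketch of how a system might store and verify passwords without keeping them in cleartext, using only the Python standard library. The function names and the iteration count are assumptions for illustration, not from the text:

```python
import hashlib
import hmac
import os

def make_entry(password: str) -> tuple[bytes, bytes]:
    """Store a random salt plus a slow salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_entry("s3cret")
print(verify("s3cret", salt, digest))  # True
print(verify("guess", salt, digest))   # False
```

The constant-time comparison avoids leaking, through timing, how many leading bytes of a guess were correct.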


10. Security Requirements in Databases

A database is a collection of data and a set of rules that organize the data by specifying certain
relationships among the data. Through these rules, the user describes a logical format for the
data. The data items are stored in a file, but the precise physical format of the file is of no
concern to the user. A database administrator is a person who defines the rules that organize the
data and also controls who should have access to what parts of the data. The user interacts with
the database through a program called a database manager or a database management system
(DBMS), informally known as a front end.

Components of Databases

The database file consists of records, each of which contains one related group of data. As shown
in the example in the table below, a record in a name and address file consists of one name and address.

Each record contains fields or elements, the elementary data items themselves. The fields in the
name and address record are NAME, ADDRESS, CITY, STATE, and ZIP (where ZIP is the U.S.
postal code). This database can be viewed as a two-dimensional table, where a record is a row
and each field of a record is an element of the table.

Table. Example of a Database.

NAME     ADDRESS         CITY      STATE   ZIP
ADAMS    212 Market St.  Columbus  OH      43210
BENCHLY  501 Union St.   Chicago   IL      60603
CARTER   411 Elm St.     Columbus  OH      43210

Not every database is easily represented as a single, compact table. The database in Figure 6-1
logically consists of three files with possibly different uses. These three files could be
represented as one large table, but that depiction may not improve the utility of or access to the data.

The basic problems (access control, exclusion of spurious data, authentication of users, and
reliability) have appeared in many contexts so far in this book.

Following is a list of requirements for database security.

 Physical database integrity. The data of a database are immune to physical
problems, such as power failures, and someone can reconstruct the database if it is
destroyed through a catastrophe.
 Logical database integrity. The structure of the database is preserved. With logical
integrity of a database, a modification to the value of one field does not affect other
fields, for example.
 Element integrity. The data contained in each element are accurate.
 Auditability. It is possible to track who or what has accessed (or modified) the
elements in the database.
 Access control. A user is allowed to access only authorized data, and different
users can be restricted to different modes of access (such as read or write).
 User authentication. Every user is positively identified, both for the audit trail and
for permission to access certain data.
 Availability. Users can access the database in general and all the data for which
they are authorized.
We briefly examine each of these requirements.

Integrity of the Database

If a database is to serve as a central repository of data, users must be able to trust the
accuracy of the data values.
This condition implies that the database administrator must be assured that updates
are performed only by authorized individuals. It also implies that the data must be
protected from corruption, either by an outside illegal program action or by an outside
force such as fire or a power failure.
Two situations can affect the integrity of a database: when the whole database is
damaged (as happens, for example, if its storage medium is damaged) or when
individual data items are unreadable. Integrity of the database as a whole is the
responsibility of the DBMS, the operating system, and the (human) computing system manager.
From the perspective of the operating system and the computing system manager,
databases and DBMSs are files and programs, respectively. Therefore, one way of
protecting the database as a whole is to regularly back up all files on the system.
These periodic backups can be adequate controls against catastrophic failure.
Sometimes it is important to be able to reconstruct the database at the point of a
failure. For instance, when the power fails suddenly, a bank's clients may be in the
middle of making transactions or students may be in the midst of registering online
for their classes. In these cases, we want to be able to restore the systems to a stable
point without forcing users to redo their recently completed transactions.
To handle these situations, the DBMS must maintain a log of transactions. For
example, suppose the banking system is designed so that a message is generated in a
log (electronic or paper or both) each time a transaction is processed. In the event of a
system failure, the system can obtain accurate account balances by reverting to a
backup copy of the database and reprocessing all later transactions from the log.
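The recovery scheme just described, reverting to a backup copy and then reprocessing the logged transactions, can be sketched as follows. The balance dictionary and the log of signed amounts are hypothetical simplifications of a real DBMS log:

```python
# Last stable backup of account balances, and the transactions logged since.
backup = {"acct1": 100, "acct2": 50}
log = [("acct1", -30), ("acct2", +70), ("acct1", +5)]

def recover(backup: dict, log: list) -> dict:
    """Restore the backup copy, then replay every logged transaction in order."""
    balances = dict(backup)            # never mutate the backup itself
    for account, delta in log:
        balances[account] += delta
    return balances

print(recover(backup, log))  # {'acct1': 75, 'acct2': 120}
```

Replaying in the original order matters: transactions on the same account must be applied in the sequence the log recorded them.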
Element Integrity

The integrity of database elements is their correctness or accuracy.
Ultimately, authorized users are responsible for entering correct data into databases.
However, users and programs make mistakes collecting data, computing results, and
entering values. Therefore, DBMSs sometimes take special action to help catch errors
as they are made and to correct errors after they are inserted.
This corrective action can be taken in three ways.
 First, the DBMS can apply field checks, activities that test for appropriate values in a
position. A field might be required to be numeric, an uppercase letter, or one of a set
of acceptable characters. The check ensures that a value falls within specified bounds
or is not greater than the sum of the values in two other fields. These checks prevent
simple errors as the data are entered.
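Such field checks can be sketched as one predicate per field, applied before a record is accepted. The field names mirror the NAME/STATE/ZIP example table earlier; the particular predicates are illustrative assumptions:

```python
# One validity predicate per field; a record is accepted only if all pass.
FIELD_CHECKS = {
    "NAME":  lambda v: v.isalpha() and v.isupper(),   # uppercase letters only
    "STATE": lambda v: len(v) == 2 and v.isupper(),   # two-letter state code
    "ZIP":   lambda v: v.isdigit() and len(v) == 5,   # five-digit postal code
}

def validate(record: dict) -> list[str]:
    """Return the names of fields whose values fail their checks."""
    return [f for f, check in FIELD_CHECKS.items()
            if f in record and not check(record[f])]

print(validate({"NAME": "ADAMS", "STATE": "OH", "ZIP": "43210"}))   # []
print(validate({"NAME": "Adams", "STATE": "Ohio", "ZIP": "4321"}))  # ['NAME', 'STATE', 'ZIP']
```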
A second integrity action is provided by access control. To see why, consider life
without databases. Data files may contain data from several sources, and redundant
data may be stored in several different places.
For example, a student's home address may be stored in many different campus files:
at class registration, for dining hall privileges, at the bookstore, and in the financial
aid office. Indeed, the student may not even be aware that each separate office has the
address on file. If the student moves from one residence to another, each of the
separate files requires correction.
Without a database, there are several risks to the data's integrity. First, at a given time,
there could be some data files with the old address (they have not yet been updated)
and some simultaneously with the new address (they have already been updated).
 Second, there is always the possibility that the data fields were changed incorrectly,
again leading to files with incorrect information.
 Third, there may be files of which the student is unaware, so he or she does not know
to notify the file owner about updating the address information. These problems are
solved by databases. They enable collection and control of this data at one central
source, ensuring the student and users of having the correct address.


Auditability

For some applications it may be desirable to generate an audit record of all access (read
or write) to a database. Such a record can help to maintain the database's integrity, or at
least to discover after the fact who had affected which values and when. A second
advantage, as we see later, is that users can access protected data incrementally; that is,
no single access reveals protected data, but a set of sequential accesses viewed together
reveals the data, much like discovering the clues in a detective novel. In this case, an
audit trail can identify which clues a user has already been given, as a guide to whether to
tell the user more. (Accessing a record or an element without transferring to the user the
data received is called the pass-through problem.)
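The audit record described above can be sketched as a log entry written on every element access, recording who touched what and when; all names here are illustrative, not from the text:

```python
import datetime

audit_log = []  # each entry: (timestamp, user, table, field)

def read_element(user: str, table: str, field: str, data: dict):
    """Return one element, but always write an audit entry first."""
    stamp = datetime.datetime.now().isoformat()
    audit_log.append((stamp, user, table, field))
    return data[field]

row = {"NAME": "ADAMS", "ZIP": "43210"}
read_element("alice", "addresses", "ZIP", row)
print(audit_log[-1][1:])  # ('alice', 'addresses', 'ZIP')
```

Reviewing the log later shows which elements a given user has already seen, which is exactly the incremental-disclosure bookkeeping the text describes.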

Access Control

Databases are often separated logically by user access privileges. For example, all users
can be granted access to general data, but only the personnel department can obtain salary
data and only the marketing department can obtain sales data. Databases are very useful
because they centralize the storage and maintenance of data. Limited access is both a
responsibility and a benefit of this centralization.

User Authentication

The DBMS can require rigorous user authentication. For example, a DBMS might insist
that a user pass both specific password and time-of-day checks. This authentication
supplements the authentication performed by the operating system. Typically, the DBMS
runs as an application program on top of the operating system. This system design means
that there is no trusted path from the DBMS to the operating system, so the DBMS must
be suspicious of any data it receives, including user authentication. Thus, the DBMS is
forced to do its own authentication.
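The password-plus-time-of-day check mentioned above can be sketched as two independent conditions, both of which must hold before the DBMS accepts a login. The names and the working-hours window are hypothetical; real DBMS authentication would also hash passwords rather than compare them directly:

```python
def dbms_login(user: str, password: str, hour: int,
               passwords: dict, allowed_hours: dict) -> bool:
    """Accept a login only if the password AND the time-of-day check pass."""
    if passwords.get(user) != password:        # first check: password
        return False
    start, end = allowed_hours.get(user, (0, 0))
    return start <= hour < end                 # second check: time of day

passwords = {"clerk": "hunter2"}
allowed_hours = {"clerk": (9, 17)}             # this user may log in 9:00-16:59

print(dbms_login("clerk", "hunter2", 10, passwords, allowed_hours))  # True
print(dbms_login("clerk", "hunter2", 22, passwords, allowed_hours))  # False
```

Because the DBMS cannot trust what it receives from the operating system, it applies both checks itself, exactly as the text notes.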


Availability

A DBMS has aspects of both a program and a system. It is a program that uses other
hardware and software resources, yet to many users it is the only application run. Users
often take the DBMS for granted, employing it as an essential tool with which to perform
particular tasks. But when the system is not available (busy serving other users, or down to
be repaired or upgraded), the users are very aware of a DBMS's unavailability.

For example, two users may request the same record, and the DBMS must arbitrate; one
user is bound to be denied access for a while. Or the DBMS may withhold unprotected
data to avoid revealing protected data, leaving the requesting user unhappy. We examine
these problems in more detail later in this chapter. Problems like these result in high
availability requirements for a DBMS.


The three aspects of computer security (integrity, confidentiality, and availability) clearly
relate to database management systems. As we have described, integrity applies to the
individual elements of a database as well as to the database as a whole. Thus, integrity is
a major concern in the design of database management systems.

Reliability and Integrity

Databases amalgamate data from many sources, and users expect a DBMS to provide access to
the data in a reliable way. When software engineers say that software has reliability, they mean
that the software runs for very long periods of time without failing.

Users certainly expect a DBMS to be reliable, since the data usually are key to business or
organizational needs. Moreover, users entrust their data to a DBMS and rightly expect it to
protect the data from loss or damage. Concerns for reliability and integrity are general security
issues, but they are more apparent with databases.

A DBMS guards against loss or damage in several ways, which we study in this section.
However, the controls we consider are not absolute: No control can prevent an authorized user
from inadvertently entering an acceptable but incorrect value.

Database concerns about reliability and integrity can be viewed from three dimensions:

 Database integrity: concern that the database as a whole is protected against damage,
as from the failure of a disk drive or the corruption of the master database index. These
concerns are addressed by operating system integrity controls and recovery procedures.
 Element integrity: concern that the value of a specific data element is written or
changed only by authorized users. Proper access controls protect a database from
corruption by unauthorized users.
 Element accuracy: concern that only correct values are written into the elements of a
database. Checks on the values of elements can help prevent insertion of improper
values. Also, constraint conditions can detect incorrect values.