
Programming Methodology

IT 215
Integrative Programming and Technologies

Programming Methodology
This chapter discusses issues pertinent to producing high-quality software in general and, in particular, issues that arise primarily in producing software designed to resist attack. Both application and system-level software are considered. Although there are differences between how the two are produced, the similarities dominate the differences.

Of the several factors that govern the difficulty of producing software, one of the most
important is the level of quality to be attained, as indicated by the extent to which the
software performs according to expectations. High-quality software does what it is
supposed to do almost all the time, even when its users make mistakes. For the
purposes of this study, software is classified according to four levels of quality:
exploratory, production quality, critical, and secure. These levels differ according to
what the software is expected to do (its functionality) and the complexity of the
conditions under which the software is expected to be used (environmental
complexity).

Exploratory software does not have to work; the chief issue is speed of development. Although it has uses, exploratory software is not discussed further in this chapter.

Production-quality software needs to work reasonably well most of the time, and its
failures should have limited effects. For example, we expect our spreadsheets to work
most of the time but are willing to put up with occasional crashes, and even with
occasional loss of data. We are not willing to put up with incorrect results.

Critical software needs to work very well almost all of the time, and certain kinds of
failures must be avoided. Critical software is used in trusted and safety-critical
applications, for example, medical instruments, where failure of the software can have
catastrophic results.

In producing critical software the primary worries are minimizing bugs in the software
and ensuring reasonable behavior when non-malicious users do unexpected things or
when unexpected combinations of external events occur. Producing critical software
presents the same problems as producing production-quality software, but because
the cost of failure is higher, the standards must be higher. In producing critical
software the goal is to decrease risk, not to decrease cost.

Secure software is critical software that needs to be resistant to attack. Producing it presents the same problems as does producing critical software, plus some others. One of the key problems is analyzing the kinds of attacks that the software must be designed to resist. The level and kind of threat have a significant impact on how difficult the software is to produce. Issues to consider include the following:

• To what do potential attackers have access? The spectrum ranges from the keyboard of an automated teller machine to the object code of an operational system.
• Who are the attackers and what resources do they have? The spectrum ranges from a bored graduate student, to a malicious insider, to a knowledgeable, well-funded, highly motivated organization (e.g., a private or national intelligence-gathering organization).
• How much and what has to be protected?
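
Answers to these questions can be captured in a structured form early in a project and reviewed alongside the requirements. The following is a minimal Python sketch of how such a threat profile might be recorded; the field names and example values are illustrative assumptions, not part of the text above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ThreatProfile:
    """Illustrative record of the threat-analysis questions above."""
    attacker_access: List[str]      # what attackers can reach (keyboard, network, object code, ...)
    attacker_classes: List[str]     # who the attackers are and what resources they have
    assets_to_protect: List[str]    # how much and what has to be protected

# Hypothetical example for an automated teller machine.
atm_profile = ThreatProfile(
    attacker_access=["public keyboard and card slot", "maintenance port"],
    attacker_classes=["opportunistic user", "malicious insider", "funded criminal organization"],
    assets_to_protect=["customer PINs", "account balances", "audit records"],
)

if __name__ == "__main__":
    print(atm_profile)
```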

In addition, the developers of secure software cannot adopt the various probabilistic
measures of quality that developers of other software often can. For many
applications, it is quite reasonable to tolerate a flaw that is rarely exposed and to
assume that its having occurred once does not increase the likelihood that it will occur
again (Gray, 1987; Adams, 1984). It is also reasonable to assume that logically
independent failures will be statistically independent and not happen in concert. In
contrast, a security vulnerability, once discovered, will be rapidly disseminated among
a community of attackers and can be expected to be exploited on a regular basis until
it is fixed.
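
The contrast can be made concrete with a back-of-the-envelope calculation. The numbers below are invented purely for illustration and do not come from the cited studies: under the independence assumption a rare flaw yields a small, roughly constant expected failure rate, while a disclosed vulnerability is triggered on every attack attempt until it is fixed.

```python
# Illustrative numbers only; nothing here is taken from Gray (1987) or Adams (1984).
p_fault = 1e-6                 # assumed chance that one transaction exposes a rare flaw
transactions_per_day = 100_000

# Independent-failure model: past failures do not make future ones more likely,
# so the expected number of failures per day stays small and roughly constant.
expected_random_failures = p_fault * transactions_per_day      # 0.1 per day

# Adversarial model: once the vulnerability is disseminated, attackers trigger it
# deliberately, so every attempt exercises the flaw until it is patched.
attack_attempts_per_day = 500
expected_exploits = attack_attempts_per_day                    # 500 per day

print(f"random failures/day:   {expected_random_failures}")
print(f"successful exploits/day: {expected_exploits}")
```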

In principle, software might be secure without being production quality, but in practice it is not. The most obvious problem is that software that fails frequently will result in denial of service.
Such software also opens the door to less obvious security breaches. A perpetrator of
an intelligence-grade attack (see Appendix E, "High-grade Threats") wants to avoid
alerting the administrators of the target system while conducting an attack; a system
with numerous low-level vulnerabilities provides a rich source of false alarms and
diversions that can be used to cover up the actual attack or to provide windows of
opportunity (e.g., when the system is recovering from a crash) for the subversion of
hardware or software.

Low-quality software also invites attack by insiders, because it requires that administrative personnel be granted excessive access privileges so that data can be repaired manually after software or system failures.

Another important factor contributing to the difficulty of producing software is the set of
performance constraints the software is intended to meet, that is, constraints on the
resources (usually memory or time) the software is permitted to consume during use.
At one extreme, there may be no limit on the size of the software, and denial of
service is considered acceptable. At the other extreme is software that must fit into
limited memory and meet "hard" real-time constraints. It has been said that writing
extremely efficient programs is an exercise in logical brinkmanship. Working on the
brink increases the probability of faults and vulnerabilities. If one must work on the
brink, the goals of the software should be scaled back to compensate.

Perhaps the most important factor influencing the difficulty of producing software is
size. Producing big systems, for example, a global communication system, is
qualitatively different from producing small ones. The reasons for this are well
documented (NRC, 1989a).

In summary, simultaneous growth in level of quality, performance constraints, functionality, and environmental complexity results in a corresponding dramatic increase in the cost and risk of producing, and the risk of using, the software. There is no technology available to avoid this, nor is research likely to provide us with such a technology in the foreseeable future. If the highest possible quality is demanded for secure software, something else must give. Because security cannot be attained without quality and the environment in which a system is to run is usually hard to control, typically one must either remove performance constraints (perhaps by allocating extra resources) or reduce the intended functionality.

SOFTWARE IS MORE THAN CODE

Good software is more than good code. It must be accompanied by high-quality documentation, including a requirements document, a design document, carefully written specifications for key modules, test plans, a maintenance plan, and so on.

Of particular importance for secure software is a guide to operations. More comprehensive than a user's manual, such a guide often calls for operational procedures that must be undertaken by people other than users of the software, for example, by system administrators. In evaluating software one must consider what it will do if the instructions in the guide to operations are followed, and what it will do if they are not. One must also evaluate how likely it is that capable people with good intentions will succeed in following the procedures laid down in the guide to operations.

For critical and secure software, a guide to operations is particularly important. In combination with the software it must provide for the following:

• Auditing: What information is to be collected, how it is to be collected, and what is to be done with it must be described. Those who have penetrated secure software cannot be expected to file a bug report, and so mechanisms for detecting such penetrations are needed. Reduction of raw audit data to intelligible form remains a complex and expensive process; a plan for secure software must include resources for the development of systems to reduce and display audit data (a minimal reduction sketch follows this list).
• Recovery: Producing fault-free software of significant size is nearly impossible. Therefore one must plan for dealing with faults, for example, by using carefully designed recovery procedures that are exercised on a regular basis. When they are needed, it is important that such procedures function properly and that those who will be using them are familiar with their operation. If at all possible, manual procedures should be in place to maintain operations in the absence of computing. This requires weighing the risk of hardware or software crashes against the benefits when everything works.

• Operation in an emergency mode: There may be provisions for bypassing some security features in times of extreme emergency. For example, procedures may exist that permit "breaking in" to protected data in critical circumstances such as incapacitation or dismissal of employees with special authorizations. However, the system design should treat such emergencies explicitly, as part of the set of events that must be managed by security controls.
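
As noted in the auditing item above, reducing raw audit data to intelligible form is itself a development task. The Python sketch below is purely illustrative: the record format, field names, threshold, and file name are assumptions made for the example, not requirements from the text. It condenses a stream of raw audit records into per-user counts of failed logins and flags users who exceed a threshold.

```python
import csv
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5   # assumed alerting threshold, chosen for illustration

def reduce_audit_log(path):
    """Condense raw audit records into per-user failed-login counts.

    Assumes a CSV log with 'user', 'event', and 'outcome' columns; a real
    audit format would be defined by the guide to operations.
    """
    failures = Counter()
    with open(path, newline="") as f:
        for record in csv.DictReader(f):
            if record["event"] == "login" and record["outcome"] == "failure":
                failures[record["user"]] += 1
    return failures

def flag_suspicious(failures):
    """Return users whose failed-login count reaches the threshold."""
    return {user: n for user, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD}

if __name__ == "__main__":
    counts = reduce_audit_log("audit.csv")      # hypothetical audit file
    for user, n in flag_suspicious(counts).items():
        print(f"{user}: {n} failed logins")
```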
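
The emergency-mode item argues that "breaking in" to protected data should be handled as an explicit, controlled event rather than an ad hoc bypass. The sketch below shows one possible shape for such a control, with all names and rules invented for illustration: emergency access requires a distinct co-signing authorizer and is always written to the audit trail, whether or not it is granted.

```python
from datetime import datetime, timezone

# Hypothetical set of roles allowed to co-sign an emergency access request.
EMERGENCY_AUTHORIZERS = {"security_officer", "duty_manager"}

def emergency_access(requester, authorizer, resource, reason, audit_log):
    """Grant break-glass access only with a valid co-signer, and always audit it.

    This is a sketch of the design idea (treat the emergency as a managed
    security event), not a real access-control API.
    """
    granted = authorizer in EMERGENCY_AUTHORIZERS and authorizer != requester
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "event": "emergency_access",
        "requester": requester,
        "authorizer": authorizer,
        "resource": resource,
        "reason": reason,
        "granted": granted,
    })
    return granted

if __name__ == "__main__":
    log = []
    ok = emergency_access("alice", "duty_manager", "payroll_records",
                          "employee incapacitated", log)
    print(ok, log[-1]["event"])
```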

Software should be delivered with some evidence that it meets its specifications
(assurance). For noncritical software the good reputation of the vendor may be
enough. Critical software should be accompanied by documentation describing the
analysis the software has been subjected to. For critical software there must be no
doubt about what configurations the conclusions of testing and validation apply to and
no doubt that what is delivered is what was validated. Secure software should be
accompanied by instructions and tools that make it possible to do continuing quality
assurance in the field.
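
One concrete way to support the requirement that what is delivered is what was validated, and to enable continuing quality assurance in the field, is to ship a manifest of cryptographic hashes with the validated build and to re-check it on the installed system. The sketch below assumes a plain manifest format invented for this example (one "digest path" pair per line); real delivery would typically rely on digitally signed manifests.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path):
    """Compare installed files against the digests recorded at validation time.

    Assumes each manifest line is '<hex digest> <relative path>'; this format
    is an assumption made for the sketch.
    """
    mismatches = []
    base = Path(manifest_path).parent
    for line in Path(manifest_path).read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        if sha256_of(base / name) != expected:
            mismatches.append(name)
    return mismatches

if __name__ == "__main__":
    bad = verify_manifest("release.manifest")   # hypothetical manifest file
    print("OK" if not bad else f"Modified since validation: {bad}")
```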

Software delivered without assurance evidence may provide only illusory security. A
system that is manifestly nonsecure will generally inspire caution on the part of its
users; a system that provides illusory security will inspire trust and then betray that
trust when attacked.

Arrangements should be made to have the assurance evidence reviewed by a team of experts who are individually and organizationally independent from the development team.

Software should be delivered with a plan for its maintenance and enhancement. This
plan should outline how various expected changes might be accomplished and should
also make clear what kinds of changes might seriously compromise the software.

Secure software must be developed under a security plan. The plan should address
what elements of the software are to be kept confidential, how to manage trusted
distribution of software changes, and how authorized users can be notified of newly
discovered vulnerabilities without having that knowledge fall into the wrong hands.
