IT 215
Integrative Programming and Technologies
Programming Methodology
This chapter discusses issues pertinent to producing high-quality software in general and,
in particular, issues pertinent to producing software designed to resist attack.
Both application and system-level software are considered. Although there are
differences between how the two are produced, the similarities dominate the
differences.
Of the several factors that govern the difficulty of producing software, one of the most
important is the level of quality to be attained, as indicated by the extent to which the
software performs according to expectations. High-quality software does what it is
supposed to do almost all the time, even when its users make mistakes. For the
purposes of this study, software is classified according to four levels of quality:
exploratory, production quality, critical, and secure. These levels differ according to
what the software is expected to do (its functionality) and the complexity of the
conditions under which the software is expected to be used (environmental
complexity).
Exploratory software does not have to work; the chief issue is speed of development.
Although it has uses, exploratory software is not discussed in this report.
Production-quality software needs to work reasonably well most of the time, and its
failures should have limited effects. For example, we expect our spreadsheets to work
most of the time but are willing to put up with occasional crashes, and even with
occasional loss of data. We are not willing to put up with incorrect results.
Critical software needs to work very well almost all of the time, and certain kinds of
failures must be avoided. Critical software is used in trusted and safety-critical
applications, for example, medical instruments, where failure of the software can have
catastrophic results.
In producing critical software the primary worries are minimizing bugs in the software
and ensuring reasonable behavior when non-malicious users do unexpected things or
when unexpected combinations of external events occur. Producing critical software
presents the same problems as producing production-quality software, but because
the cost of failure is higher, the standards must be higher. In producing critical
software the goal is to decrease risk, not to decrease cost.
- To what do potential attackers have access? The spectrum ranges from the
  keyboard of an automated teller machine to the object code of an operational
  system.
- Who are the attackers and what resources do they have? The spectrum ranges
  from a bored graduate student, to a malicious insider, to a knowledgeable,
  well-funded, highly motivated organization (e.g., a private or national
  intelligence-gathering organization).
- How much and what has to be protected?
In addition, the developers of secure software cannot adopt the various probabilistic
measures of quality that developers of other software often can. For many
applications, it is quite reasonable to tolerate a flaw that is rarely exposed and to
assume that its having occurred once does not increase the likelihood that it will occur
again (Gray, 1987; Adams, 1984). It is also reasonable to assume that logically
independent failures will be statistically independent and not happen in concert. In
contrast, a security vulnerability, once discovered, will be rapidly disseminated among
a community of attackers and can be expected to be exploited on a regular basis until
it is fixed.
In principle, software can be secure without being production quality. The most
obvious problem is that software that fails frequently will result in denial of service.
Such software also opens the door to less obvious security breaches. A perpetrator of
an intelligence-grade attack (see Appendix E, "High-grade Threats") wants to avoid
alerting the administrators of the target system while conducting an attack; a system
with numerous low-level vulnerabilities provides a rich source of false alarms and
diversions that can be used to cover up the actual attack or to provide windows of
opportunity (e.g., when the system is recovering from a crash) for the subversion of
hardware or software.
Another important factor contributing to the difficulty of producing software is the set of
performance constraints the software is intended to meet, that is, constraints on the
resources (usually memory or time) the software is permitted to consume during use.
At one extreme, there may be no limit on the size of the software, and denial of
service is considered acceptable. At the other extreme is software that must fit into
limited memory and meet "hard" real-time constraints. It has been said that writing
extremely efficient programs is an exercise in logical brinkmanship. Working on the
brink increases the probability of faults and vulnerabilities. If one must work on the
brink, the goals of the software should be scaled back to compensate.
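Scaling back the goals to stay off the brink can be expressed directly in code: do as much work as the budget allows and stop cleanly rather than overrun. A minimal sketch, with a hypothetical budget and a stand-in for the real computation:

```python
import time

BUDGET_SECONDS = 0.010  # hypothetical 10 ms budget per work unit

def process_within_budget(items):
    """Process as many items as the time budget allows, then stop
    gracefully -- scaling back the goal instead of working "on the
    brink" of a hard real-time constraint."""
    start = time.monotonic()
    results = []
    for item in items:
        if time.monotonic() - start > BUDGET_SECONDS:
            break  # degrade gracefully rather than miss the deadline
        results.append(item * 2)  # stand-in for the real computation
    return results
```

A monotonic clock is used so that wall-clock adjustments cannot make the deadline check misfire.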
Perhaps the most important factor influencing the difficulty of producing software is
size. Producing big systems, for example, a global communication system, is
qualitatively different from producing small ones. The reasons for this are well
documented (NRC, 1989a).
they are not. One must also evaluate how likely it is that capable people with good
intentions will succeed in following the procedures laid down in the guide to
operations.
are needed, it is important that such procedures function properly and that
those who will be using them are familiar with their operation. If at all possible
manual procedures should be in place to maintain operations in the absence of
computing. This requires evaluating the risk of hardware or software crashes
versus the benefits when everything works.
Software should be delivered with some evidence that it meets its specifications
(assurance). For noncritical software the good reputation of the vendor may be
enough. Critical software should be accompanied by documentation describing the
analysis the software has been subjected to. For critical software there must be no
doubt about what configurations the conclusions of testing and validation apply to and
no doubt that what is delivered is what was validated. Secure software should be
accompanied by instructions and tools that make it possible to do continuing quality
assurance in the field.
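One simple form of continuing quality assurance in the field is checking the installed files against a manifest of known-good digests shipped with the software. The file names and contents below are hypothetical:

```python
import hashlib

# Hypothetical manifest delivered with the software: file name -> SHA-256.
MANIFEST = {
    "auth.cfg": hashlib.sha256(b"allow=admins\n").hexdigest(),
}

def field_check(delivered, manifest):
    """Return the names of files whose installed contents no longer match
    the validated configuration recorded in the manifest."""
    return [
        name
        for name, expected in manifest.items()
        if hashlib.sha256(delivered.get(name, b"")).hexdigest() != expected
    ]
```

A real deployment would also have to protect the manifest itself, for example with a digital signature, since an attacker who can alter files can otherwise alter the manifest too.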
Software delivered without assurance evidence may provide only illusory security. A
system that is manifestly nonsecure will generally inspire caution on the part of its
users; a system that provides illusory security will inspire trust and then betray that
trust when attacked.
Software should be delivered with a plan for its maintenance and enhancement. This
plan should outline how various expected changes might be accomplished and should
also make clear what kinds of changes might seriously compromise the software.
Secure software must be developed under a security plan. The plan should address
what elements of the software are to be kept confidential, how to manage trusted
distribution of software changes, and how authorized users can be notified of newly
discovered vulnerabilities without having that knowledge fall into the wrong hands.