Liability Law and Software Development

Liability law with respect to computer software has important implications: potential lawsuits act both as a deterrent to software development and as an incentive for the creation of reliable software. While other areas of tort law have existed for generations, tort law with respect to computer software is a new area of law. It is important for computer scientists to play a role in the policy-making process of this field as new laws and precedents are developed. Our project attempts to address the fundamental issues in the area of software liability, as well as provide a comprehensive research resource for others interested in pursuing these issues. Among the issues we attempt to address:

Should software companies be liable for software failures?
What is the definition of negligence with respect to software development?
Do existing laws account for the unique characteristics of software engineering?

What ethical responsibilities do software engineers have to users?
How should the terms "appropriate use" and "appropriate care" be defined in software liability law?
What influence have corporations had in the development of existing law?
Is software a tangible product? Tangibility is an important concept in products liability law. In 1991, the dicta of a 9th Circuit Court of Appeals opinion (actually dealing with a book about mushrooms) hinted that software could be considered a tangible product in certain circumstances.
What is the concept of information liability? Should software companies be liable for information generated by their software?
Would increased liability stifle the quick release of new software? What would be the economic ramifications of an increased level of liability? Would such a change discourage the development of software for medical and other high-risk fields?
Is a computer program a product or a service?
If an expert system using artificial intelligence gives bad advice, should the programmers be held liable? Should programmers be considered professionals and thus subject to malpractice suits?
What risks should users naturally assume when using software?
Because computer programming is extremely complex, should the doctrine of strict liability apply to programmers in order to induce them to write bug-free software? Is such software even possible?

The goal of our web site is to provide a comprehensive research center for issues of software liability law. Our web site will cover existing laws, precedents, and doctrines. Furthermore, it will contain normative assessments of the existing body of law as well as policy proposals for the future of software liability. In our normative inquiry, we will look comparatively at other areas of liability law and address the fundamental differences between software failures and other actions giving rise to liability. We will also address these issues from an ethical standpoint.

SOFTWARE LIABILITY

Cem Kaner, J.D., Ph.D. Copyright 1997. All rights reserved. In press, Software QA.

Note: This paper is based on talks of mine at recent meetings of the Association for Software Quality's Software Division and the Pacific Northwest Software Quality Conference. The talks surveyed software liability in general and focused on a few specific issues. I've edited the talks significantly because they restate some material that you've seen in this magazine already. If you don't have those articles handy, check my website.

W. Edwards Deming is one of my heroes. I enjoyed and agreed with almost everything that I've read of his. But in one respect, I flatly disagree. In Out of the Crisis, Deming named seven "deadly diseases." Number 7 was "Excessive costs of liability, swelled by lawyers that work on contingency fees" (Deming, 1986, p. 98). Software quality is often abysmally low and we are facing serious customer dissatisfaction in the mass market (see Kaner, 1997e; Kaner & Pels, 1997). Software publishers routinely ship products with known defects, sometimes very serious defects. The law puts pressure on companies who don't care about their customers. It empowers quality advocates. I became a lawyer because I think that liability for bad quality is part of the cure, not one of the diseases.

Life is more complex than either viewpoint. It's useful to think of the civil liability system as a societal risk management system. It reflects a complex set of tradeoffs and it evolves constantly.

Risk Management and Liability

Let's think about risk. Suppose you buy a product or service and something bad happens. Somebody gets hurt or loses money. Who should pay? How much? Why?

The Fault-Based Approach

If the product was defective, or the service was performed incompetently, there's natural justice in saying that the seller should pay. This is a fault-based approach to liability.

First problem with the fault-based approach: How do we define "defective"? The word is surprisingly slippery. I ventured a definition for serious defects in Kaner (1997a). I think the approach works, but it runs several pages. It explores several relationships between buyers and sellers, and it still leaves a lot of room for judgment and argument. More recently, I was asked to come up with a relatively short definition of "defect" (serious or not). After several rounds of discussion, I'm stalled. I won't explore the nuances of the definitional discussions here. Instead, here's a simplification that makes the legal problem clear.

Suppose we define a defect as failure to meet the specification. What happens when the program does something obviously bad (crashes your hard disk) that was never covered in the spec? Surely, the law shouldn't classify this as non-defective. On the other hand, suppose we define a defect as any aspect of the program that makes it unfit for use. Unfit for whom? What use? When? And what is it about the program that makes it unfit? If a customer specified an impossibly complex user interface, and the seller built a program that matches that spec, is it the seller's fault if the program is too hard to use?

Under one definition, the law will sometimes fail to compensate buyers of products that are genuinely, seriously defective. Under the other definition, the law will sometimes force sellers to pay buyers even when the product is not defective at all.

This is a classic problem in classification systems. A decision rule that is less complex than the situation being classified will make mistakes. Sometimes buyers will lose when they should win. Sometimes sellers will lose. Both sides will have great stories of unfairness to print in the newspapers.

Second problem with the fault-based approach: We don't know how to define "competence" when we're talking about software development or software testing services. I'll come back to this later, in the discussion of professional liability.

Third problem: I don't know how to make a software product that has zero defects. Despite results that show we can dramatically reduce the number of coding errors (Ferguson, Humphrey, Khajenoori, Macke, & Matuya, 1997; Humphrey, 1997), I don't think anyone else knows how to make zero-defect software either. If we create too much pressure on software developers to make perfect products, they'll all go bankrupt and the industry will go away. In sum, finding fault has appeal, but it has its limits as a basis for liability.

Technological Risk Management

It makes sense to put legal pressure on companies to improve their products because they can do it cheaply relative to customers. In a mass-market product, a defect that occasionally results in lost data might not cost individual customers very much, but if you total up all the costs, it would probably cost the company a great deal less to fix the bug than the total cost to customers. (Among lawyers, this is called the principle of the "least cost avoider": you put the burden of managing a risk on the person who can manage it most cheaply.) I call this technological risk management--because we are managing the risk of losses by driving technology. Losses and lawsuits are less likely when companies make better products, advertise them more honestly, and warn customers of potential hazards and potential failures more effectively.
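The least-cost-avoider comparison is, at bottom, simple arithmetic: fix the bug if the vendor's one-time cost is less than the aggregate loss its customers would bear. A minimal sketch, with entirely hypothetical figures and function names:

```python
# Least-cost-avoider sketch: compare the vendor's one-time cost of fixing
# a defect against the aggregate cost its customers bear by living with it.
# All figures below are hypothetical, for illustration only.

def total_customer_cost(num_affected_users: int, loss_per_user: float) -> float:
    """Aggregate social cost of shipping the defect."""
    return num_affected_users * loss_per_user

def vendor_should_fix(fix_cost: float, num_affected_users: int,
                      loss_per_user: float) -> bool:
    """True when the vendor is the least-cost avoider for this defect."""
    return fix_cost < total_customer_cost(num_affected_users, loss_per_user)

# A defect that occasionally loses data: $30 of lost work for each of
# 50,000 affected users dwarfs a $200,000 engineering fix.
print(vendor_should_fix(fix_cost=200_000,
                        num_affected_users=50_000,
                        loss_per_user=30.0))  # True: 200,000 < 1,500,000
```

The asymmetry the text describes falls out immediately: a loss that is trivial per customer can still make the vendor the cheapest place to absorb the risk once it is summed across the installed base.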
At our current stage of development in the software industry, I think that an emphasis on technological risk management is entirely appropriate. We save too many nickels in ways that we know will cost our customers dollars. However, we should understand that the technological approach is paternalistic. The legal system decides for you what risks companies and customers can take. This drives schedules and costs and the range of products that are available on the market.

The technological approach makes obvious sense when we're dealing with products like the Pinto, which had a deadly defect that could have been fixed for $11 per car. It's entirely appropriate whenever manufacturers will spend significantly less to fix a problem than the social cost of that problem. But over time, this approach gets pushed at less and less severe problems. In the extreme, we risk ending up with a system that imposes huge direct and indirect taxes on us all in order to develop products that will protect fools from their own recklessness. As we move in that direction, many companies and individuals find the system intolerable. Starting in the 1970s we were hearing calls for "tort reform" and a release from "oppressive regulations." The alternative is commercial risk management: let buyers and sellers make their own deals and keep the government out of it.

Commercial Risk Management

This is supposed to be a free country. It should be possible for a buyer to say to a seller, "Please, make the product sooner, cheaper, and less reliable. I promise not to sue you." The commercial risk management strategy involves allocation of risk (agreeing on who pays) rather than reduction of risk. Sellers rely on contracts and laws that make it harder for customers to sue sellers. Customers and sellers rely on insurance contracts to provide compensation when the seller or customer negligently makes or uses the product in a way that causes harm or loss. This approach respects the freedom of people to make their own deals, without much government interference. The government role in the commercial model is to determine what agreement the parties made, and then to enforce it.
(Among lawyers, this is called the principle of "freedom of contract.") The commercial approach makes perfect sense in deals between people or businesses who actually have the power to negotiate. But over time, the principle stretches into contracts that are entirely non-negotiated. A consumer buying a Microsoft product doesn't have bargaining power.

Think about the effect of laws that ratify the shrink-wrapped "license agreements" that come with mass-market products. In mass-market agreements, we already see clauses that avoid all warranties and that eliminate liability even for significant losses caused by a defect that the publisher knew about when it shipped the product. Some of these "agreements" even ban customers from publishing magazine reviews without the permission of the publisher (such as this one, which I got with Viruscan: "The customer will not publish reviews of the product without prior written consent from McAfee."). Unless there is intense quality-related competition, the extreme effect of a commercial risk management strategy is a system that ensures that the more powerful person or corporation in the contract is protected if the quality is bad but that is otherwise indifferent to quality. Without intense quality-driven competition, some companies will slide into lower quality products over time. Eventually this strategy is corporate suicide, but for a few years it can be very profitable. Ultimately, the response to this type of system is customer anger and a push for laws and regulations that are based on notions of fault or of technological risk management.

Legal Risk Management Strategies are in Flux

Technological and commercial risk management strategies are both valid and important in modern technology-related commerce. But both present characteristic problems. The legal policy pendulum swings between them (and other approaches).

Theories of Software Liability

Software quality advocates sometimes argue that we should require companies to follow reasonable product development processes. This is a technological risk management approach, which is obvious to us because that's what we do for a living: use technology to improve products and reduce risks. A "sound process" requirement fits within some legal theories, but not others.
There are several different theories under which we can be sued. Different ones are more or less important, depending on the legal climate (i.e., depending on which legal approach to risk management is dominant at the moment). A legal "theory" is not like a scientific theory. I don't know why we use the word "theory." A legal theory is a definition of the key grounds of a lawsuit. For example, if you sue someone under a negligence theory:

You must prove that (a) the person owed you a duty of care; (b) the person breached the duty; and (c) the breach was the cause of (d) some harm to you or your property. You must convince the jury that (a), (b), (c), and (d) are all more likely to be true than false. Ties go to the defendant.

If you prove your case, you are entitled to compensation for the full value of your injury or of the damage to your property. If the jury decides there is clear and convincing evidence that the defendant acted fraudulently, oppressively, maliciously, or outrageously, you can also collect punitive damages. These are to punish the defendant, not to compensate you. The amount of damages should be enough to get the defendant's attention but not enough to put it out of business. Punitive damages are rarely awarded in lawsuits--in a short course for plaintiffs' lawyers on estimating the value of a case, we were told to expect to win punitive damages in about 2% of the negligence cases that we try, and to expect small punitive damage awards in most of those cases. If a jury does assess major punitive damages, the trial court, an appellate court, and sometimes the state's supreme court all review the amount and justification of the award.

Every lawsuit is brought under a specifically stated theory, such as negligence, breach of contract, breach of warranty, etc. I provided detailed definitions of most of these theories, with examples, in Kaner, Falk, & Nguyen (1993). You can also find some of the court cases at my web site, along with more recent discussion of the law--check the course notes for my tutorial at Quality Week, 1997.

Quality Cost Analysis

Any legal theory that involves "reasonable efforts" or "reasonable measures" should have you thinking about two things:

We aren't just looking at a product in this case. The process used to develop the product is at least as important as the end result.

The judge or jury is going to do a cost/benefit analysis if this type of case ever comes to trial.

We are, or should be, familiar with cost/benefit thinking, under the name of "Quality Cost Analysis" (Gryna, 1988; Campanella, 1990). Quality cost analysis looks at four ways that a company spends money on quality: prevention, appraisal (looking for problems), internal failure costs (the company's own losses from defects, such as wasted time, lost work, and the cost of fixing bugs), and external failure costs (the cost of coping with the customer's responses to defects, such as the costs of tech support calls, refunds, lost sales, and the cost of shipping replacement products).

Note that the external failure costs that we consider as costs of quality reflect the company's costs, not the customer's. Previously (Kaner, 1996a), I pointed out that this approach sets us up to ignore the losses that our products cause our customers. That's not good, because if our customers' losses are significantly worse than our external failure costs, we risk being blindsided by unexpected litigation. The law cares more about the customer's losses. A manufacturer's conduct is unreasonable if it would have cost less to prevent or detect and fix a defect than it costs customers to cope with it (Kaner, 1996b).

Cost of quality analysis was developed by Juran as a persuasive technique. "Because the main language of [corporate management] was money, there emerged the concept of studying quality-related costs as a means of communication between the quality staff departments and the company managers" (Gryna, 1988, p. 42). You can use this approach without ever developing complex cost-tracking systems. Whenever a product has a significant problem, of any kind, it will cost the company money. Figure out which department is most likely to lose the most money as a result of this problem and ask the head of that department how serious the problem is. How much will it cost?
If she thinks it's important, bring her to the next product development meeting and have her explain how expensive this problem really is. There is no expensive cost-tracking system in place, but there's a lot of persuasive benefit here.

When the company's cost of external failures is less than the cost a customer will face, don't use these numbers to try to persuade management to fix the problem. The numbers aren't persuasive and they almost certainly underestimate the long-term risks (litigation and lost sales). Instead, come up with some scenarios, examples that illustrate just how serious the problem will be for some customers. Make management envision the problem itself and the extent to which it will make customers unhappy or angry.

Survey of the Theories

Here's a quick look at theories under which a software developer can be sued:

Criminal: The government sues the company for committing a criminal act, such as intentionally loading a virus on the customer's computer or otherwise tampering with the computer. For example, several years ago, Vault Corp. announced plans to release a new copy protection program that would unleash a worm that would gradually destroy your system if you illegally (in the program's opinion) copied the protected program (see Kaner et al., 1993 for details). That was probably not illegal at the time, but today such a program probably would be.

Intentional Tort: The company did something very bad, such as deliberately loading a virus onto your computer, or stealing from you, or telling false, insulting stories about you. The government might be able to sue the company under a criminal theory. You sue the company for damages (money, to be paid to you).

Strict Liability: A product defect caused a personal injury or property damage. In this case, we look at the product's defectiveness and behavior, without thinking about the reasonableness of the process used to develop the product. No punitive damages are available. For example, suppose that the program controlling a car's brakes crashes and soon thereafter, so does the car. In a strict liability suit, we would have to prove that the program was defective, and that the defect caused the accident. In a negligence suit, we also have to ask whether the manufacturer made a reasonable effort to make the brakes safe.

Negligence: The company has a duty to take reasonable measures to make the product safe (no personal injuries or property damage), or no more unsafe than a reasonable customer would expect (skis are unsafe, but skiers understand the risk and want to buy skis anyway). Under the right circumstances, a company can non-negligently leave a product in a dangerous condition. Proof of negligence can be quite difficult. No single factor will prove that a company was non-negligent.
A court will consider several factors in trying to understand the level of care taken by the company (Kaner, 1996b). Kaner, Falk, & Nguyen (1993) list several factors that will probably be considered in a software negligence case, such as:

Did the company have actual knowledge of the problem? (No one likes harm caused by known defects.)
How carefully did the company perform its safety analysis? (The wrong answer is, "Safety analysis? What safety analysis?")
How well designed is the program for error handling? (The law expects safety under conditions of foreseeable misuse. 90% of industrial accidents are caused by "user errors." Manufacturers have to deal with this, not whine about dumb users.)
How does the company handle customer complaints? (Jurors will sympathize with mistreated customers.)
What level of coverage was achieved during testing? (There are so many different types of coverage. Using judgment is more important than slavishly achieving 100% on one type of coverage. Kaner, 1996b.)
Did the product design and development follow industry standards? (In negligence, failure to follow a standard is relevant if and only if the plaintiff can show that this failure caused the harm.) It's worth asking whether current industry standards, such as IEEE standards, are appropriate references. Do they realistically describe what the industry does or should do?

What is the company's bug tracking methodology? (Does it have one?)
Did the company use a consistent methodology? (If not, how does it make tradeoffs?)
What is the company's actual level of intensity or depth of testing? (Did it make a serious effort to find errors?)
What is its test plan? (How did the company develop it? How do they know it's good? Did they follow it?)
What does the documentation say about the product? (Does it warn people of risks? Does it lead them into unsafe uses of the product?)
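The coverage point above (that judgment matters more than hitting 100% on one coverage metric) can be illustrated with a minimal, hypothetical sketch: a test suite can execute every statement and branch of a function and still ratify a defect, because coverage measures which code ran, not whether the result was right.

```python
# A division helper with a latent defect: it "handles" a zero divisor by
# returning 0.0, silently corrupting downstream calculations.
# The function and tests are hypothetical, for illustration only.
def safe_ratio(numerator: float, denominator: float) -> float:
    if denominator == 0:
        return 0.0          # defect: masks the error instead of reporting it
    return numerator / denominator

# These two checks exercise both branches, achieving 100% statement AND
# branch coverage, yet the second one simply ratifies the defect: it asserts
# the buggy behavior instead of questioning whether 0.0 is a sane answer.
assert safe_ratio(6, 3) == 2.0
assert safe_ratio(6, 0) == 0.0   # passes, but blesses the wrong behavior

print("all tests passed")
```

A jury asking "what level of coverage was achieved?" would hear "100%" here, which is exactly why the list treats coverage numbers as one factor among many rather than proof of care.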

Fraud: The company made a statement of fact (something you can prove true or false) to you. It knew when it made the statement that it was false, but it wanted you to make an economic decision (such as buying a product or not returning it) on the basis of that statement. You reasonably relied on the statement, made the desired decision, and then discovered that it was false. In the case of Ritchie Enterprises v. Honeywell Bull (1990), the court ruled that a customer can sue for fraud if technical support staff convinced him to keep trying to make a bad product work (perhaps talking him out of a refund) by intentionally deceiving him after the sale.

Negligent Misrepresentation: Like fraud, except that the company made a mistake. It didn't know that the statement was false when it made it. If the company had taken the care in fact-finding that a reasonable company under the circumstances would have taken, it would not have made the mistake. Burroughs Corp. v. Hall Affiliates, Inc. (1982) is an example of this type of case in a sales situation. You have to establish that the company owed you a duty to take care to avoid accidentally misinforming you. This is often very difficult to prove, especially if the company made a false statement about something that it was not selling to you. However, independent test labs have been successfully sued by end customers for negligently certifying the safety of a product (Kaner, 1996b).

Unfair or Deceptive Trade Practice: The company engaged in activities that have been prohibited under the unfair and deceptive practices act that your state has adopted. For example, false advertising, or falsely stating or implying that the product has been endorsed by someone, or falsely claiming that a new upgrade will be released in a few weeks, are all deceptive trade practices.
You may have to show that the company has repeatedly engaged in this misconduct--the theory may require evidence of a "practice," a pattern of misconduct, not just one bad event. You can receive a refund and repayment of your attorney fees. Some states allow additional statutory damages. For example, in Texas, a successful plaintiff can collect up to three times her actual damages. This is the law under which Compaq has recently been sued (Johnson v. Compaq, 1997). According to the plaintiff, Compaq sold a computer with a warranty that stated that Compaq would not charge for calls about software defects. He claims that Compaq's support staff told him that he had to pay up to $3 per minute for all calls about software, whether they involved defects or not. Based on his observation of the AOL/Compaq message board and on other sources, the plaintiff alleged that Compaq was also refusing to provide free support to other people when they called about genuine software defects.

Unfair Competition: The definition varies across states. For example, in California anyone can file an unfair competition suit, so long as they can prove that the company engaged in a pattern of illegal activity. In some other states, only a competitor can sue, and only for some narrower list of bad acts. In Princeton Graphics v. NEC (1990), Princeton successfully sued NEC for claiming that its Multisync monitor (the first one) was VGA-compatible. Princeton and NEC had the same problems with VGA, and Princeton chose not to advertise itself as VGA-compatible.

FTC Enforcement: The Federal Trade Commission can sue companies for unfair or deceptive trade practices, unfair competition, or other anticompetitive acts. Most defendants settle these cases without admitting liability. Recent FTC cases have been settled against Apple Computer (In the Matter of Apple Computer, 1996) and against the vendor of a Windows 95 optimization program that allegedly didn't provide any performance or storage benefits (In the Matter of Syncronys Software, 1996). Occasionally, the FTC sues over vaporware announcements that appear to be intended to mislead customers.

Regulatory: The Food and Drug Administration, for example, requires that certain types of software be developed and tested with what the FDA considers an appropriate level of care. My understanding is that development process is important to the FDA.

Breach of Contract: In a software transaction, the contract specifies obligations that two or more persons have to each other. (In legal terms, a "person" includes humans, corporations, and other entities that can take legally binding actions.) Contracts for non-customized products are currently governed under Article 2 (Law of Sales) of the Uniform Commercial Code (UCC). Contracts for services, including custom software, are covered under a more general law of contracts.

Liability for defective software

1 May 01

Establishing a duty of care creates difficulties in pinpointing liability when defective software causes injury

by Maurice Jamieson

Increasingly, software is used in situations where failure may result in death or injury. In these situations the software is often described as safety-critical software. Where such software is used and an accident occurs, it is proper that the law should intervene in an attempt to afford some form of redress to the injured party or the relatives of a deceased person. Safety-critical software is used in specialised situations such as flight control in the aviation industry and by the medical profession in carrying out diagnostic tasks. Nowadays software will have an impact on the average citizen's life, whether by choice or otherwise. However, for most individuals, as the plane leaves the airport, typical concerns usually centre on the exchange rate and not the computer software controlling the flight. These concerns of course change when the plane falls from the sky without explanation. What can the individual do when faced with such occurrences? In such a dramatic scenario there is unlikely to be a contractual relationship between the individual affected by the defective software and the software developer. In this article I shall attempt to examine how liability may accordingly arise.

Setting the Scene: the Computer as the Villain

The legal concept of liability has traditionally included as a base element the concept of culpa, or fault. Humans are marvellous at attributing blame in any given situation; the converse of this phenomenon being that they are equally good at passing the buck. When things go wrong and where a computer is involved, more often than not the initial response is to blame the computer. Whilst solving the puzzle following a calamity is never straightforward, the first line of attack is often the technology used in a situation that has gone wrong. An example of this pattern of behaviour can be seen following the introduction of computerised stock-indexed arbitraging in the New York financial markets back in 1987. On 23rd January 1987 the Dow Jones Industrial Average rose 64 points, only to fall 114 points in a period of 70 minutes, causing widespread panic. Black Monday, as it became known, was indeed a black day for many investors, large and small alike, who sustained heavy financial losses. The response of the authorities in the face of the crisis was to suspend the computerised trading immediately. In considering this event, Stevens argues that all computerised program trading did was increase market efficiency and, perhaps more significantly, get the market to where it was going faster, without necessarily determining its direction. However, the decision to suspend computerised trading was taken without a full investigation of all the relevant facts. As Stevens himself puts it:

"Every disaster needs a villain. In the securities markets of 1987, program trading played that role. Computerised stock-indexed arbitrage has been singled out as the source of a number of market ills."1

Of course, in the situation outlined above the losses incurred would be economic in nature, which is not to say that such losses do not have real and human consequences for those who suffer them; but, as now appears to be the case in both Britain and America, there can be no recovery where the losses are purely economic unless there has been reliance in accordance with the Hedley Byrne principle.2 Turning from the purely financial implications of software failure, other failures have rightly generated considerable public concern.
In particular the report of the inquiry into the London Ambulance Service3 highlighted the human consequences when software failed to perform as it was expected to. The situation becomes all the more problematic when it is remembered that nobody expects software to work first time. Software by its very nature is extremely complex, consisting as it does of line upon line of code. It might be thought that the simple solution to this problem would be to check all software thoroughly. That of course begs the question as to what actually constitutes a thorough check. Even where software developers check each line of code or test the validity of every statement in the code the reality is that such testing will not ensure that the code is error free.
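The claim that even line-by-line checking cannot make software error-free has a simple combinatorial basis: exhaustive testing is physically impossible for all but trivial programs. A back-of-the-envelope sketch (the throughput figure is a hypothetical, optimistic assumption):

```python
# Why "check every line" cannot mean "check every case": even a trivial
# function taking two 32-bit integer inputs has 2**64 possible input pairs.
# Assume an (optimistic, hypothetical) harness running a billion tests/second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

input_space = 2 ** 64                  # all pairs of 32-bit inputs
tests_per_second = 10 ** 9
years_needed = input_space / (tests_per_second * SECONDS_PER_YEAR)

print(f"{years_needed:.0f} years")     # roughly 585 years of continuous testing
```

Since exhaustive testing is out of reach even at these speeds, testers must sample the input space, and every sampling strategy leaves some behaviours unexercised, which is precisely the gap the courts have had to acknowledge.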

Kaner4 has identified at least 110 tests that could be carried out in respect of a piece of software, none of which would necessarily guarantee that the software would be error-free. Indeed, the courts in England have explicitly accepted that there is no such thing as error-free software.5 Furthermore, the hardware on which software runs can also be temperamental. It can be affected by temperature changes or power fluctuations, or failure could occur simply due to wear and tear. There are any number of other factors which could affect software, such as incompatibility with hardware, and when all the factors are taken together one could easily justify the assertion that establishing the precise cause of software failure is no easy task. This is of significance given the necessity for the person claiming loss to prove the actual cause of that loss.

Principle, Pragmatism or Reliance

In situations where an individual is killed or injured as a consequence of the failure of a piece of software there is no reason why, in principle, recovery of damages should not be possible. It would however be foolhardy for the individual to assume that recovering damages is by any means straightforward. In order for an individual to establish a case against a software developer at common law, it would be necessary to show that the person making the claim was owed a duty of care by the software developer, that there was a breach of that duty, that the loss sustained was a direct result of the breach of duty, and that the loss was of a kind for which recovery of damages would be allowed. In determining whether or not a duty of care exists between parties, the starting point has traditionally been to consider the factual relationship between the parties and whether or not that relationship gives rise to a duty of care.
The neighbour principle espoused by Lord Atkin in the seminal case of Donoghue v Stevenson6 requires the individual, in this case the software developer, to have in his contemplation those persons who may be affected by his acts or omissions. By way of example, it is obvious that the developer of a program used to operate a diagnostic tool or therapeutic device is aware that the ultimate consumer (for want of a better word) will be a member of the public, although the identity of that person may be unknown to the developer. The case of the ill-fated Therac-25, a machine controlled by computer software used to provide radiation therapy for cancer patients, highlights the problem. Prior to the development of radiotherapy treatment, radical invasive surgery was the only means of treating various cancers. Not only was this extremely traumatic for patients, but often it was unsuccessful.

With the development of radiotherapy treatment the requirement for surgery has been greatly reduced. However, between 1985 and 1987 six patients were seriously injured or killed as a result of receiving excessive radiation doses attributable to the Therac-25 and its defective software. Commenting on the tragedy, Liversedge stated that: "Although technology has progressed to the point where many tasks may be handled by our silicon based friends, too much faith in the infallibility of software will always result in disaster."7 In considering the question of to whom a duty of care is owed, the law will have to develop with a degree of flexibility as new problems emerge. The question for the courts will be whether this is done on an incremental basis or by the application of principles. If the former approach is adopted, a further question for consideration is whether it is just and reasonable to impose a duty where none has existed before. In my view, however, the absence of any direct precedent should not prevent the recovery of damages where there has been negligence, although I acknowledge that the theoretical problems that can arise are far from straightforward. The broad approach adumbrated in Donoghue is, according to Rowland8, appropriate in cases where there is a direct link between the damage and the negligently designed software, such as might cause intensive care equipment to fail. However, she argues that in other cases the manner in which damage results cannot provide the test for establishing a duty of care. She cites as an example the situation where passengers board a train with varying degrees of knowledge as to whether the signalling system is Y2K compliant. Whilst she does not directly answer the questions she poses, the problems highlighted are interesting when one looks at the extremes.
For instance, should it make any difference to the outcome of claims by passengers injured in a train accident that one passenger travelled only on the basis that the computer-controlled signalling system was certified as safe, while the other passenger did not apply his mind to the question of safety at all? It is tempting to assume that liability would arise in the former scenario on the basis of reliance, but that begs the question of whether liability arises in the latter scenario at all. If, as she implies, reliance is the key to establishing liability, then it would not, as there has been no reliance. That result would be harsh indeed, avoiding as it does the issue of the developer's failure to produce a signalling system that functioned properly. More often than not an individual may have little appreciation that his environment is being controlled by computer software. It could be argued that, because of the specialist knowledge of the computer programmer, he or she assumes responsibility for the individual ultimately affected by the software. The idea that reliance could give rise to a duty of care first came to prominence in the Hedley Byrne case. The basis of the concept is that a special relationship exists between someone providing expert information or an expert service and the person relying upon it, thereby creating a duty of care.

In the context of computer programming, the concept, while superficially attractive, ignores the artificiality of such a proposition, given that it is highly unlikely that an individual receiving, for example, radiotherapy treatment will have any idea of the role software plays in the treatment process. Furthermore, in attempting to establish a duty of care based on reliance, the House of Lords have been at pains to stress that the assumption of responsibility to undertake a specific task is not of itself evidence of the existence of a duty of care to a particular class of persons.9

Standard of Care

It might come as a shock to some, and no great surprise to others, that there is no accepted standard as to what constitutes good practice amongst software developers. That is not to say that there are no codes of practice and other guidelines, merely that no one code prevails over the others. The legal consequences of this situation can be illustrated by the following example. Two software houses are given the task of producing a program for the same application; both do so, but the code produced by each house is different. One application fails while the other runs as expected. It is tempting to assume that the failed application was negligently designed simply because it did not work. However, such an assumption is not merited without further inquiry. In order to establish that the program was produced negligently it would be necessary to demonstrate that no reasonable man would have produced such a program. In the absence of a universal standard, proving such a failing could be something of a tall order. An increasing emphasis on standards is of considerable importance, given that it is by this route that an assessment of whether or not a design is reasonable will become possible. Making such an assessment should not be an arbitrary judgment but one based on the objective application of principles to established facts.
At present, in the absence of a uniform approach, one is faced with the spectre of competing experts endeavouring to justify their preferred approaches. In dealing with what can be an inexact science, the problem that could emerge is that it may prove difficult to distinguish between experts who hold differing opinions, and the courts in both England and Scotland have made it clear that it is wrong to side with one expert on the basis of preference alone.10 In America, standards have been introduced for accrediting educational programs in computer science technology. The Computer Science Accreditation Commission (CSAC), established by the Computer Science Accreditation Board (CSAB), oversees these standards. Although such a move towards standardisation has positive benefits, not least that such standards should reflect best practice, it would be naïve to assume that this will make litigation any easier. Perhaps only in those cases where it was clear that no regard whatsoever had been paid to any of the existing standards would there be a chance of establishing liability.

It should also be borne in mind that in the age of the Internet software will undoubtedly travel, and may be produced subject to different standards in many jurisdictions. In determining which standards could be regarded as the best, the courts could be faced with a multiplicity of choices. That, of course, is good news for the expert, and as one case suggests some of them are more than happy to participate in costly bun fights.11

Causation

Even where it is possible to establish a duty of care and a breach of that duty, the individual may not be home and dry. It is necessary to show that the damage sustained was actually caused by the breach of duty. That is not as straightforward as it might sound when one pauses to consider the complexities of any computer system. The architecture of a computer is such that an expert must be instructed to confirm that the computer program complained of was the source of the defect giving rise to the damage. A computer program may be incompatible with a particular operating system and therefore fail to work as expected. In these circumstances it would be difficult to establish liability on that basis alone unless the programmer had given an assurance that compatibility would not be an issue. If ISO 9127, one of the main standards, were to become the accepted standard then of course the computer programmer would be bound to provide information as to the appropriate uses of a particular program. In that event it may be easier to establish liability for a failure to give appropriate advice, with the curious consequence that the question of whether the program was itself defective in design would be relegated to one of secondary importance. A more difficult question arises in relation to the use of machines in areas such as medical electronics.
Returning by way of example to the ill-fated Therac-25: while it is clear that the machine caused harm, in those cases where there were fatalities it would be difficult to maintain that the machine caused death, as it is highly probable that the cancers, if left untreated, would have led to death in any event. Equally, where an ambulance was late and a young girl died from a severe asthma attack, it could not be said that the cause of death was the failure of the computer-controlled telephone system, even though, had the system worked, the chances of survival would have been greatly increased.

Let the Developer Defend

As can be seen from the above, establishing liability at common law in the context of defectively designed software is no mean feat. With the passing of the Consumer Protection Act 1987, following the EC Directive (85/374/EEC), the concept of product liability has been part of UK law for over a decade. The effect of the Directive and the Act is to create liability without fault on the part of the producer of a defective product that causes death, personal injury, or any loss of or damage to property, including land. Part of the rationale of the Directive is that, as between the consumer and the producer, it is the latter that is better able to bear the costs of accidents than the individuals affected by the software. Both the Directive and the Act provide defences to an allegation that a product is defective, so that liability is not absolute. However, given that under the Directive and the Act an individual does not have to prove fault on the part of the producer, the onus of proof shifts from the consumer to the producer, requiring the producer to make out one of the available defences. In relation to computer programs, the immediate and much debated question is whether computer technology can be categorised as a product. Undoubtedly hardware will be covered by the Directive, no doubt providing a modicum of comfort to those working in close proximity to killer robots. The difficulty arises in relation to software. The arguments against software being classified as a product are essentially threefold. Firstly, software is not moveable and therefore is not a product. Secondly, software is information as opposed to a product, although some obiter comments on the status of software suggest that information forms an integral part of a product.12 Thirdly, software development is a service, and consequently the legislation does not apply. Against that, it can be argued that software should be treated like electricity, which is itself specifically covered by the Directive in Article 2 and the Act in Section 1(2), and that software is essentially compiled from energy, which is material in the scientific sense.
Ultimately it could be argued that placing an over-legalistic definition on the word product ignores the reality that we now live in an information society in which, for social and economic purposes, information is treated as a product, and that the law should also recognise this. Furthermore, following the St Albans case it could be argued that the trend is now firmly towards categorising software as a product, and indeed the European Commission has expressed the view that software should in fact be so categorised.13

Conclusion

How the courts will deal with some of the problems highlighted above remains to be seen, as at least within the UK there has been little litigation in this area. If, as Rowland suggests, pockets of liability emerge covering specific areas of human activity, such as the computer industry, it is likely that this will happen only over a long period of time. Equally, relying on general principles, which has to a certain extent become unfashionable, gives no greater guarantee that the law will become settled more quickly.

Parliament could intervene to afford consumers greater rights and to clarify once and for all the status of software. However, it should be borne in mind that any expansion of liability on the part of producers of software may have adverse consequences in respect of insurance, making comprehensive liability coverage more difficult to obtain. For smaller companies such coverage may not be an option at all, forcing them out of the market. Whatever the future holds in this brave new world, perhaps the only thing that can be said with any certainty is that it will undoubtedly be exciting.

Maurice Jamieson is an advocate.


My straw-man proposal for a software liability law has three clauses:

Clause 0. Consult the criminal code to see if any intentionally caused damage is already covered. I am trying to impose a civil liability only for unintentionally caused damage, whether a result of sloppy coding, insufficient testing, cost cutting, incomplete documentation, or just plain incompetence. Intentionally inflicted damage is a criminal matter, and most countries already have laws on the books for this.

Clause 1. If you deliver software with complete and buildable source code and a license that allows disabling any functionality or code by the licensee, then your liability is limited to a refund. This clause addresses how to avoid liability: license your users to inspect and chop off any and all bits of your software they do not trust or do not want to run, and make it practical for them to do so. The word "disabling" is chosen very carefully. This clause grants no permission to change or modify how the program works, only to disable the parts of it that the licensee does not want. There is also no requirement that the licensee actually look at the source code, only that it was received. All other copyrights are still yours to control, and your license can contain any language and restriction you care to include, leaving the situation unchanged with respect to hardware locking, confidentiality, secrets, software piracy, magic numbers, etc. Free and open source software is obviously covered by this clause, and the proposal does not change its legal situation in any way.

Clause 2. In any other case, you are liable for whatever damage your software causes when used normally. If you do not want to accept the information sharing in Clause 1, you fall under Clause 2 and have to live with normal product liability, just as manufacturers of cars, blenders, chainsaws, and hot coffee do. How dire the consequences and what constitutes "used normally" are for the legislature and courts to decide.

An example: a salesperson from one of your longtime vendors visits and delivers new product documentation on a USB key. You plug the USB key into your computer and copy the files onto it. This is "used normally" and should never cause your computer to become part of a botnet, transmit your credit card number to Elbonia, or send all your design documents to the vendor.

The majority of today's commercial software would fall under Clause 2. To give software houses a reasonable chance to clean up their acts and/or to fall under Clause 1, a sunrise period would make sense, but it should be no longer than five years, as the law would be aimed at solving a serious computer security problem. And that is it, really. Software houses will deliver quality and back it up with product liability guarantees, or their customers will endeavor to protect themselves.
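Clause 1's distinction between disabling functionality and modifying it can be sketched in code. The following is a minimal, hypothetical illustration (the feature names and the ENABLED_FEATURES switch table are invented for this example): the licensee may flip a switch off, but gains no right to alter how an enabled feature behaves.

```python
# Hypothetical sketch of Clause 1: the licensee may disable features
# they distrust, but the code of each feature itself stays untouched.

ENABLED_FEATURES = {
    "report_generation": True,
    "usage_telemetry": False,  # licensee distrusts this and switches it off
}

def run_feature(name, action):
    """Run a feature's action only if the licensee has left it enabled."""
    if not ENABLED_FEATURES.get(name, False):
        return f"{name}: disabled by licensee"
    return action()

# The disabled feature never runs; the enabled one runs unmodified.
assert run_feature("usage_telemetry", lambda: "telemetry sent") \
    == "usage_telemetry: disabled by licensee"
assert run_feature("report_generation", lambda: "report built") \
    == "report built"
```

In practice the switch would more likely be a build-time option over the delivered source tree than a runtime table, but the legal point is the same: the licensee chops off what they do not want, and everything else runs exactly as shipped.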

There is little doubt that my proposal would increase software quality and computer security in the long run, which is exactly what the current situation calls for. It is also pretty certain that there will be some short-term nasty surprises when badly written source code gets a wider audience. When that happens, it is important to remember that today the good guys have neither the technical nor the legal ability to know if they should even be worried, as the only people with source-code access are the software houses and the criminals. The software houses would yell bloody murder if any legislator were to introduce a bill proposing these stipulations, and any pundits and lobbyists they could afford would spew their dire predictions that "this law will mean the end of computing as we all know it!" To which my considered answer would be: "Yes, please! That was exactly the idea."