Risk and Safety Analysis of Nuclear Systems
Ebook · 815 pages · 8 hours


About this ebook

The book has been developed in conjunction with NERS 462, a course offered every year to seniors and graduate students in the University of Michigan NERS program.

The first half of the book covers the principles of risk analysis, the techniques used to develop and update a reliability data base, the reliability of multi-component systems, Markov methods used to analyze the unavailability of systems with repairs, fault trees and event trees used in probabilistic risk assessments (PRAs), and failure modes of systems. All of this material is general enough that it could be used in non-nuclear applications, although there is an emphasis placed on the analysis of nuclear systems.

The second half of the book covers the safety analysis of nuclear energy systems, an analysis of major accidents and incidents that occurred in commercial nuclear plants, applications of PRA techniques to the safety analysis of nuclear power plants (focusing on a major PRA study for five nuclear power plants), practical PRA examples, and emerging techniques in the structure of dynamic event trees and fault trees that can provide a more realistic representation of complex sequences of events. The book concludes with a discussion on passive safety features of advanced nuclear energy systems under development and approaches taken for risk-informed regulations for nuclear plants.

Language: English
Publisher: Wiley
Release date: Jan 12, 2012
ISBN: 9781118043455

    Risk and Safety Analysis of Nuclear Systems - John C. Lee

    CHAPTER 1

    RISK AND SAFETY OF ENGINEERED SYSTEMS

    1.1 RISK AND ITS PERCEPTION AND ACCEPTANCE

    Risk and safety concerns for the engineering of nuclear power plants are somewhat analogous to the opposing yin and yang energies that represent the ancient Chinese understanding of how things work. The outer circle represents everything, while the yin (black) and yang (white) shapes within the circle represent the interaction of two energies that cause everything to happen. As such, risk (yin) is the performance downside of a nuclear system and safety (yang) is what happens when the system performs its intended function. In the Chinese interpretation of yin-yang, there is a continuous movement between the two energies, just as there is when a nuclear system operates. Just as the Chinese have observed, risk and safety are intertwined, even though the engineering principles for each have a different emphasis.

    Risk is the combination of the predicted frequency of an undesired initiating event and the predicted damage such an event might cause if the ensuing follow-up events were to occur. In essence, it combines the concepts of "How often?" with "How bad?"

    In this book we are concerned with probabilistic risk assessment (PRA) and the methods used to analyze the safety of nuclear systems. For this reason we are investigating risks that might occur to society as a whole, rather than risks that might be incurred by an individual in society. A PRA typically models events that only very rarely occur. Hence it differs from an investigation in which there is an operating history from which to predict risks. Although most of the licensing and regulations governing the current generation of operating nuclear power plants are based on deterministic assessment of the consequences of postulated accidents and operating conditions, there is an increasing emphasis placed on implementing PRA techniques in licensing decisions. With this perspective, the terminology probabilistic safety analysis often is used to represent the safety assessment that combines the elements of both probabilistic and deterministic methods. Thus, the dichotomy between risk and safety has become somewhat fuzzy in recent years.

    When thinking about a complex technology it is not difficult to conjecture a series of questions: What if undesired event A happened? Or if undesired event B happened? Or if undesired event C happened? … Answering such questions scientifically requires clearly defining what the consequences of events A, B, C, … are, but an often overlooked aspect is the frequency of occurrence of such events. Risk analysis techniques are needed to assess both the frequency and the consequence of an undesired event, while safety analysis techniques are aimed at preventing the occurrence of such events.

    Perception of the risk associated with any human activity, including that associated with the utilization of man-made systems, is quite subjective. This can be illustrated by the way the news media typically report on airplane crashes involving the injury or death of even a few passengers and crew, while the annual casualties of 40,000 to 50,000 individuals due to automobile accidents in the United States do not receive special coverage. The distinction between perhaps a few hundred casualties resulting from airplane accidents and a much larger number of deaths from automobile accidents in the United States every year can be characterized in two ways: (a) voluntary versus involuntary risks and (b) distributed versus acute or catastrophic risks. We consider the risk associated with traveling in private automobiles a voluntary one that is under our personal control, in contrast to the involuntary risk involved with commercial airline flights in which we do not have control. Similarly, an automobile-related accident typically does not result in a large number of casualties so the risk is distributed, while a catastrophic airline crash can result in a large number of casualties.

    Acceptability of risk is often inversely proportional to the consequences. In the risk space shown in Fig. 1.1, the abscissa represents the consequences or dreadfulness and the ordinate the observability or familiarity of the hazard. Events in the upper right quadrant, entailing significant consequences and significant unfamiliarity or limited observability, generally require strict regulations. In the case of postulated accidents in nuclear systems, the consequences could be significant although the probability of the accidents is predicted to be very small. Thus, the traditional method of risk evaluation is often subject to public skepticism, despite the extensive efforts made in implementing scientific principles in the design, construction, and operation of nuclear systems.

    Figure 1.1 Risk space illustrating acceptability of risks as a function of consequences and observability.

    Source: Reprinted with permission from [Mor93]. Copyright © 1993 Scientific American, a division of Nature America, Inc.

    Risks are incurred in everyday life by everyone, of course. So what distinguishes such risks from those from the operation of a nuclear power plant, for example? An important distinction in whether an individual accepts a risk is whether he or she has control over the risk to be incurred. Other factors are important as well and have been summarized in Table 1.1.

    Table 1.1 Factors Affecting Acceptance of Risks

    The use of nuclear power for the generation of electricity has the disadvantage of many factors working against its acceptance. By its very nature, a probabilistic analysis of any system can never yield a result for a risk known with certainty. The potential for delayed effects and the irreversible consequences following a catastrophic event at a radioactive waste disposal site, for example, are factors that work against the siting of such facilities. Public concerns over the potential for delayed climate changes arising from the buildup of CO2 can also be understood in the context of the Table 1.1 factors.

    One might think that the response of the public to modern medical imaging methods might provide a clue for the eventual acceptance of nuclear power. Widespread acceptance of x-rays shows that a radiation technology can be tolerated once its use becomes familiar, its benefits clear, and its practitioners trusted. In spite of the two most widely publicized nuclear power accidents, at Three Mile Island Unit 2 and Chernobyl, the nuclear power safety record is outstanding in light of the benefits obtained from the electricity generated without CO2 emissions. Yet several decades have passed, with countries like France generating upward of 80% of their electricity from nuclear power, and the acceptance of nuclear power in the United States has remained lower than most engineers with a nuclear background could have imagined at those earlier times.

    It can be argued that unfavorable media publicity has played a role in the lack of acceptance of nuclear power by a large fraction of the U.S. population. An outstanding example of this is what transpired after the Three Mile Island nuclear reactor accident in March 1979 in which some radioactive gas was released a couple of days after the accident, but not enough to cause any dose above background levels to local residents. Indeed, for 18 years the Pennsylvania Department of Health maintained a registry of more than 30,000 people who lived within 5 miles of Three Mile Island at the time of the accident, but that was discontinued in mid 1997 without any evidence of unusual health trends in the area. Yet an explosion at the Union Carbide India pesticide plant in Bhopal in December 1984 released toxic gas in the form of methyl isocyanate and its reaction products over the city. The estimated mortality of this accident is believed to have been between 2500 and 5000 people, with up to 200,000 injured [Meh90]. But such an accident was largely ignored by the media in comparison to the publicity surrounding the Three Mile Island accident. One reason for this disparity was that the consequences of the Bhopal accident were known within days while the effects of the Three Mile Island accident took years to assess.

    Industrial facilities such as nuclear reactors and chemical plants have been studied, by the techniques presented in this book, for their risks to the public at large. But such investigations are entirely different from what people do in making their own individual decisions about risks in their everyday lives. Because ordinary citizens do not have direct control over how their electricity is generated or various products are manufactured, the operation of such industrial facilities must be held to a probability of undesired consequences much lower than the risks from everyday occurrences.

    For common risks leading to unnatural human deaths incurred involuntarily by an individual, for example, the probability of occurrence loosely can be bounded between 10−6/yr and 10−2/yr. The lower bound is set by the risk of death from natural events, such as lightning, flood, earthquakes, insect and snake bites, etc. (about one death per year per million people), and the upper bound arises from the death rate from disease (about one death per year per 100 people). The lower bound is not, however, appropriate for a large-scale commercial facility like a nuclear power or chemical plant.

    One can argue that the risks from the operation of plant A need not necessarily be as small as those from operation of plant B if one perceives the benefits of the products produced by plant A to be greater than those from plant B. An early comparative risk assessment of technologies for the generation of electricity was performed by Inhaber [Inh82]. He investigated the production of electricity in MWe-yr from 11 different sources: coal, oil, nuclear, natural gas, hydroelectric, wind, methanol, solar space heating, solar thermal, solar photovoltaic, and ocean thermal sources (but did not consider ocean tidal, for example). One innovative feature of the study was to put the technologies for each power source on equal footing by also assigning the percentage of risk (for energy backup during the predicted downtime for maintenance, etc.) from other electric generating plants in Canada. (Thus, interruptible power sources were assigned risks not only from their own performance.) Besides the risks from activities related to electricity production, operation, maintenance, and energy backup, his risk estimates included emissions from acquisition of materials to build the plant, energy storage, transportation, and the gathering and handling of fuels, as well as the acquisition of materials and equipment. For nuclear systems he also included estimates of the risks of waste management along with possible catastrophic reactor accidents. The consequences per MWe-yr included public deaths and occupational deaths, and also public and occupational lost person-days. Although the numerical values of an early version of the study and some of the techniques were questioned [Hol79a, Hol79b], the study showed that risks from nonconventional energy sources can be as high as or even higher than those of some conventional sources, and that the relative rankings of the 11 systems were not strongly influenced by whether the energy backup was included in the calculations [Inh82].
Figure 1.2 shows that in such energy comparison studies, normalized to equal amounts of uninterruptible power generation, it is important to account for the risks from producing the materials used to construct the energy production system.

    Figure 1.2 Proportions of risk by source, normalized to the sum of the occupational and public risks for each source.

    Source: Reprinted with permission from [Inh82]. Copyright © 1982 Gordon and Breach.

    1.2 OVERVIEW OF RISK AND SAFETY ANALYSIS

    The objective of a risk analysis is to predict what might happen, beginning with an undesired initiating event and following that event in time to predict an undesired consequence if the active and passive safety systems fail to perform their intended function. In other words, risk involves the occurrence or potential occurrence of some accident sequence involving one or more events, together with the ensuing consequences from such an accident. On the other hand, the objective of a safety analysis is to design the components of a system so that undesired initiating events do not occur or, if they do, that backup systems intervene in the progression of following events to prevent or mitigate any undesired consequences.

    What types of undesired initiating events can occur? There are postulated events such as a large pipe break caused by an earthquake or an electrical short in a safety system caused by a local fire. Indeed, the latter part of this book focuses on some of these initiating events. What happens after such an initiating event? Because of the inherent potential danger of an uncontrolled release of ionizing radiation, nuclear plants have backup safety systems to reduce the undesired consequences from the undesired accident sequence. The failure of such backup systems turns an initiating event into a sequence of failure events that forms the accident sequence.

    What kind of consequences are of concern? The loss of human life immediately comes to mind, such as in the catastrophic Chernobyl accident with the loss of life to plant workers and citizens in the surrounding countryside. Of course there also are differences in the length of time people lived following that event: some died within hours, while others died later from prolonged exposure to radionuclides that affected their thyroid glands, for example.

    The potential consequences from a release of the radiological source contained in a typical nuclear power plant (NPP) pose a unique safety concern. An estimate of the inventory in an operating reactor may be obtained based on a simple physical analysis with the approximation that every fission event is a binary fission yielding two fission products and that every fission product (FP) undergoes one radioactive decay in an equilibrium operating condition. With this simple but reasonable approximation, together with a recoverable energy of 200 MeV released per fission, 1 W of thermal energy generated requires 3.1 × 10¹⁰ fissions/s, which then produces approximately 2 Ci of radioactivity. Thus, a 1.0-GWe nuclear power plant with a thermal efficiency of 33% produces 3.0 GWt, which then yields an equilibrium radioactivity of 6000 MCi (6 BCi). This simple estimate compares favorably with a total radioactivity inventory of 5.6 BCi, including 3.8 BCi of FP radioactivity, in the tally of radioactivity in Appendix A for a 3.56-GWt reactor [Rah84]. The decay of this huge inventory of radionuclides accounts for about 6 to 7% of the total power in a typical operating plant, and this power must be dissipated after the chain reaction is terminated. (These two features provide distinctly different risk and safety concerns from a coal-fired plant.) For this reason Appendix A also contains an introduction to the fission product inventory and decay heat in a nuclear reactor, health effects of radiation exposure, and current regulations governing radiation exposure.
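    The arithmetic of this estimate is simple enough to verify with a short script. The sketch below uses the constants stated in the text and carries the text's rounded value of about 2 Ci per watt; all variable names are illustrative:

```python
MEV_TO_J = 1.602e-13           # joules per MeV
E_FISSION_MEV = 200.0          # recoverable energy per fission (MeV)
CURIE = 3.7e10                 # decays per second in one curie

# Fission rate required to sustain 1 W of thermal power
fissions_per_s_per_watt = 1.0 / (E_FISSION_MEV * MEV_TO_J)   # ~3.1e10 /s

# Binary fission: two fission products, each assumed to undergo one
# decay per fission at equilibrium -> two decays per fission
ci_per_watt = 2.0 * fissions_per_s_per_watt / CURIE          # ~1.7, rounded to ~2 in the text

# 1.0-GWe plant at 33% thermal efficiency -> ~3.0 GWt
p_thermal_w = 1.0e9 / 0.33

# Equilibrium activity using the text's rounded 2 Ci per watt
activity_mci = 2.0 * p_thermal_w / 1.0e6                     # ~6000 MCi (6 BCi)
```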

    As engineers analyzing a nuclear system we have a moral obligation to develop the safest possible system. By performing a risk analysis we may obtain sufficient information to redesign it and lower the probability of the occurrence of an accident or mitigate the ensuing consequences. Alternatively, it may be possible to show that the probability of occurrence of a postulated accident is so small that the potential accident can be neglected compared to other potential accidents.

    A PRA can provide either a point estimate or an interval estimate of an event. Although the point estimate may give the best single value for the probability of occurrence, it does not give any indication of the uncertainty in the estimate. An interval estimate, on the other hand, is useful because the width of the interval conveys how well, in a probabilistic sense, the point estimate is known. Confidence limits for an estimated parameter provide a point estimate combined with functions of the standard errors. Hence both estimates are useful.
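    As a minimal illustration of the point-versus-interval distinction, the sketch below estimates a demand failure probability from counted failures; the normal-approximation interval is one common choice, and the scenario and numbers are invented for illustration:

```python
import math

def failure_prob_estimate(failures, demands, z=1.96):
    """Point estimate and an approximate 95% confidence interval for a
    demand failure probability (normal approximation to the binomial)."""
    p_hat = failures / demands                        # point estimate
    se = math.sqrt(p_hat * (1.0 - p_hat) / demands)   # standard error
    return p_hat, (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))

p, (lo, hi) = failure_prob_estimate(3, 1000)
# The interval width hi - lo conveys how well p is known;
# more demands (data) narrow the interval around the point estimate.
```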

    In addition to the need to calculate the risk of any technology it is necessary to represent the state of knowledge uncertainty and population variability [Kap83]. The state of knowledge uncertainty is also known as assessment uncertainty and covers the uncertainty that could be reduced by further research. The population variability for nuclear power plants accounts for variability in engineered systems, e.g., differences in engineered safety systems of individual plants. The first PRA of a family of potential system failures for a boiling water reactor (BWR) and a pressurized water reactor (PWR) was the Reactor Safety Study [NRC75] completed in 1975. Although that study is now dated, because it was based on nuclear plants that were operating in 1972 and designed much earlier, it is still of interest to engineers interested in risks from nuclear systems because the study established methods used in all later investigations and because it was very comprehensive.

    1.3 TWO HISTORICAL REACTOR ACCIDENTS

    The importance of risk and safety analysis becomes obvious when considered in the context of history. The Three Mile Island accident in 1979 in which the reactor was destroyed by a core meltdown—but which led to only a very minor release of radioactivity outside the turbogenerator building—provided an incentive to further develop techniques to predict potential events leading to system malfunctions. Follow-on reports augmenting the procedures developed in the Reactor Safety Study and used in probabilistic risk analyses were published in the early 1980s, including a guide to fault tree analysis [Ves81] and a PRA procedures guide [NRC83]. Another important report was an assessment of risks for five U.S. nuclear power plants [NRC90].

    The accident at the Chernobyl nuclear power plant in 1986 also contributed to the current emphasis on the use of probabilistic techniques for the analysis of nuclear systems, even though that plant was of an entirely different type from those built outside the former Soviet Union because the RBMK reactors had a positive void coefficient of reactivity. A power excursion was initiated when the reactor operators were testing the performance of the coolant pumps operated with electrical power from the plant’s turbine generator rather than off-site power. After overheated fuel from the reactor core was ejected into the coolant, causing it to boil off, reactivity was added to the reactor core, which increased the power excursion so rapidly that the control systems could not shut the system down. A steam explosion subsequently destroyed the pressure vessel, which led to the release of massive amounts of radioactivity, causing early fatalities and subsequent long-term health consequences from radiation exposure.

    These two reactor accidents, along with other incidents of major concern, will be discussed in much more detail in Chapter 9.

    1.4 DEFINITION OF RISK

    To express the concept of risk in more mathematical terms, risk Ri combines the frequency Fi of an event sequence i, in events per unit time, with the corresponding damage Di, which is the magnitude of the expected consequence. A traditional definition of risk is

    (1.1) \( R_i = F_i D_i \)

    Other definitions could be used, however, if one wished to amplify the importance of undesired events with large consequences, such as with \( R_k = F D^k \) for \( k > 1 \). Risk differs from hazard, which is a condition with the potential of causing an undesired consequence, and from danger, which is exposure to a hazard.
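    A small numeric sketch of the risk definition and the consequence-weighted variant with exponent k > 1; the frequencies and damage magnitudes below are illustrative, not values from the text:

```python
def risk(freq, damage, k=1.0):
    """R = F * D**k; k > 1 amplifies events with large consequences."""
    return freq * damage ** k

# Two accident sequences with the same F*D product:
common_small = risk(1.0e-2, 1.0)     # frequent, minor consequence  -> 1e-2
rare_large   = risk(1.0e-6, 1.0e4)   # rare, major consequence      -> 1e-2

# With k = 2 the rare, high-consequence sequence dominates:
rare_large_k2 = risk(1.0e-6, 1.0e4, k=2.0)   # -> 1e2
```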

    More generally, the damage from an accident sequence can be analyzed with a continuum of outcomes between x and x + Δx. Then, instead of Eq. (1.1), the risk density Ri(x) of magnitude Di(x), per unit of damage, can be interpreted as

    (1.2) \( R_i(x) = F_i D_i(x) \)

    Usually, however, of more interest is the risk of damages Di(x) exceeding the magnitude X, in which case the risk in Eq. (1.2) is replaced by

    (1.3) \( R_i(\geq X) = F_i \int_X^\infty D_i(x)\, dx \)

    The risk Ri(≥ X) is the complementary cumulative distribution function (CCDF) for accident sequence i.

    In the case of a severe release of radioactivity, more than one type of potential damage could occur. For example, there could be early deaths, within days to weeks after the release, due to acute doses of radiation. Or latent somatic effects after lesser radiation exposures, leading to cancer fatalities, might occur typically within a few years or a few decades. In addition, loss of work time (in person-days) and property losses also are potential damages. For such cases, when a catastrophic initiating event i causes a variety of predicted consequences of type j, leading to damages with a magnitude between Dij and Dij + ΔDij then Eqs. (1.2) and (1.3) are replaced by, respectively,

    (1.4) \( R_{ij}(x) = F_i D_{ij}(x) \)

    and

    (1.5) \( R_{ij}(\geq X) = F_i \int_X^\infty D_{ij}(x)\, dx \)
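    As a sketch of how an exceedance risk of this form might be evaluated numerically, the snippet below approximates the integral of a damage density above a magnitude X by the trapezoidal rule; the grid, frequency, and damage-density function are illustrative assumptions, not values from the text:

```python
import math

def exceedance_risk(freq, grid, damage_density, X):
    """Trapezoidal approximation of an exceedance risk of the form
    R(>=X) = F * integral from X to infinity of D(x) dx,
    truncated at the upper end of the grid."""
    total = 0.0
    for a, b in zip(grid, grid[1:]):
        if b <= X:
            continue                  # segment lies entirely below X
        lo = max(a, X)
        total += 0.5 * (damage_density(lo) + damage_density(b)) * (b - lo)
    return freq * total

# Illustrative inputs: F = 1e-4 events/yr, exponentially decreasing damage density
xs = [i * 0.01 for i in range(3001)]          # grid over [0, 30]
r = exceedance_risk(1.0e-4, xs, lambda x: math.exp(-x), X=1.0)
# Analytically F * exp(-1), about 3.7e-5; the grid result agrees closely.
```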

    A cornerstone of the risk and safety assessments for nuclear systems is the principle of defense in depth (DID), originating from the various safety measures that Enrico Fermi and his colleagues incorporated in the planning and execution of the first self-sustaining chain reaction at the University of Chicago in 1942. Thus, the DID principle has been implemented at every stage of design, construction, and operation of nearly every nuclear reactor around the world, with an ultimate objective of protecting the health and life of the population at large, although some people would argue that this was not done with the Russian RBMK reactors. The principle may be accomplished through the diversity and redundancy of various equipment and safety functions. The safety principle may also be represented in terms of multiple layers of radiation barriers, including the fuel matrix, fuel cladding, reactor pressure vessel, and ultimately the reactor containment building. In terms of safety functions, three basic levels may be illustrated: (a) prevention of accidents via reactor shutdown, (b) mitigation of accidents through the actuation of an auxiliary coolant system, and (c) protection of the public via containment sprays minimizing the release of radionuclides to the environment. The DID principles are fully reflected in the General Design Criteria, promulgated as Appendix A to Title 10, Code of Federal Regulations, Part 50 [NRC71].

    1.5 RELIABILITY, AVAILABILITY, MAINTAINABILITY, AND SAFETY

    The risk and safety issues of a nuclear plant initially depend on the plant design and construction. Thereafter, because a plant naturally cannot operate indefinitely without intervention, the degree of risk versus safety depends on the maintenance procedures and operator actions intended to improve the plant operation.

    To determine a risk Ri(≥ X) of an undesired event, it is necessary to predict the availability of the safety systems that should operate after the initiating event to mitigate the consequences. The availability of a safety system is analyzed with the concepts of reliability engineering used for predicting whether the system is up or down. When performing an availability or reliability analysis, there are several issues related to performance that must be considered: hardware and software failures, human errors, and incorrect operating procedures as well as the interactions between these.

    What are the differences between a reliable system and an available system? Or to phrase the question in a different way, can a safety system, for example, be available but not very reliable? Reliability R(t) is the probability that a system can perform a specified function or mission under given conditions for a period of time t, while availability A(t) is the probability that a system can perform a specified function or mission under given conditions at time t. The difference between R(t) and A(t) arises because reliability does not account for the possibility that a given system can be repaired after its failure. This means that R(t) predicts the time of interest t until the system has undergone its first failure, whereas the system may have failed in the past but been repaired so that it is operational at time t with predicted availability A(t). Reliability, also called the survival function of a system, is the complement of the failure probability F(t), which defines the probability that the system fails within the time period t, i.e., R(t) = 1 - F(t).

    It is important to note that reliability refers to the first system failure, but a system with redundant subsystems can exhibit subsystem failures without system failure. For a reliability analysis, once a system has failed, any incomplete repair actions are considered to cease, whereas for an availability analysis the on-going repair actions continue. Thus, if a system can be repaired, then the mean time between failures (MTBF) should exceed the mean time to failure (MTTF).
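    The distinction above can be illustrated with the standard constant-rate model for a single repairable component; the failure and repair rates below are assumed for illustration, and the availability expression is the familiar two-state Markov result (Markov methods for repairable systems are treated later in the book):

```python
import math

# Assumed constant failure rate lam and repair rate mu (per hour).
# Reliability credits no repair; availability does:
#   R(t) = exp(-lam * t)
#   A(t) = mu/(lam + mu) + lam/(lam + mu) * exp(-(lam + mu) * t)

def reliability(t, lam):
    return math.exp(-lam * t)

def availability(t, lam, mu):
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 1.0e-3, 1.0e-1     # MTTF = 1000 h, MTTR = 10 h
t = 5000.0
# reliability(t, lam) has decayed to well under 1%, while
# availability(t, lam, mu) settles near mu/(lam + mu), about 0.99:
# a repaired system is very likely up at t even after a first failure.
```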

    The assumptions about the way a system degrades with age and how it responds to a failure affect the type of model that can be assumed for repair of a system. A minimal repair returns the system to the state the system was in immediately preceding failure, while a perfect repair or renewal repair returns it to the state it was in when new. A minimal repair model allows the analysis of systems that are deteriorating or improving with time, while a perfect repair model does not. A minimal repair model for which improvements with time might be appropriate, for example, is one in which the repair people can learn from identical previous repairs.

    Maintainability, on the other hand, is the ability of a system component, during its prescribed use, to be restored to a state in which it can perform its intended function when the maintenance is performed under prescribed procedures. It involves actions typically performed according to procedures established by the manufacturer of the component. Although manufacturers may have tabulated data that prescribe regular maintenance procedures, the frequency of maintenance actions is guided by experience and depends not only on the quality of a system’s components but also on the operating environment of the equipment, such as the operating temperature or pressure. Although probabilistic failure analyses can be incorporated when developing a scheduled maintenance procedure, maintenance procedures for nuclear systems tend to be developed more through operating experience, with the objective of increasing the safety of the plant and decreasing the system downtime caused by an unscheduled outage.

    Reliability, availability, and maintainability (often abbreviated RAM) all contribute to improving the safe operation of a nuclear plant. A plant operated with good RAM procedures provides safety, which can be defined as eliminating, to an acceptable level of risk, those conditions that can cause death, injury, occupational illness, or damage to or loss of equipment or property. Because safety is the single most overriding consideration of plant operation, one is most interested in the availability of the plant safety systems to perform their intended functions at the time they are needed. From the perspective of decreasing the plant downtime, on the other hand, one is interested in the reliability of the system components for the duration of time between routinely scheduled maintenance activities. A RAM program coupled to safety (S) enhancement of the plant leads to a RAMS structure.

    The RAM program did not develop as a unique discipline; rather, it grew out of the integration of activities previously used by engineers to achieve a reliable, safe, and cost-effective system. Engineered systems have grown more and more complex over the past decades, which now requires increased attention to maintain the performance of the systems with minimal cost. Thus, it has been a constant challenge for engineers to apply preventive maintenance to engineered systems in a cost-effective way to avoid failures, which would usually require more costly repair or maintenance procedures.

    Reliability-centered maintenance (RCM) provides a framework for developing optimally scheduled maintenance programs that are cost effective. The RCM concept was first developed in the aircraft industry when the first Boeing 747 was built. There were many requirements for maintaining such a complex aircraft and there was a need to identify a maintenance strategy that could reduce unnecessary maintenance tasks. By 1978 the first full description of RCM was published [Now78], and in the 1980s the Electric Power Research Institute introduced RCM to the nuclear industry.

    Maintenance activities are usually classified [Rau04] as either preventive or corrective activities. Preventive maintenance (PM) represents planned maintenance that is performed when the equipment is functioning properly to avoid future failures. It may involve inspection, adjustments, lubrication, parts replacement, calibration, and repair of items that are beginning to wear out. PM may be carried out on a regular basis, regardless of whether the functionality or performance is degraded or not. PM activities can be classified into the following categories:

    (a) Clock-based maintenance. This is the simplest form of PM where maintenance is carried out according to a fixed maintenance schedule on a regular basis.

    (b) Age-based maintenance. This form of PM is carried out at a specified age of the item, often according to manufacturer’s specification. Aging may be measured in terms of time in operation, number of times operated, or other time concepts.

    (c) Condition-based maintenance. This PM is based on one or more condition variables of the equipment. It requires a monitoring scheme of the variables and a set threshold to initiate maintenance. Examples of condition variables are temperature, pressure, and vibration of a component.
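    The three PM categories above can be sketched as a simple trigger check. This is an illustration only: the function name, thresholds, and sensor readings below are all hypothetical, not taken from the text.

```python
# Minimal sketch (hypothetical thresholds and readings) of the three PM
# triggers: clock-based, age-based, and condition-based.

def maintenance_due(hours_since_service, operating_hours, vibration_mm_s,
                    service_interval=1000.0,   # clock-based: fixed schedule
                    rated_life=8000.0,         # age-based: manufacturer's limit
                    vibration_limit=7.1):      # condition-based: alarm threshold
    """Return the list of PM triggers that currently apply."""
    triggers = []
    if hours_since_service >= service_interval:
        triggers.append("clock-based")        # fixed calendar schedule reached
    if operating_hours >= rated_life:
        triggers.append("age-based")          # specified age of the item reached
    if vibration_mm_s >= vibration_limit:
        triggers.append("condition-based")    # monitored variable crossed threshold
    return triggers

print(maintenance_due(1200.0, 5000.0, 3.2))  # → ['clock-based']
```

In practice the condition-based branch would draw on a monitoring scheme for variables such as temperature, pressure, or vibration, as noted in category (c).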

    Corrective maintenance (CM), or more simply repair, is carried out when an item has failed. The objective of CM is to restore the equipment to functionality quickly or to switch in standby equipment to restore the system. Corrective maintenance is also called run-to-failure maintenance, which effectively represents the result of a deliberate decision to operate the system until a failure occurs.

    1.6 ORGANIZATION OF THE BOOK

    Chapters 2 through 5 provide an introduction to some of the more important concepts from the first several weeks of a course in reliability engineering as typically taught on most university campuses in mechanical or industrial engineering departments. Chapter 2 covers the elements of probability and reliability theory and some widely used probability distributions for a system that can be modeled as a single component. Chapter 3 presents aspects of statistics used in working with data for a reliability analysis on one component. In Chapter 4 the reliability of multiple-component systems is introduced, while Chapter 5 illustrates a way to incorporate repair of components into an analysis. The PRA discussion begins in Chapter 6 with methods for combining failure probabilities and consequences, followed by PRA computer programs in Chapter 7.

    Nuclear power plant safety analysis is treated in Chapter 8 before considering major nuclear power plant accidents and incidents in Chapter 9. With this background, past PRA studies of nuclear plants are discussed in Chapter 10. Advanced nuclear power plant designs with enhanced passive safety features are considered in Chapter 11, followed by topics related to risk-informed regulations and reliability-centered maintenance in Chapter 12. Chapter 13 discusses recent developments of probabilistic techniques to accurately represent dynamic system evolutions for reliability evaluation and system diagnostics. A number of mathematical and statistical techniques as well as specific data relevant to the risk and safety analysis of nuclear systems are provided as appendices.

    References

    [Hol79a] J. P. Holdren, K. Anderson, P. H. Gleick, I. Mintzer, and G. Morris, Risk of Renewable Energy Sources: A Critique of the Inhaber Report, ERG 79–3, Energy and Resources Group, Univ. of California, Berkeley (1979).

    [Hol79b] J. P. Holdren, Nucl. News 25 (March 1979); H. Inhaber, ibid. 25 (March 1979); J. P. Holdren, ibid. 32 (April 1979); H. Inhaber, ibid. 26 (May 1979); see also Nucl. News 42 (September 1979).

    [Inh82] H. Inhaber, Energy Risk Assessment, Fig. 7, Gordon and Breach (1982).

    [Kap83] S. Kaplan, On a ‘Two-Stage’ Bayesian Procedure for Determining Failure Rates from Experiential Data, IEEE Trans. Power App. Sys. PAS-102, 195 (1983).

    [Low76] W. W. Lowrance, Of Acceptable Risk, Kaufman (1976).

    [Meh90] P. Mehta et al., Bhopal Tragedy’s Health Effects: A Review of Methyl Isocyanate Toxicity, JAMA 264, 2781 (1990).

    [Mor93] M. G. Morgan, Risk Analysis and Management, Sci. Am. 269, 32 (1993).

    [Now78] F. S. Nowlan and H. F. Heap, Reliability-Centered Maintenance, A066–579, U.S. Department of Commerce (1978).

    [NRC71] General Design Criteria for Nuclear Power Plants, Title 10, Code of Federal Regulations, Part 50, Appendix A, U.S. Nuclear Regulatory Commission (1971).

    [NRC75] Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH-1400 (NUREG-75/014), U.S. Nuclear Regulatory Commission (1975).

    [NRC83] PRA Procedures Guide, NUREG/CR-2300, U.S. Nuclear Regulatory Commission (1983).

    [NRC90] Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants, NUREG-1150, U.S. Nuclear Regulatory Commission (1990).

    [Rah84] F. J. Rahn, A. G. Adamantiades, J. E. Kenton, and C. Braun, A Guide To Nuclear Power Technology: A Resource for Decision Making, Wiley (1984).

    [Rau04] M. Rausand and A. Hoyland, System Reliability Theory: Models, Statistical Methods, and Applications, Wiley (2004).

    [Ves81] W. E. Vesely, F. F. Goldberg, N. H. Roberts, and D. F. Haasl, The Fault Tree Handbook, NUREG-0492, U.S. Nuclear Regulatory Commission (1981).

    CHAPTER 2

    PROBABILITIES OF EVENTS

    This chapter contains an introduction to the underlying principles of probabilities and their application to the analysis of failure events.

    2.1 EVENTS

    In order to understand probability concepts we first need to define a sample space S with unique events En, n = 1, 2, …, being members of S. For brevity of equations, in this book we write E1E2 for the intersection of two events E1 and E2, although elsewhere such an intersection may be written as E1 ∩ E2. Note that we cannot multiply events, so "E1 AND E2" is not "E1 times E2." It is helpful to illustrate such an occurrence with the aid of the Venn diagram in Fig. 2.1. For a sample space with N events, the intersection of all events is E1E2 ⋯ EN.

    Figure 2.1 Venn diagram illustrating the intersection and union of two events E1 and E2.

    Another concept arising with events is the union of unique events such as E1 or E2. This will be denoted here by E1 + E2, although elsewhere it may appear as E1 ∪ E2. Either convention means "E1 OR E2," not "E1 plus E2." For a sample space with N events, the union of all events is E1 + E2 + ⋯ + EN. The additional symbol Ē, read "NOT E," denotes the complement of E.

    A compound event H may consist of many events, in which case the use of parentheses may be needed to appropriately group the events. Some rules of Boolean algebra for events, given in Table 2.1, are used to simplify the writing of a compound event. The commutative and associative laws are similar to those laws for ordinary algebra. The idempotent laws enable redundancies for the same event to be eliminated. Absorption law 4a is easily justified by observing that if event X occurs, then event (X + Y) has also occurred, so X(X + Y) = X; a similar argument holds for absorption law 4b. The distributive laws 5a and 5b are very useful in fault tree analysis (Chapter 6) and may be verified by using the preceding rules. De Morgan’s theorems are useful if the search for a system failure event H is switched to the search for the successful operation of that system.

    Table 2.1 Boolean Algebra for Events

    Commutative laws:      1a. X + Y = Y + X              1b. XY = YX
    Associative laws:      2a. X + (Y + Z) = (X + Y) + Z  2b. X(YZ) = (XY)Z
    Idempotent laws:       3a. X + X = X                  3b. XX = X
    Absorption laws:       4a. X(X + Y) = X               4b. X + XY = X
    Distributive laws:     5a. X(Y + Z) = XY + XZ         5b. X + YZ = (X + Y)(X + Z)
    De Morgan’s theorems:  6a. the complement of (X + Y) is X̄Ȳ
                           6b. the complement of (XY) is X̄ + Ȳ
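    The absorption, distributive, and De Morgan laws cited in the text can be checked exhaustively when events are modeled as Boolean indicators (union as OR, intersection as AND, complement as NOT). A minimal sketch of such a check:

```python
from itertools import product

# Each law is a predicate over truth assignments; checking all 2**3
# assignments of (x, y, z) proves the law for indicator variables.
laws = {
    "absorption 4a":   lambda x, y, z: (x and (x or y)) == x,
    "absorption 4b":   lambda x, y, z: (x or (x and y)) == x,
    "distributive 5a": lambda x, y, z: (x and (y or z)) == ((x and y) or (x and z)),
    "distributive 5b": lambda x, y, z: (x or (y and z)) == ((x or y) and (x or z)),
    "De Morgan 6a":    lambda x, y, z: (not (x or y)) == ((not x) and (not y)),
    "De Morgan 6b":    lambda x, y, z: (not (x and y)) == ((not x) or (not y)),
}
for name, law in laws.items():
    assert all(law(*v) for v in product([False, True], repeat=3)), name
print("all laws verified")
```

Each lambda restates one law from the text; the exhaustive truth-table check is the standard way to validate Boolean identities.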

    For a failure analysis, a system failure event H might consist of many component failure events nested together. Boolean algebra facilitates the reduction of H to a set of single-component failure events, double-component failure events, etc. The resulting single- and multiple-component events are cut sets, i.e., combinations of events, any of which could cause failure of a system. That is, a cut set is defined as a set of system events that, if they all occur, will cause system failure while a minimal cut set of a system is a cut set that has no other cut set as a subset. Another way of saying this is that the removal of any event from a minimal cut set would cause it not to be a cut set, i.e., the system would no longer fail.
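    The reduction from cut sets to minimal cut sets amounts to discarding any cut set that contains another cut set as a proper subset. A minimal sketch (the event names and cut sets are hypothetical, not from the text):

```python
def minimal_cut_sets(cut_sets):
    """Reduce a collection of cut sets to the minimal cut sets by
    discarding every cut set that has another cut set as a proper subset."""
    sets = [frozenset(c) for c in cut_sets]
    minimal = {c for c in sets
               if not any(other < c for other in sets)}  # other < c: proper subset
    # Return sorted tuples for a deterministic, readable result.
    return sorted((tuple(sorted(c)) for c in minimal),
                  key=lambda t: (len(t), t))

# A single failure {A} absorbs every cut set containing A, and
# {B, C} absorbs the larger cut set {B, C, D}:
print(minimal_cut_sets([{"A"}, {"A", "B"}, {"B", "C"}, {"B", "C", "D"}]))
# → [('A',), ('B', 'C')]
```

This mirrors the absorption law of Table 2.1: A + AB = A, so any cut set that is a superset of another contributes nothing new to system failure.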

    Example 2.1 Construct the minimal cut sets for a system failure event H consisting of component failure events A to E, where

    equation

    We observe that the first term obviously cannot be reduced, while

    equation

    and

    equation

    Combining terms gives

    equation

    and we conclude that a system failure could occur from a single failure event A or any of the four double failure events. In Example 2.2 we will consider the probability of occurrence of failure event H. ¹

    2.2 EVENT TREE ANALYSIS AND MINIMAL CUT SETS

    An event tree depicts the evolution of a series of events with time. In a safety analysis of a nuclear system, for example, it provides an inductive logic method for identifying the various possible outcomes of a given (undesired) initiating event. Event trees are similar to decision trees, but they differ in that human intervention is not required to influence the outcome of the initiating event. In risk and safety analysis applications, the initiating event of an event tree can be the failure of a system itself or it can be initiated externally to the system, with the subsequent events determined by the performance of the system components. Different event trees must be constructed and evaluated to analyze a set of possible accidents.

    In a given accident analysis, once an initiating event is defined, all the safety systems that possibly can be utilized after the failure event must be identified, and the set of possible failure and success states for each system must be defined. These safety systems are then structured in the form of headings for the event tree. To be conservative, usually each system is defined to have only one success state S, where everything is working as good as new, and a single system failure state F comprised of all possible system failures. This is illustrated with a classic tree structure in Fig. 2.2. The accident sequences that result from the tree structure are shown in the last column of the figure. Each branch of the tree yields one particular accident sequence; for example, S1F2 denotes the accident sequence in which the initiating event (I) occurs and system 1 is called upon and succeeds (S1) but system 2 either is in a failed state or fails to perform upon demand. For larger event trees, this stepwise branching analysis would simply be continued.

    Figure 2.2 Illustration of event tree branching.

    Source: [NRC75].

    It should be emphasized that the system states on a given branch of an event tree are conditional on the previous states having already occurred. In Fig. 2.2, for example, the success and failure of system 1 must be defined under the condition that the initiating event has occurred; likewise, in the upper branch of the tree corresponding to system 1 success, the success and failure of system 2 must be defined under the conditions that the initiating event has occurred and system 1 has succeeded.

    A major concern in event tree construction involves accounting for the timing of the events. In some instances, the failure logic changes depending on the time at which the events take place; such a case occurs, for example, in the operation of emergency core cooling systems in nuclear plants. Then dynamic event tree analysis techniques are needed to model the system that changes during the accident, even though the safety system components remain the same [Dev92,Izq96,Lab00]. Dynamic event tree analysis will be discussed in Chapter 13.

    Successful construction of an event tree provides a qualitative analysis of what happens after an initiating event, but if a quantitative analysis is desired, then each branch of the event tree must be quantitatively evaluated. This can be done by a variety of techniques, but typically the states in nuclear systems are assigned numerical values from fault tree analyses. The probabilities obtained must be conditional probabilities for each branch in a sequence, as schematically illustrated in Fig. 2.2. Nonconditional and conditional probabilities are the subject of the next section.
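    The branch-by-branch quantification described above can be sketched for the two-system tree of Fig. 2.2. All numbers below are hypothetical; the point is only that branch probabilities are conditional on everything to their left, so a sequence probability is the product along its path.

```python
# Hypothetical event tree quantification for the structure of Fig. 2.2.
p_initiator = 1.0e-3      # frequency of the initiating event I
p_f1 = 1.0e-2             # P(system 1 fails | I)
p_f2_given_s1 = 5.0e-3    # P(system 2 fails | I, system 1 succeeded)
p_f2_given_f1 = 5.0e-2    # P(system 2 fails | I, system 1 failed)

# Each accident sequence probability is the product of the conditional
# branch probabilities along its path through the tree.
sequences = {
    "I S1 S2": p_initiator * (1 - p_f1) * (1 - p_f2_given_s1),
    "I S1 F2": p_initiator * (1 - p_f1) * p_f2_given_s1,
    "I F1 S2": p_initiator * p_f1 * (1 - p_f2_given_f1),
    "I F1 F2": p_initiator * p_f1 * p_f2_given_f1,
}
for seq, p in sequences.items():
    print(f"{seq}: {p:.3e}")

# The four sequences partition the outcomes of I, so they sum to P(I):
assert abs(sum(sequences.values()) - p_initiator) < 1e-15
```

Note that p_f2_given_s1 and p_f2_given_f1 differ, reflecting the point made above that system 2 states are defined conditional on the prior states in the sequence; in a real PRA these conditional values would come from fault tree analyses.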

    2.3 PROBABILITIES

    2.3.1 Interpretations of Probability

    The classic mathematical interpretation of probability, the relative frequency approach, states that if event E in sample space S occurs X times in n repeated experiments whose outcomes are described by S, then the probability P(E) of event E is defined by

    (2.1)    P(E) = lim_{n→∞} X/n

    For a fixed n, the quantity X/n is the relative frequency of occurrence of E. Because it is impossible to actually conduct an infinite number of trials so that n → ∞, usually P(E) is just approximated by X/n. The strong law of large numbers and the central limit theorem [Fel68, Man74, Pap02] provide a justification that improved estimates of P(E) follow from increasing n.
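    A small simulation (illustrative only; the coin-flip model and seed are chosen here for reproducibility) shows the relative frequency X/n of Eq. (2.1) approaching the true value 0.5 as n grows:

```python
import random

# Relative-frequency estimate of P(heads) for a fair coin: compute X/n
# for increasing n. The estimate tends toward 0.5 as n grows (law of
# large numbers), but any finite n gives only an approximation.
random.seed(1)  # fixed seed so the run is reproducible
for n in (100, 10_000, 1_000_000):
    x = sum(random.random() < 0.5 for _ in range(n))  # X = number of heads
    print(f"n = {n:>9}: X/n = {x / n:.4f}")
```

This is exactly the situation of the coin example below: a finite run such as 1000 flips can easily return a relative frequency noticeably different from 0.5.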

    The difficulty of such an interpretation for engineers interested in risk and safety analysis is that usually we do not have the option of performing n experiments because we are dealing with rarely occurring events, and sometimes it is preferable to not even perform a single experiment if the outcome would damage a system. In such instances it is necessary to resort to the axiomatic or subjective approach to the concept of probability, which we shall use from now on.

    The axiomatic interpretation begins with the broad view that probability is nothing more than a measure of uncertainty about the likelihood of an event. Stated more precisely, a probability assignment is a numerical encoding of a state of knowledge [Tri69]. With such a broad definition it is necessary to impose some constraints before obtaining something that can be used in quantitative analysis. Examples of several kinds of knowledge are:

    Symmetry Sometimes a system is known to be symmetrical, as in the case of honest dice or coins. For example, if an experiment consisting of 1000 flips of a coin gave 534 heads and 466 tails, the event that heads will appear would still be assigned a probability of 0.5, on the belief that an insufficient number of flips had been performed to give the outcome 0.5 expected from the relative frequency interpretation of probability.

    Averages Sometimes the average result of what has occurred in the past is known, such as the average annual rainfall in a given year, so this would be used as an estimate of the expected rainfall in the next year unless
