
SINTEF REPORT

TITLE: Reliability Data for Safety Instrumented Systems – PDS Data Handbook, 2010 Edition

SINTEF Technology and Society, Safety research
Address: NO-7465 Trondheim, NORWAY
Location: S P Andersens veg 5, NO-7031 Trondheim
Telephone: +47 73 59 27 56
Fax: +47 73 59 28 96
Enterprise No.: NO 948 007 029 MVA

AUTHOR(S): Stein Hauge and Tor Onshus
CLIENT(S): Multiclient - PDS Forum

REPORT NO.: SINTEF A13502
CLASSIFICATION: Unrestricted
CLASS. THIS PAGE: Unrestricted
ISBN: 978-82-14-04849-0
PROJECT NO.: 504091.17
NO. OF PAGES/APPENDICES: 116

PROJECT MANAGER (NAME, SIGN.): Stein Hauge
CHECKED BY (NAME, SIGN.): Per Hokstad
DATE: 2009-12-18
APPROVED BY (NAME, POSITION, SIGN.): Lars Bodsberg, Research Director

ABSTRACT

This report provides reliability data estimates for components of control and safety systems. Data
dossiers for input devices (sensors, detectors, etc.), control logic (electronics) and final elements
(valves, etc.) are presented, including some data for subsea equipment. Efforts have been made to
document the presented data thoroughly, both in terms of applied data sources and underlying
assumptions. The data are given in a format suitable for performing reliability analyses in line with
the requirements in the IEC 61508 and IEC 61511 standards.

As compared to the former 2006 edition, the following main changes are included:

• A general review and update of the failure rates, coverage values, β-values and other relevant
parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.

| KEYWORDS | ENGLISH | NORWEGIAN |
|---|---|---|
| GROUP 1 | Safety | Sikkerhet |
| GROUP 2 | Reliability | Pålitelighet |
| SELECTED BY AUTHOR | Data | Data |
| | Safety Instrumented Systems (SIS) | Instrumenterte sikkerhetssystemer |
| | SIL calculations | SIL beregninger |

PREFACE
The present report is an update of the 2006 edition of the Reliability Data for Control and Safety
Systems, PDS Data Handbook [12]. The handbook presents data in line with the latest available
data sources as well as data for some new equipment.

The work has been carried out as part of the research project “Managing the integrity of safety
instrumented systems”. 1

Trondheim, December 2009

Stein Hauge

PDS Forum Participants in the Project Period 2007 - 2009

Oil Companies/Operators
• A/S Norske Shell
• BP Norge AS
• ConocoPhillips Norge
• Eni Norge AS
• Norsk Hydro ASA
• StatoilHydro ASA (Statoil ASA from Nov. 1st 2009)
• Talisman Energy Norge
• Teekay Petrojarl ASA
• TOTAL E&P NORGE AS

Control and Safety System Vendors


• ABB AS
• FMC Kongsberg Subsea AS
• Honeywell AS
• Kongsberg Maritime AS
• Bjørge Safety Systems AS
• Siemens AS
• Simtronics ASA

Engineering Companies and Consultants


• Aker Kværner Engineering & Technology
• Det Norske Veritas AS
• Lilleaker Consulting AS
• NEMKO AS
• Safetec Nordic AS
• Scandpower AS

Governmental Bodies
• The Directorate for Civil Protection and Emergency Planning (Observer)
• The Norwegian Maritime Directorate (Observer)
• The Petroleum Safety Authority Norway (Observer)

1) This user initiated research project has been sponsored by the Norwegian Research Council and the PDS forum participants. The project work has been carried out by SINTEF.
ABSTRACT
This report provides reliability data estimates for components of control and safety systems. Data
dossiers for input devices (sensors, detectors, etc.), control logic (electronics) and final elements
(valves, etc.) are presented, including some data for subsea equipment. Efforts have been made to
document the presented data thoroughly, both in terms of applied data sources and underlying
assumptions. The data are given in a format suitable for performing reliability analyses in line
with the requirements in the IEC 61508 and IEC 61511 standards.

As compared to the former 2006 edition, the following main changes are included:

• A general review and update of the failure rates, coverage values, β-values and other
relevant parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.


Table of Contents

PREFACE ........................................................................................................................... 3
ABSTRACT .................................................................................................................................... 4
1 INTRODUCTION ................................................................................................................... 9
1.1 Objective and Scope ......................................................................................................... 9
1.2 Benefits of Reliability Analysis – the PDS Method ......................................................... 9
1.3 The IEC 61508 and 61511 Standards ............................................................................. 10
1.4 Organisation of Data Handbook ..................................................................................... 10
1.5 Abbreviations ................................................................................................................. 10
2 RELIABILITY CONCEPTS ................................................................................................. 13
2.1 The Concept of Failure ................................................................................................... 13
2.2 Failure Rate and Failure Probability............................................................................... 13
2.2.1 Failure Rate Notation ...................................................................................... 13
2.2.2 Decomposition of Failure Rate........................................................................ 14
2.3 Reliability Measures and Notation ................................................................................. 15
2.4 Reliability Parameters .................................................................................................... 16
2.4.1 Rate of Dangerous Undetected Failures .......................................................... 16
2.4.2 The Coverage Factor, c ................................................................................... 17
2.4.3 Beta-factors and CMooN .................................................................................... 17
2.4.4 Safe Failure Fraction, SFF............................................................................... 18
2.5 Main Data Sources ......................................................................................................... 18
2.6 Using the Data in This Handbook .................................................................................. 19
3 RELIABILITY DATA SUMMARY ..................................................................................... 21
3.1 Topside Equipment ......................................................................................................... 21
3.2 Subsea Equipment .......................................................................................................... 27
3.3 Comments to the PDS Data ............................................................................................ 28
3.3.1 Probability of Test Independent Failures (PTIF) .............................................. 28
3.3.2 Coverage .......................................................................................................... 29
3.3.3 Fraction of Random Hardware Failures (r) ..................................................... 30
3.4 Reliability Data Uncertainties – Upper 70% Values ...................................................... 32
3.4.1 Data Uncertainties ........................................................................................... 32
3.4.2 Upper 70% Values........................................................................................... 33
3.5 What is “Sufficient Operational Experience“? – Proven in Use .................................... 34
4 MAIN FEATURES OF THE PDS METHOD ...................................................................... 37
4.1 Main Characteristics of PDS .......................................................................................... 37
4.2 Failure Causes and Failure Modes ................................................................................. 37
4.3 Reliability Performance Measures ................................................................................. 39
4.3.1 Contributions to Loss of Safety ....................................................................... 40
4.3.2 Loss of Safety due to DU Failures - Probability of Failure on Demand (PFD)40
4.3.3 Loss of Safety due to Test Independent Failures (PTIF)................................... 40
4.3.4 Loss of Safety due to Downtime Unavailability – DTU ................................. 41
4.3.5 Overall Measure for Loss of Safety– Critical Safety Unavailability .............. 41
5 DATA DOSSIERS ................................................................................................................. 43
5.1 Input Devices .................................................................................................................. 44
5.1.1 Pressure Switch ............................................................................................... 44
5.1.2 Proximity Switch (Inductive) .......................................................................... 46
5.1.3 Pressure Transmitter ........................................................................................ 47
5.1.4 Level (Displacement) Transmitter................................................................... 49
5.1.5 Temperature Transmitter ................................................................................. 51
5.1.6 Flow Transmitter ............................................................................................. 53
5.1.7 Catalytic Gas Detector..................................................................................... 55
5.1.8 IR Point Gas Detector...................................................................................... 57
5.1.9 IR Line Gas Detector ....................................................................................... 59
5.1.10 Smoke Detector ............................................................................................... 61
5.1.11 Heat Detector ................................................................................................... 63
5.1.12 Flame Detector ................................................................................................ 65
5.1.13 H2S Detector .................................................................................................... 68
5.1.14 ESD Push Button ............................................................................................. 70
5.2 Control Logic Units ........................................................................................................ 72
5.2.1 Standard Industrial PLC .................................................................................. 73
5.2.2 Programmable Safety System ......................................................................... 79
5.2.3 Hardwired Safety System ................................................................................ 85
5.3 Final Elements ................................................................................................................ 88
5.3.1 ESV/XV........................................................................................................... 88
5.3.2 ESV, X-mas Tree ............................................................................................ 92
5.3.3 Blowdown Valve ............................................................................................. 95
5.3.4 Pilot/Solenoid Valve........................................................................................ 97
5.3.5 Process Control Valve ................................................................................... 100
5.3.6 Pressure Relief Valve .................................................................................... 103
5.3.7 Deluge Valve ................................................................................................. 105
5.3.8 Fire Damper ................................................................................................... 106
5.3.9 Circuit Breaker .............................................................................................. 108
5.3.10 Relay.............................................................................................................. 109
5.3.11 Downhole Safety Valve – DHSV.................................................................. 110
5.4 Subsea Equipment ........................................................................................................ 111
6 REFERENCES..................................................................................................................... 116


List of Tables

Table 1 Decomposition of critical failure rate, λcrit ........................................................................15


Table 2 Performance measures and reliability parameters .............................................................15
Table 3 Failure rates, coverages and SFF for input devices ...........................................................21
Table 4 Failure rates, coverages and SFF for control logic units ...................................................22
Table 5 Failure rates, coverages and SFF for final elements..........................................................23
Table 6 PTIF for various components ..............................................................................................24
Table 7 β-factors for various components ......................................................................................25
Table 8 Numerical values for configuration factors, CMooN ...........................................................26
Table 9 Failure rates for subsea equipment - input devices, control system units and
output devices ........................................................................................................................27
Table 10 Estimated upper 70% confidence values for topside equipment .....................................33
Table 11 Discussion of proposed subsea data ..............................................................................111

List of Figures

Figure 1 Decomposition of critical failure rate, λcrit .......................................................................15


Figure 2 Illustration of failure rate with confidence level of 70% .................................................32
Figure 3 Failure classification by cause of failure ..........................................................................38
Figure 4 Contributions to critical safety unavailability (CSU).......................................................42


1 INTRODUCTION
Safety standards like IEC 61508 [1] and IEC 61511 [2] require quantification of the failure
probability of safety systems. Such quantification may be part of design optimisation or of
verifying that the design meets the stated performance requirements.

The use of relevant failure data is an essential part of any quantitative reliability analysis. It is also
one of the most challenging parts and raises a number of questions concerning the availability and
relevance of the data, the assumptions underlying the data and what uncertainties are related to the
data.

In this handbook recommended data for reliability quantification of Safety Instrumented Systems
(SIS) are presented. Efforts have been made to document the presented data thoroughly, both in
terms of applied data sources and underlying assumptions.

Various data sources have been applied when preparing this handbook, the most important source
being the OREDA database and handbooks (ref. section 2.5).

1.1 Objective and Scope


When performing reliability quantification, the analyst will need information on a number of
parameters related to the equipment under consideration. This includes basic failure rates,
distribution of critical failure modes, diagnostic coverage factors and common cause factors. In
this handbook best estimates for these reliability parameters are presented for selected equipment.
The data are given in a format suitable for performing analyses in line with the requirements in
the IEC 61508/61511 standards and the PDS method, [10].

As compared to the former 2006 edition, [12], the following main changes are included:

• A general update / review of the failure rates, coverage values, β-values and other relevant
parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.

1.2 Benefits of Reliability Analysis – the PDS Method


Instrumented safety systems such as emergency shutdown systems, fire and gas systems and
process shutdown systems, are installed to prevent abnormal operating conditions from
developing into an accident. High reliability of such systems is therefore paramount with respect
to safe - as well as commercial - operation.

Reliability analysis represents a systematic tool for evaluating the performance of safety
instrumented systems (SIS) from a safety and production availability point of view. Some main
applications of reliability analysis are:

• Reliability assessment and follow-up; verifying that the system fulfils its safety and
reliability requirements;
• Design optimisation; balancing the design to get an optimal solution with respect to safety,
production availability and lifecycle cost;
• Operation planning; establishing the optimal testing and maintenance strategy;
• Modification support; verifying that planned modifications are in line with the safety and
reliability requirements.

The PDS method has been developed in order to enable the reliability engineer and non-experts to
perform such reliability considerations in various phases of a project. The main features of the
PDS method are discussed in chapter 4.

1.3 The IEC 61508 and 61511 Standards


The IEC 61508 and IEC 61511 standards, [1] and [2], present requirements for safety instrumented
systems (SIS) for all the relevant lifecycle phases, and have become leading standards for SIS
specification, design, implementation and operation. IEC 61508 is a generic standard common to
several industries, whereas IEC 61511 has been developed especially for the process industry.
These standards present a unified approach to achieve a rational and consistent technical policy
for all SISs. The Norwegian Oil Industry Association (OLF) has developed a guideline to
support the use of IEC 61508/61511, [19].

The PDS method is in line with the main principles advocated in the IEC standards, and is a
useful tool when implementing and verifying quantitative (SIL) requirements as described in the
IEC standards.

1.4 Organisation of Data Handbook


In chapter 2 important reliability aspects are discussed and definitions of the applied notations are
given.

The recommended reliability data estimates are summarised in chapter 3 of this report. A split has
been made between input devices, logic solvers and final elements.

Chapter 4 gives a brief summary of the main characteristics of the PDS method. The failure
classification for safety instrumented systems is presented together with the main reliability
performance measures used in PDS.

In chapter 5 the detailed data dossiers providing the basis for the recommended reliability data are
given. As in previous editions of the handbook, some data are scarce in the available data
sources, and it has been necessary to rely partly or fully on expert judgement.

1.5 Abbreviations
CCF - Common cause failure
CSU - Critical safety unavailability
DTU - Downtime unavailability
FMECA - Failure modes, effects, and criticality analysis
FMEDA - Failure modes, effects, and diagnostic analysis
IEC - International Electrotechnical Commission
JIP - Joint industry project
MTTR - Mean time to restoration
NDE - Normally de-energised
NE - Normally energised
OLF - The Norwegian Oil Industry Association
OREDA - Offshore reliability data


PDS - Norwegian acronym for “reliability of computer based safety systems”


PFD - Probability of failure on demand
RNNP - Project: Risk level in Norwegian petroleum production
www.ptil.no
SIL - Safety integrity level
SIS - Safety instrumented system
SFF - Safe failure fraction
STR - Spurious trip rate
TIF - Test independent failure

Additional abbreviations (equipment related)

AI - Analogue input
BDV - Blowdown valve
CPU - Central Processing Unit
DO - Digital output
ESV - Emergency shutdown valve
DHSV - Downhole safety valve
XV - Production shutdown valve


2 RELIABILITY CONCEPTS
In this chapter some selected concepts related to reliability analysis and reliability data are
discussed. For a more detailed discussion reference is made to the updated PDS method
handbook, ref. [10].

2.1 The Concept of Failure


A failure is in IEC 61508-4 defined as the termination of the ability of a functional unit to perform
a required function. The two main functions of a safety system are [10]: the ability to shut down
or go to a predefined safe state when production is not safe, and the ability to maintain production
when it is safe. Hence, a failure may have two facets: (1) loss of the ability to shut down or go to a
safe state when required, or (2) loss of the ability to maintain production.

From a safety point of view, the first category will be the more critical and such failures are
defined as dangerous failures (D), i.e. they have the potential to result in loss of the ability to shut
down or go to a safe state when required.

Loss of the ability to maintain production is normally not so critical to safety, and such failures
have therefore in PDS traditionally been denoted spurious trip (ST) failures, whereas IEC 61508
categorises such failures as ‘safe’ (S). In the forthcoming update of the IEC 61508 standard the
definition of safe failures is more in line with the PDS interpretation. Therefore PDS has in this
updated version also applied the notation ‘S’ (instead of ‘ST’).

It should be noted that a given failure may be classified as either dangerous or safe depending on
the intended application. E.g. loss of hydraulic supply to a valve actuator operating on-demand
will be dangerous in an energise-to-trip application and safe in a de-energise-to-trip application.
Hence, when applying the failure data, the assumptions underlying the data as well as the context
in which the data shall be used must be carefully considered.

2.2 Failure Rate and Failure Probability


The failure rate (number of failures per time unit) for a component is essential for the reliability
calculations. In section 2.2.1, definitions and notation related to the failure rate are given, whereas
in section 2.2.2 the decomposition of this failure rate into its various elements is further discussed.

2.2.1 Failure Rate Notation

λcrit = Rate of critical failures; i.e., failures that may cause loss of one of the two main
functions of the component/system (see above).

Critical failures include dangerous (D) failures which may cause loss of the ability to
shut down production when required and safe (S) failures which may cause loss of
the ability to maintain production when safe (i.e. spurious trip failures). Hence:

λcrit = λD + λS (see below)

λD = Rate of dangerous (D) failures, including both undetected as well as detected


failures. λD = λDU + λDD (see below)

λDU = Rate of dangerous undetected failures, i.e. failures undetected both by automatic
self-test and by personnel

λDD = Rate of dangerous detected failures, i.e. failures detected by automatic self-test or
personnel

λS = Rate of safe (spurious trip) failures, including both undetected as well as detected
failures. λS = λSU + λSD (see below)

λSU = Rate of safe (spurious trip) undetected failures, i.e. undetected both by automatic
self-test and personnel

λSD = Rate of safe (spurious trip) detected failures, i.e. detected by automatic self-test or
personnel

λundet = Rate of (critical) failures that are undetected both by automatic self-test and by
personnel (i.e., detected in functional testing only). λundet = λDU + λSU

λdet = Rate of (critical) failures that are detected by automatic self-test or personnel
(independent of functional testing). λdet = λDD + λSD

c = Coverage: percentage of critical failures detected either by the automatic self-test or


(incidentally) by personnel observation

cD = Coverage of dangerous failures. cD = (λDD / λD ) · 100%


Note that λDU then can be calculated as: λDU = λD · (1- cD / 100%)

cS = Coverage of safe (spurious trip) failures. cS = (λSD / λS) ·100%


Note that λSU then can be calculated as: λSU = λS · (1- cS / 100%)

r = Fraction of dangerous undetected (DU) failures originating from random hardware


failures (1-r will then be the fraction originating from systematic failures)

SFF = Safe failure fraction = (1 - λDU / λcrit) · 100 %

β = The fraction of failures of a single component that causes both components of a


redundant pair to fail “simultaneously”

CMooN = Modification factor for voting configurations other than 1oo2 in the beta-factor
model (e.g. 1oo3, 2oo3 and 2oo4 voting logics)
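
The notation above lends itself to a small calculation routine. The following Python sketch is an illustration added for this write-up and is not part of the handbook; the function and variable names are assumptions, but the relations implemented are exactly those defined above.

```python
def decompose_failure_rate(lam_crit, lam_D, c_D, c_S):
    """Derive the quantities of section 2.2.1 from lam_crit, lam_D and the coverages.

    lam_crit : rate of critical failures (e.g. per 10^6 hours)
    lam_D    : rate of dangerous failures (same unit)
    c_D, c_S : coverage of dangerous / safe failures as fractions (0.60 = 60 %)
    """
    lam_S = lam_crit - lam_D                    # lam_crit = lam_D + lam_S
    lam_DD = lam_D * c_D                        # dangerous detected
    lam_DU = lam_D * (1.0 - c_D)                # dangerous undetected
    lam_SD = lam_S * c_S                        # safe (spurious trip) detected
    lam_SU = lam_S * (1.0 - c_S)                # safe (spurious trip) undetected
    lam_undet = lam_DU + lam_SU                 # revealed by functional testing only
    lam_det = lam_DD + lam_SD                   # revealed by self-test or personnel
    SFF = (1.0 - lam_DU / lam_crit) * 100.0     # safe failure fraction in %
    return lam_DU, lam_DD, lam_SU, lam_SD, lam_undet, lam_det, SFF
```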

2.2.2 Decomposition of Failure Rate

Some important relationships between different fractions of the critical failure rate are illustrated
in Table 1 and Figure 1.


Table 1 Decomposition of critical failure rate, λcrit

| | Spurious trip failures | Dangerous failures | Sum |
|---|---|---|---|
| Undetected | λSU | λDU | λundet |
| Detected | λSD | λDD | λdet |
| Sum | λS | λD | λcrit |

Figure 1 Decomposition of critical failure rate, λcrit (figure: diagram showing how λcrit splits into the undetected part λundet = λDU + λSU and the detected part λdet = λDD + λSD, where all contributions except λDU count towards the SFF (Safe Failure Fraction))

2.3 Reliability Measures and Notation


Table 2 lists some performance measures for safety and reliability, and some other main
parameters in the PDS method. A more complete description is found in the updated PDS Method
Handbook, 2010 Edition, [10].

Table 2 Performance measures and reliability parameters

Term Description

PFD Probability of failure on demand. This is the measure for loss of safety caused by
dangerous undetected failures, see section 4.3.

PTIF Probability of a test independent failure. This is the measure for loss of safety
caused by a failure not detectable by functional testing, but occurring upon a true
demand (see section 4.3).
CSU Critical safety unavailability, CSU = PFD + PTIF


MTTR Mean time to restoration. Time from failure is detected/revealed until function is
restored, ("restoration period"). Note that this restoration period may depend on a
number of factors. It can be different for detected and undetected failures: The
undetected failures are revealed and handled by functional testing and could have
shorter MTTR than the detected failures. The MTTR could also depend on
configuration, operational philosophy and failure multiplicity.

STR Spurious trip rate. Rate of spurious trips of the safety system (or set of redundant
components), taking into consideration the voting configuration.
τ Interval of functional test (time between functional tests of a component)

2.4 Reliability Parameters


In this section some of the reliability parameters defined above are further discussed.

2.4.1 Rate of Dangerous Undetected Failures

As discussed in section 2.2.2, the critical failure rate, λcrit, is split into dangerous and safe
failures (i.e. λcrit = λD + λS), which are further split into detected and undetected failures. When
performing safety unavailability calculations, the rate of dangerous undetected failures, λDU, is of
special importance, since this parameter - together with the test interval - to a large degree
governs the prediction of how often a safety function is likely to fail on demand.
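
As a hedged illustration only (the handbook treats loss of safety due to DU failures in section 4.3): using the common first-order approximation PFD ≈ λDU · τ/2 for a single component, a pressure transmitter with λDU = 0.3 per 10⁶ hours (Table 3) and an annual functional test gives a PFD of roughly 1.3·10⁻³. The values in the sketch below are assumptions chosen for the example.

```python
# Hedged illustration: single-component first-order approximation PFD ≈ lam_DU * tau / 2.
lam_DU = 0.3e-6          # pressure transmitter, dangerous undetected failures per hour (Table 3)
tau = 8760.0             # functional test interval in hours (annual testing assumed)
PFD = lam_DU * tau / 2   # ~1.3e-3
```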

Equipment specific failure data reports prepared by manufacturers (or others) often provide λDU
estimates that are an order of magnitude (or even more) lower than those reported in generic data
handbooks. There may be several causes for such exaggerated claims of performance, including
imprecise definition of equipment- and analysis boundaries, incorrect failure classification or too
optimistic predictions of the diagnostic coverage factor (see e.g. [20]).

When studying the background data for generic failure rates (λDU) presented in data sources such
as OREDA and RNNP, it is found that these data will include both random hardware failures as
well as systematic failures. Examples of the latter include incorrect parameter settings for a
pressure transmitter, an erroneous output from the control logic due to a failure during software
modification, or a PSV which fails due to excessive internal erosion or corrosion. These are all
failures that are detectable during functional testing and therefore illustrate the fact that systematic
failures may well be part of the λDU for generic data.

Since failure rates provided by manufacturers frequently tend to exclude all types of failures
related to installation, commissioning or operation of the equipment (i.e. systematic type of
failures), a mismatch between manufacturer data and generic data appears. Our question then
becomes - since systematic failures inevitably will occur - why not include these failures in
predictive reliability analyses?

In order to elucidate the fact that the failure rate will comprise random hardware failures as well
as systematic failures, the parameter r has therefore been defined as the fraction of dangerous
undetected failures originating from random hardware failures. Rough estimates of the r factor are
given in the detailed data sheets in chapter 5. For a more thorough discussion and arguments
concerning the r factor, reference is made to [10].


2.4.2 The Coverage Factor, c

Modules often have built-in automatic self-test, i.e. on-line diagnostic testing to detect failures
prior to an actual demand 2. The fraction of failures being detected by the automatic self-test is
called the fault coverage and quantifies the effect of the self-test. Note that the actual effect on
system performance from a failure that is detected by the automatic self-test will depend on
system configuration and operating philosophy. In particular it should be considered whether the
detected failure is configured to only raise an alarm or alternatively bring the system to a safe
state. It is often seen that failures classified as dangerous detected only raise an alarm and in such
case it must be ensured that the failure initiates an immediate response in the form of a repair
and/or introduction of risk reducing measures.

In addition to the diagnostic self-test, an operator or maintenance crew may detect dangerous
failures incidentally in between tests. For instance, the panel operator may detect a transmitter that
is “stuck” or a sensor that has been left in by-pass. Similarly, when a process segment is isolated
for maintenance, the operator may detect that one of the valves will not close. The PDS method
also aims at incorporating this effect, and defines the total coverage factor, c, reflecting detection
both by automatic self-test and by the operator. Further, the coverage factor for dangerous failures is
denoted cD whereas the coverage factor for safe failures is denoted cS.

Critical failures that are not detected by automatic self-testing or by observation are assumed
either to be detectable by functional (proof) testing 3 or they are so called test independent failures
(TIF) that are not detected during a functional test but appear upon a true demand (see section 2.3
and chapter 4 for further description).

It should be noted that the term “detected safe failure” (of rate λSD) is interpreted as a failure
which is detected such that a spurious trip is actually avoided. Hence, a spurious closure of a
valve which is detected by, e.g., flow metering downstream of the valve, cannot be categorised as a
detected safe failure. On the other hand, drifting of a pressure transmitter which is detected by the
operator, such that a shutdown is avoided, will typically be a detected safe failure.

2.4.3 Beta-factors and CMooN

When quantifying the reliability of systems employing redundancy, e.g., duplicated or triplicated
systems, it is essential to distinguish between independent and dependent failures. Random
hardware failures due to natural stressors are assumed to be independent failures. However, all
systematic failures, e.g. failures due to excessive stresses, design related failures and maintenance
errors are by nature dependent (common cause) failures. Dependent failures can lead to
simultaneous failure of more than one (redundant) component in the safety system, and thus
reduce the advantage of redundancy.

Traditionally, the dependent or common cause failures have been accounted for by the β-factor
approach. The problem with this approach has been that for any M-out-of-N (MooN) voting
(M<N) the rate of dependent failures is the same, and thus the approach does not distinguish
between e.g. a 1oo2 and a 2oo3 voting. The PDS method extends the β-factor model, and
distinguishes between the voting logics by introducing β-factors which depend on the voting
configuration; i.e. β(MooN) = β · CMooN. Here, CMooN is a modification factor depending on the
voting configuration, MooN.

2) Also refer to IEC 61508-4, sections 3.8.6 and 3.8.7.
3) See also IEC 61508-4, section 3.8.5.
Standard (average) values for the β-factor are given in Table 7. Note that when performing
reliability calculations, application specific β-factors should preferably be obtained, e.g. by using
the checklists provided in IEC 61508-6, or by using the simplified method as described in
Appendix D of the PDS method handbook, [10].

Values for CMooN are given in Table 8. For a more complete description of the extended β-factor
approach of PDS, see [10].
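
To make the extended β-factor model concrete, the sketch below is an illustration added here (not taken from the handbook); it simply applies β(MooN) = β · CMooN to the rate of dangerous undetected failures, using CMooN values from Table 8 and, as an example, the suggested β for fire/gas detectors from Table 7.

```python
# Illustration of the extended beta-factor model: beta(MooN) = beta * C_MooN.
# C_MooN values below are from Table 8; the example beta and lambda_DU are taken
# from Tables 7 and 3 (fire/gas detectors); everything else is an assumption.
C_MOON = {(1, 2): 1.0, (1, 3): 0.5, (2, 3): 2.0, (1, 4): 0.3, (2, 4): 1.1}

def dangerous_undetected_ccf_rate(lam_DU, beta, m, n):
    """Rate of dangerous undetected common cause failures for an MooN voted group."""
    return beta * C_MOON[(m, n)] * lam_DU

lam_DU = 0.7e-6   # IR line gas detector, per hour (Table 3)
beta = 0.06       # fire/gas detectors (Table 7)

rate_1oo2 = dangerous_undetected_ccf_rate(lam_DU, beta, 1, 2)   # 4.2e-08 per hour
rate_2oo3 = dangerous_undetected_ccf_rate(lam_DU, beta, 2, 3)   # 8.4e-08 per hour (twice 1oo2)
```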

2.4.4 Safe Failure Fraction, SFF

The Safe Failure Fraction as described in IEC 61508 is given by the ratio between dangerous
detected failures plus safe failures and the total rate of failure; i.e. SFF = (λDD + λS) /(λD + λS).
The objective of including this measure (and the associated hardware fault tolerance; HFT) was to
prevent manufacturers from claiming excessive SILs based solely on PFD calculations. However,
experience has shown that failure modes that actually do not influence the main functions of the
SIS (ref. section 2.1) are frequently included in the safe failure rate so as to artificially increase
the SFF, [20].

It is therefore important to point out that when estimating the SFF, only failures with a potential to
actually cause a spurious trip of the component should be included among the safe failures. Non-
critical failures, such as a minor external leakage of hydraulic oil from a valve actuator, should not
be included.

The SFF figures presented in this handbook are based on reported failure mode distributions in
OREDA as well as some additional expert judgements. Higher (or lower) SFFs than given in the
tables may apply for specific equipment types and this should in such case be well documented,
e.g. by FMEDA type of analyses.

2.5 Main Data Sources


The most important data source when preparing this handbook has been the OREDA database and
handbooks. OREDA is a project organisation whose main purpose is to collect and exchange
reliability data among the participating companies (i.e. BP, ENI, ExxonMobil, ConocoPhillips,
Shell, Statoil, TOTAL and Gassco). A special thanks to the OREDA Joint Industry Project (JIP)
for providing access to an agreed set of the OREDA JIP data. For more information about the
OREDA project, for feedback to the OREDA JIP concerning the data, or for names of contact persons,
reference is made to http://www.oreda.com. Equipment for which reliability data are missing, or for
which additional data are desirable, should be reported to the OREDA project manager or one of the
participating OREDA companies, as this will provide valuable input to future OREDA data
collection plans.

Other important data sources have been;

• Recent data from the RNNP (Norwegian: “Risikonivået i Norsk Petroleumsindustri”)


project on safety critical equipment;
• Failure data and failure mode distributions from safety system manufacturers;
• Experience data from operational reviews on Norwegian offshore and onshore
installations;
• Other commercially published data handbooks such as Exida, [15] and the T-book, [16];
• Discussions and interviews with experts.

A complete list of data sources and references is given in chapter 6.


2.6 Using the Data in This Handbook


The data in this handbook provide best (average) estimates of equipment failure rates based on
experience gathered mainly throughout the petroleum industry.

The recommended data is based on a number of assumptions concerning safe state, fail safe
design, self-test ability, loop monitoring, NE/NDE design, etc. These assumptions are, for each
piece of equipment, described in the detailed data sheets in chapter 5. Hence, when using the data
for reliability calculations, it is important to consider the relevance of these assumptions for each
specific application.


3 RELIABILITY DATA SUMMARY

3.1 Topside Equipment


Tables 3 to 8 summarise the input data to be used in reliability analyses. The definitions of the
column headings relate to the parameter definitions given in sections 2.2 and 2.3. Some additional
comments on the values for PTIF, coverage and r are given in section 3.3.

Observe that λS follows from λD (third column of tables 3 to 5) together with λcrit, since
λcrit = λD + λS. The rates of undetected failures, λDU and λSU, follow from the given coverage
values cD and cS, i.e. λDU = λD · (1 - cD / 100%) and λSU = λS · (1 - cS / 100%). The safe failure
fraction can be calculated as SFF = ((λcrit - λDU) / λcrit) · 100%.
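
As a quick worked check (an illustration for this write-up, not part of the handbook), these relations reproduce the pressure transmitter row of Table 3; the tabulated rates are rounded to one decimal:

```python
# Worked check against the pressure transmitter row of Table 3 (rates per 10^6 hours).
lam_crit, lam_D, c_D, c_S = 1.3, 0.8, 0.60, 0.30

lam_S = lam_crit - lam_D                               # 0.5
lam_DU = lam_D * (1 - c_D)                             # 0.32, tabulated as 0.3
lam_SU = lam_S * (1 - c_S)                             # 0.35, tabulated as 0.4
SFF = (lam_crit - round(lam_DU, 1)) / lam_crit * 100   # ~77 %, as tabulated
```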

Data dossiers with comprehensive information for each component are given in chapter 5 as
referred to in tables 3 to 5.

Table 3 Failure rates, coverages and SFF for input devices

Input Devices

| Component | λcrit 1) | λD 1) | cD | cS | λDU 1) | λSU 1) | SFF | Ref. |
|---|---|---|---|---|---|---|---|---|
| Pressure switch | 3.4 | 2.3 | 15 % | 10 % | 2.0 | 1.0 | 41 % | Sect. 5.1.1 |
| Proximity switch, inductive | 5.7 | 3.5 | 15 % | 10 % | 3.0 | 2.0 | 47 % | Sect. 5.1.2 |
| Pressure transmitter | 1.3 | 0.8 | 60 % | 30 % | 0.3 | 0.4 | 77 % | Sect. 5.1.3 |
| Level (displacement) transmitter | 3.0 | 1.4 | 60 % | 30 % | 0.6 | 1.1 | 80 % | Sect. 5.1.4 |
| Temperature transmitter | 2.0 | 0.7 | 60 % | 30 % | 0.3 | 0.9 | 85 % | Sect. 5.1.5 |
| Flow transmitter | 3.7 | 1.5 | 60 % | 30 % | 0.6 | 1.5 | 84 % | Sect. 5.1.6 |
| Gas detector, catalytic | 5.0 | 3.5 | 50 % | 30 % | 1.8 | 1.1 | 64 % | Sect. 5.1.7 |
| Gas detector, IR point | 4.7 | 2.5 | 75 % | 50 % | 0.6 | 1.1 | 88 % | Sect. 5.1.8 |
| Gas detector, IR line | 5.0 | 2.8 | 75 % | 50 % | 0.7 | 1.1 | 86 % | Sect. 5.1.9 |
| Smoke detector | 3.2 | 1.2 | 40 % | 30 % | 0.7 | 1.4 | 78 % | Sect. 5.1.10 |
| Heat detector | 2.5 | 1.0 | 40 % | 40 % | 0.6 | 0.9 | 76 % | Sect. 5.1.11 |
| Flame detector | 6.5 | 2.7 | 70 % | 50 % | 0.8 | 1.9 | 88 % | Sect. 5.1.12 |
| H2S detector | 1.3 | 1.0 | 50 % | 30 % | 0.5 | 0.2 | 62 % | Sect. 5.1.13 |
| ESD push button | 0.8 | 0.5 | 20 % | 10 % | 0.4 | 0.3 | 50 % | Sect. 5.1.14 |

1) All failure rates given per 10⁶ hours

Table 4 Failure rates, coverages and SFF for control logic units

Control Logic Units – industrial PLC

| Component | λcrit 1) | λD 1) | cD 2) | cS 2) | λDU 1) | λSU 1) | SFF | Ref. |
|---|---|---|---|---|---|---|---|---|
| Analogue input (single) | 3.6 | 1.8 | 60 % | 20 % | 0.7 | 1.4 | 80 % | Sect. 5.2.1.1 |
| CPU (1oo1) | 17.6 | 8.8 | 60 % | 20 % | 3.5 | 7.0 | 80 % | Sect. 5.2.1.2 |
| Digital output (single) | 3.6 | 1.8 | 60 % | 20 % | 0.7 | 1.4 | 80 % | Sect. 5.2.1.3 |

Control Logic Units – programmable safety system

| Component | λcrit 1) | λD 1) | cD 2) | cS 2) | λDU 1) | λSU 1) | SFF | Ref. |
|---|---|---|---|---|---|---|---|---|
| Analogue input (single) | 3.2 | 1.6 | 90 % | 20 % | 0.16 | 1.3 | 95 % | Sect. 5.2.2.1 |
| CPU (1oo1) | 9.6 | 4.8 | 90 % | 20 % | 0.48 | 3.8 | 95 % | Sect. 5.2.2.2 |
| Digital output (single) | 3.2 | 1.6 | 90 % | 20 % | 0.16 | 1.3 | 95 % | Sect. 5.2.2.3 |

Control Logic Units – hardwired safety system

| Component | λcrit 1) | λD 1) | cD 2) | cS 2) | λDU 1) | λSU 1) | SFF | Ref. |
|---|---|---|---|---|---|---|---|---|
| Trip amplifier / analogue input (single) | 0.44 | 0.04 | 0 % | 0 % | 0.04 | 0.4 | 91 % | Sect. 5.2.3.1 |
| Logic (1oo1) | 0.33 | 0.03 | 0 % | 0 % | 0.03 | 0.3 | 91 % | Sect. 5.2.3.2 |
| Digital output (single) | 0.33 | 0.03 | 0 % | 0 % | 0.03 | 0.3 | 91 % | Sect. 5.2.3.3 |

1) All failure rates given per 10⁶ hours
2) For control logic units, the coverage c will mainly include failures detected by automatic self-testing. Casual observation of control logic failures is unlikely.

The following additional assumptions and notes apply for the above data on control logic units:

• A single system with analogue input, CPU/logic and digital output configuration is
generally assumed;
• For the input and output part, figures are given for one channel plus the common part of
the input/output card (except for hardwired safety system where figures for one channel
only are given);
• Single processing unit / logic part is assumed throughout;
• If the figures for input and output are to be used for redundant configurations, separate
input cards and output cards must be used since the given figures assume a common part
on each card;
• If separate Ex barriers or other interface devices are used, figures for these must be added
separately;
• The systems are generally assumed used in de-energised to trip functions, i.e. loss of
power or signal will result in a safe state.
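
Given the single analogue input–CPU/logic–digital output assumption in the list above, the parts of one channel act as a series structure, so their dangerous undetected failure rates can simply be added. The short sketch below is an illustration only (not a handbook calculation), using the programmable safety system figures from Table 4:

```python
# Illustration: lambda_DU for a single programmable safety system channel, modelled
# as a series structure of analogue input, CPU and digital output (Table 4 values,
# rates per 10^6 hours).
lam_DU_parts = {"analogue input (single)": 0.16, "CPU (1oo1)": 0.48, "digital output (single)": 0.16}
lam_DU_channel = sum(lam_DU_parts.values())   # 0.80 per 10^6 hours
```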


Table 5 Failure rates, coverages and SFF for final elements

Final elements

| Component | λcrit 1) | λD 1) | cD | cS | λDU 1) | λSU 1) | SFF | Ref. |
|---|---|---|---|---|---|---|---|---|
| ESV/XV incl. actuator (ex. pilot) | 5.3 | 3.0 | 30 % | 10 % | 2.1 | 2.1 | 60 % | Sect. 5.3.1 |
| Topside X-mas tree ESV incl. actuator (ex. pilot) | 2.0 | 1.1 | 30 % | 10 % | 0.8 | 0.8 | 60 % | Sect. 5.3.2 |
| Blowdown valve incl. actuator (ex. pilot) | 3.9 | 2.6 | 20 % | 0 % | 2.1 | 1.3 | 46 % | Sect. 5.3.3 |
| Pilot/solenoid valve | 3.0 | 1.1 | 30 % | 10 % | 0.8 | 1.7 | 73 % | Sect. 5.3.4 |
| Control valve (frequently operated) 2) | 6.9 | 4.4 | 50 % | 50 % | 2.2 | 1.3 | 68 % | Sect. 5.3.5 |
| Control valve (shutdown service only) 3) | 6.9 | 4.4 | 20 % | 20 % | 3.5 | 2.0 | 49 % | Sect. 5.3.5 |
| Pressure relief valve, PSV | 3.3 | 2.2 | 0 % | 10 % | 2.2 4) | 1.0 | 33 % | Sect. 5.3.6 |
| Deluge valve (complete) | 4.5 | 3.0 | 0 % | 0 % | 3.0 | 1.5 | 33 % | Sect. 5.3.7 |
| Fire damper (incl. solenoid valve) | 5.5 | 3.2 | 0 % | 0 % | 3.2 | 2.3 | 43 % | Sect. 5.3.8 |
| Circuit breaker (large) | 0.8 | 0.3 | 0 % | 0 % | 0.3 | 0.5 | 63 % | Sect. 5.3.9 |
| Relay | 0.5 | 0.2 | 0 % | 0 % | 0.2 | 0.3 | 60 % | Sect. 5.3.10 |

1) All failure rates are given per 10⁶ hours
2) Fail to close data for control valves applied for combined control and shutdown purposes. Failure rate for pilot/solenoid valve should be added.
3) Fail to close data for control valves applied only for shutdown (i.e. normally not operated). Failure rate for pilot/solenoid valve should be added.
4) The dangerous undetected failure rate applies for a fail to open failure within 20 % of the set point pressure. If a critical failure is defined as fail to open at a higher pressure, a reduced failure rate is expected. For a ‘fail to open before test pressure’ failure, λDU = 1.1·10⁻⁶ is suggested.

Table 6 below gives suggested values for the PTIF, i.e. the probability of a test independent failure
occurring upon a demand.

Table 6 PTIF for various components

| Component group | Component | PTIF | Comments (see section 3.3.1) |
|---|---|---|---|
| Input Devices | Pressure switch | 1·10⁻³ | When operating in clean medium |
| | Pressure switch | 5·10⁻³ | Unclean medium - clogging of sensing line possible |
| | Proximity switch | 1·10⁻³ | Based on expert judgement |
| | Process transmitters | 5·10⁻⁴ | Applies for pressure, level, temperature and flow transmitters |
| | Gas detector, catalytic | 5·10⁻⁴ | The PTIF values for the fire and gas detectors are given assuming that the detector is already exposed |
| | IR gas detector | 1·10⁻³ | |
| | Smoke detector | 1·10⁻³ | |
| | Heat detector | 1·10⁻³ | |
| | Flame detector | 1·10⁻³ | |
| | H2S detector | 5·10⁻⁴ | Catalytic H2S detector is assumed |
| | ESD push button | 1·10⁻⁵ | Previous SINTEF estimate |
| Control Logic Units | Standard industrial PLC – single system | 5·10⁻⁴ | |
| | Programmable safety system – single system | 5·10⁻⁵ | Mainly due to software errors |
| | Hardwired safety system – single system | 5·10⁻⁶ | |
| Final Elements 1) | ESV/XV and X-mas tree valves | 1·10⁻⁴ | Applies for a standard/average functional test |
| | Blowdown valve | 1·10⁻⁴ | Assuming a standard functional test |
| | Control valve | 1·10⁻⁴ | Assuming a standard functional test |
| | Pressure relief valve, PSV | 1·10⁻³ | Previous SINTEF estimate |
| | Deluge valve | 1·10⁻³ | SINTEF estimate |
| | Fire damper | 1·10⁻³ | SINTEF estimate |
| | Circuit breaker (large) | 5·10⁻⁵ | SINTEF estimate |
| | Relay | 5·10⁻⁵ | SINTEF estimate |

1) For all valves the PTIF applies for the complete valve including pilot/solenoid

Tables 7 and 8 give suggested values for the β-factor and the configuration factor CMooN
respectively. Note that the CMooN factors have been updated as compared to previous values, ref.
[12].

Regarding the suggested β-factors it should be pointed out that these are typical values. Any
application specific factors may be implemented in the estimates by e.g. applying the checklists in
IEC 61508 or the simplified method described in appendix D in [10]. Some beta values have been
slightly increased as compared to the figures in the 2006 edition, [12]. This is based on results
from operational reviews where it was observed that a fairly large proportion of the SIS failures
actually involved more than one component.

Table 7 β-factors for various components

| Component group | Component | β | Comment/source |
|---|---|---|---|
| Input devices | Pressure switch | 0.05 | Updated SINTEF estimates based on former values and additional knowledge from operational reviews |
| | Proximity switch | 0.05 | |
| | Process transmitters | 0.04 | |
| | Fire/gas detectors | 0.06 | |
| | ESD push button | 0.03 | |
| Control logic units | Standard industrial PLC | 0.07 | Updated SINTEF estimates based on additional judgements |
| | Programmable safety system | 0.05 | |
| | Hardwired safety system | 0.03 | |
| Final Elements 1) | ESV/XV incl. X-mas tree valves (main valve + actuator) | 0.03 | Updated SINTEF estimates based on former values and additional knowledge from operational reviews |
| | Blowdown valves (main valve + actuator) | 0.03 | |
| | Pilot valves on same valve | 0.10 | |
| | Pilot valves on different valves | 0.03 | |
| | Control valves | 0.03 | |
| | Pressure relief valve, PSV | 0.05 | |
| | Deluge valve | 0.03 | |
| | Fire damper | 0.03 | |
| | Relay | 0.03 | |
| | Circuit breaker | 0.03 | |

1) β value for (redundant) PSVs on the same equipment/vessel. For PSVs on different equipment, a value of 0.03 is suggested

Table 8 Numerical values for configuration factors, CMooN

| M \ N | N=2 | N=3 | N=4 | N=5 | N=6 |
|---|---|---|---|---|---|
| M=1 | C1oo2 = 1.0 | C1oo3 = 0.5 | C1oo4 = 0.3 | C1oo5 = 0.21 | C1oo6 = 0.17 |
| M=2 | - | C2oo3 = 2.0 | C2oo4 = 1.1 | C2oo5 = 0.7 | C2oo6 = 0.4 |
| M=3 | - | - | C3oo4 = 2.9 | C3oo5 = 1.8 | C3oo6 = 1.1 |
| M=4 | - | - | - | C4oo5 = 3.7 | C4oo6 = 2.4 |
| M=5 | - | - | - | - | C5oo6 = 4.3 |

Note that the CMooN factors have been updated as compared to the previous 2006 handbook, [12].
It should be pointed out that the CMooN factors are suggested values and not exact figures. C1oo5
and C1oo6 have been given to two decimal places in order to be able to distinguish the two
configurations. The reasoning behind the CMooN factors is further discussed in [10].


3.2 Subsea Equipment


Table 9 summarises the reliability input data for subsea equipment. For a more thorough
discussion of the data, reference is made to section 5.4. It should be noted that for the subsea
equipment, focus has been on dangerous failures and only values for the coverage factor for
dangerous failures, cD, are specified. Hence, the rate of undetected safe failures, λSU, is not given.
Furthermore, specific PTIF and β-factors for subsea components are not given. As a starting point,
estimates for topside equipment can be used for these unspecified parameters. They should,
however, be assessed on a case by case basis depending on their specific (subsea) application.

Similarly, the values given for the safe failure fraction (SFF) should be considered as indicative
only. Higher (or lower) SFFs may apply for specific equipment types and this should in such case
be documented separately.

Table 9 Failure rates for subsea equipment - input devices, control system units and output devices

Subsea equipment

| Component 1) | λcrit 2) | λD 2) | cD | λDU 2) | SFF |
|---|---|---|---|---|---|
| Pressure sensor | 0.62 | 0.37 | 60 % | 0.15 | 76 % |
| Temperature sensor | 0.30 | 0.18 | 60 % | 0.07 | 76 % |
| Combined pressure and temperature sensor | 2.5 | 1.3 | 60 % | 0.50 | 80 % |
| Flow sensor | 2.0 | 1.4 | 60 % | 0.56 | 72 % |
| ESD/PSD logic including input/output (located topside) | 16.0 | 8.0 | 90 % | 0.80 | 95 % |
| MCS - Master control station (located topside) | 9.4 | 2.8 | 60 % | 1.1 | 88 % |
| Umbilical hydraulic/chemical line (per line) | 0.31 | 0.22 | 80 % | 0.04 | 87 % |
| Umbilical power/signal line (per line) | 0.51 | 0.36 | 80 % | 0.07 | 86 % |
| SEM – subsea electronic module | 9.9 | 4.0 | 70 % | 1.2 | 84 % |
| Manifold isolation valve | 1.32 | 0.40 | 0 % | 0.40 | 70 % |
| Solenoid control valve (in subsea control module) | 0.40 | 0.16 | 0 % | 0.16 | 60 % |
| Production master valve (PMV), Production wing valve (PWV) | 0.26 | 0.18 | 0 % | 0.18 | 30 % |
| Chemical injection valve (CIV) | 0.37 | 0.22 | 0 % | 0.22 | 40 % |
| Downhole safety valve (DHSV) | 5.6 | 3.2 | 0 % | 3.2 | 42 % |
| Subsea isolation valve (SSIV) | 0.52 | 0.21 | 0 % | 0.21 | 60 % |

1) Further reference is made to Table 11 in section 5.4 for additional details on the recommended data
2) All failure rates are given per 10⁶ hours

3.3 Comments to the PDS Data
The data presented in Table 3 – Table 9 are mainly based on operational experience (OREDA,
RNNP, etc.) and as such reflect some kind of average expected field performance. It is stressed
that these generic data should not be used uncritically – if valid application specific data are
available, they should be preferred. When comparing the data in this handbook with figures
found in manufacturer certificates and reports, major gaps will often be found. As discussed in
section 2.4.1 such data sources often exclude failures caused by inappropriate maintenance, usage
mistakes and design related systematic failures. Care should therefore be taken when data from
certificates and similar reports are used for predicting reliability performance in the field.

For some equipment types and some of the parameters, the listed data sources provide limited
information and additional expert judgement must therefore be applied. In particular for the PTIF,
the coverage c and the r factor, the data sources are scarce, and some arguments are therefore
required concerning the recommended values.

3.3.1 Probability of Test Independent Failures (PTIF)

General
No testing is 100% perfect and some dangerous undetected failures may therefore be present also
after a functional test. The suggested PTIF values attempt to quantify the likelihood of such failures
being present after a test. Obviously, such values will depend heavily on the given application, and
specific measures may have been introduced to minimise the likelihood of test independent
failures. Hence, it may be argued that the given values should be reduced (or increased). This is further
discussed in appendix D of the method handbook, [10].
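
Since CSU = PFD + PTIF (Table 2), the PTIF values of Table 6 add directly to the PFD in the overall loss-of-safety measure. A hedged illustration with assumed values: a single pressure transmitter tested annually, with PFD ≈ 1.3·10⁻³ (cf. the sketch in section 2.4.1) and PTIF = 5·10⁻⁴ from Table 6, gives CSU ≈ 1.8·10⁻³.

```python
# Hedged illustration: combining PFD and P_TIF into the critical safety unavailability.
PFD = 1.3e-3      # single pressure transmitter, annual functional test (approximation)
P_TIF = 5.0e-4    # process transmitters, Table 6
CSU = PFD + P_TIF # ~1.8e-3
```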

Process Switch
The proposed PTIF of 10⁻³ applies to a pressure switch operating in clean medium, and the main
contribution is assumed to be failures during human intervention (e.g. by-pass, wrong set point,
etc.). If the switch is operating in unclean medium and clogging of the sensing line is a possibility,
the PTIF may be increased to 5·10⁻³.

Proximity Switch
For proximity switches a relatively high PTIF of 10⁻³ has been suggested. The main contributors to
this TIF are assumed to be failures during installation and maintenance, in particular mounting
and misalignment problems related to the interacting parts.

Process Transmitters
Transmitters have a “live signal”. Thus, blocking of the sensing line may be detected by the
operator and is included in the λdet. Also a significant part of the failures of the transmitter itself
(all "stuck" failures) may be detected by the operator and therefore contribute to λdet. Thus, the
PTIF for transmitters is expected to be less than that of the switch and a value of 5·10⁻⁴ has been
suggested. In previous editions of the handbook, smart and field bus transmitters have, due to
more complete self-test, been given an even smaller PTIF. However, since smart transmitters also
have some additional software, other test independent failures may be introduced. Consequently,
one common PTIF is given.

Gas Detectors
In previous versions of the PDS data handbook the given PTIF values for gas detectors
differentiated with respect to detector type, the size of the leakage, ventilation, and other
conditions expected to influence the PTIF probability for detectors. The PTIF values were then
given as intervals depending on the state of the conditions listed. It is now assumed that the
detector is already exposed and the present values generally represent lower end values of the
previously given intervals, as this represented the “best conditions”. Note that catalytic gas


detectors and H2S detectors have been given a somewhat lower PTIF than the IR gas detectors. The
catalytic gas detectors have a simpler design which is assumed to result in a lower probability of
test independent systematic failures.

Fire Detectors
PTIF values are given based on the assumptions that (1) a detector with the "appropriate" detection
principle is applied (e.g. that smoke detectors are applied where smoke fires are expected and
flame detectors where flame fires are expected), and (2) the detector is already exposed to the
flame/heat or smoke (depending on detector type). A PTIF value of 10⁻³ has been suggested for all
fire detectors.

Control Logic Units


The PTIF for the control logic is mainly due to software errors. For dedicated high quality safety
systems, the overall estimate equals 5·10⁻⁵, i.e., the required action will fail to be carried out
successfully in 1 out of 20 000 demands due to (an undetectable) software error. For hardwired
safety systems without software, the corresponding estimate is a factor 10 lower, whereas for
standard industrial PLC systems, the estimate PTIF = 5·10⁻⁴ applies. Furthermore, it must be
assumed that the quality assurance program during design and modifications is more extensive for
programmable safety systems (and hardwired safety systems) than for a standard industrial PLC.
Consequently, this also indicates a lower PTIF for a programmable (and hardwired) safety system
than for a standard industrial PLC.

Valves
The PTIF for ESV/XVs will depend on the quality of the functional testing performed. Here, a
standard functional test has been assumed where the valve is fully closed but not tested for
internal leakage. In such case a PTIF value of 10⁻⁴ is suggested. For control valves used for
shutdown purposes and blowdown valves a PTIF of 10⁻⁴ is also suggested. All these values include
PTIF for the pilot valve. For PSVs a relatively high PTIF value of 10⁻³ has been suggested due to the
possibility of human failures related to incorrect setting/adjustment of the PSV.

3.3.2 Coverage

General
As compared to the ’03, ‘04 and ’06 editions of the PDS Reliability data handbook, some of the
coverage factors have been updated. The reasoning behind this is partly discussed below. The
discussion is mainly limited to dangerous failures.

Switches and Transmitters


For process switches and proximity switches a total coverage of 15 % has been assumed. This
assumes 5 % coverage due to line monitoring of the connections and an additional 10 % detection
of dangerous failures by operator observation during operation.

For process transmitters, a 60% coverage for dangerous failures has been assumed. This is based
on the self-test implemented in the transmitter as well as casual observation by the control room
operator. The latter assumes that the transmitter signal can be observed on the VDU and compared
with other signals, so that e.g. a stuck or drifting signal can be revealed. If a higher coverage is
claimed, e.g. due to automatic comparison between transmitters, this should be specifically documented.
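
To make the role of the coverage factor explicit, the sketch below shows how an assumed coverage splits a total dangerous failure rate into detected and undetected parts, λDU ≈ (1 − cD)·λD, which is consistent with how the recommended λDU values in the data dossiers of chapter 5 relate to λD and cD. The numbers used correspond to the pressure transmitter dossier (section 5.1.3); the snippet itself is only an illustration and not part of the handbook data.

```python
# Illustration: splitting a dangerous failure rate by the diagnostic/observation
# coverage, lambda_DU ~= (1 - c_D) * lambda_D. Values below correspond to the
# pressure transmitter dossier (lambda_D = 0.8 per 10^6 hrs, c_D = 0.60).

def split_dangerous_rate(lambda_d: float, c_d: float) -> tuple:
    """Return (lambda_DD, lambda_DU), in the same unit as lambda_d."""
    lambda_dd = c_d * lambda_d          # detected by self-test / operator observation
    lambda_du = (1.0 - c_d) * lambda_d  # revealed only by functional test or a true demand
    return lambda_dd, lambda_du

lambda_dd, lambda_du = split_dangerous_rate(lambda_d=0.8, c_d=0.60)
print(f"lambda_DD = {lambda_dd:.2f}, lambda_DU = {lambda_du:.2f} per 10^6 hrs")
# -> lambda_DD = 0.48, lambda_DU = 0.32 (rounded to 0.3 in the dossier)
```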

Fire and Gas Detectors


For all detectors, the given coverage values apply for analogue sensors (where the value/reading
can be monitored by the operator in the CCR). IR gas, IR flame and UV flame detectors have
built-in monitoring and self-test of electronics and optics, and therefore have a relatively high
coverage. Catalytic gas detectors normally have limited built-in self-test. The same applies for
smoke and heat detectors. Hence, these detector types will have a lower coverage.

Control Logic Units


For control logic no updated OREDA data has been available and the quantitative input from the
safety system vendors has focused on the rate of dangerous undetected failures and the failure rate
distribution between the functional parts. Therefore the coverage values are mainly based on
judgements and discussions with experts.

For a standard industrial PLC (single system) the coverage factor for dangerous failures, cD, has
been set lower than for a SIL certified programmable safety system. For safe failures, the
coverage factor is low, since it is assumed that upon detection of such a failure (e.g. loss of signal)
the single safety system should normally go to a safe state (i.e. a shutdown). It should be noted
that if the safety system is redundant, the rate of undetected safe (i.e. spurious trip) failures may
be reduced significantly by the use of voting.

The hardwired safety system is assumed to be a fail-safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected, or a detected failure will result in a trip action (and is
then classified as SU). Hence, the coverage for both dangerous and safe failures has been assumed
to be zero. Note that this applies to single systems; as a consequence, hardwired safety systems are
often voted 2oo2 in order to avoid spurious trip failures.

Valves
No automatic self-test for valves is assumed. For ESV/XV valves the coverage for dangerous
failures has been slightly increased to 30% due to information from OREDA phase V-VII, where
it appears that a high fraction of dangerous failures (more than 50%) are detected between tests
by operator observation or other methods. It should be noted that this is not automatic diagnostic
coverage (as e.g. defined in IEC 61508), but it does imply that dangerous faults are detected
between tests. For valves that are operated infrequently, the coverage will be lower, and the
cD for blowdown valves has therefore been set to 20%. Based on information from OREDA, the
coverage for safe failures has been set to 10% for ESV/XV and 0% for blowdown valves.

For control valves used also for shutdown purposes, a relatively high coverage of 50% has been
estimated based on the registered observation methods for the relevant failure modes in OREDA.
It is then implicitly assumed that the control valve is frequently operated resulting in a relatively
high coverage.

Occasionally, e.g. on some onshore plants, selected control valves may be used solely for
shutdown purposes (i.e. not normally operated). In this case the valves will be operated
infrequently, resulting in a significantly lower coverage factor. For control valves used only as
shutdown valves, a reduced coverage of 20% is therefore suggested.

For PSV valves and deluge valves no coverage has generally been assumed.

3.3.3 Fraction of Random Hardware Failures (r)

General
Based on input from discussions with experts as well as a study of available OREDA data,
estimates of r have been established. As discussed previously, r is the fraction of dangerous
undetected (DU) failures that can be “explained” by random hardware failures (hence 1-r is the
fraction of DU failures that can be explained by systematic failures). Below, a brief discussion of
the r values suggested in the detailed data sheets is given.

Process Switch
For process switches the reported failure causes from OREDA are scarce and the r has been
estimated by expert judgement to be approximately 50%.

Process Transmitters
Data from OREDA on critical transmitter failures, results from operational reviews as well as
discussions with experts, all indicate that a significant proportion of the critical dangerous failures
for transmitters are caused by factors such as “excessive vibration”, “erroneous maintenance”
(e.g. ‘wrong calibration’, ‘erroneous specification of measurement area’ and ‘left in inhibit’) and
“incorrect installation”. As seen, all of these are examples of systematic failures which, according to
OREDA, are detectable (either by casual observation or during functional testing/maintenance).
Based on the observed failure cause distribution, an r = 30% has therefore been proposed.

Detectors
When going through data from OREDA phase V and VI for fire and gas detectors, it is found that
for some 40% of the critical failures the failure cause is reported as being due to ‘expected wear
and tear’, whereas some 60% of the critical failures are due to ‘maintenance errors’. When going
into more detail on the failure mechanisms, it is seen that the failures are described by e.g. ‘out of
adjustment’ (30%), ‘general instrument failure’ (28%), ‘contamination’ (21%), ‘vibration’ (10%)
and ‘maintenance/external/others’ (11%). Even though ‘contamination’ (i.e. typically a dirty lens) and
instrument failure can partly be explained by expected wear and tear, it is seen that many of the
critical failures are systematic ones. Based on this, an r = 40% has been proposed.

Control Logic
For control logic no updated OREDA data is available on failure causes and the proposed r values
are therefore entirely based on expert judgements. It has been assumed that for a standard
industrial PLC the major part of the failures can be explained by (systematic) software related
errors. Hence, a small r of 10% has been proposed. On the other hand, for a hardwired safety
system, it is assumed that a large part of the failure rate is due to random hardware failures, and a
large r of 80% has been suggested.

Valves
The reported failure causes in OREDA for critical failures are somewhat scarce and therefore
additional expert judgement has to be applied. The types of valve failures that are typically
revealed upon functional testing include a stuck valve, insufficient actuator force, the valve not
shutting off tightly due to excessive erosion or corrosion (unclean medium), incorrect installation,
etc. Several of these failures represent (detectable) systematic failures; hence it is evident that the
r is significantly lower than 1 (which would correspond to only random hardware failures).

For ESV/XV and X-mas tree valves an r = 50% has been proposed, mainly based on expert
judgement and reported failure causes for other types of valves. For pilot valves, there are more
reported failure causes and these indicate a relatively high proportion of systematic failures. Here
an r equal to 40% has been suggested based on the reported OREDA data.

For control valves used for shutdown purposes, a somewhat higher proportion of ‘wear and tear’
failures are expected, and therefore an r equal to 60% has been proposed. Reported failure causes
for deluge valves also indicate a relatively high proportion of ‘wear and tear’ related failures, and an
r equal to 60% has been proposed also for deluge valves.

For PSV valves, limited data on failure causes is available from OREDA, and an r = 50% has
been suggested.

3.4 Reliability Data Uncertainties – Upper 70% Values

3.4.1 Data Uncertainties

The failure rates given in this handbook are best (mean) estimates based on the available data
sources listed in section 2.5. The data in these sources have mainly been collected on oil and gas
installations where environment, operating conditions and equipment types are comparable, but
not at all identical. The presented data are therefore associated with uncertainties due to factors
such as:

• The data collection itself; inadequate failure reporting, classification or data interpretation.
• Variations between installations; the failure rates are highly dependent upon the operating
conditions, and the equipment make will also vary between installations.
• Relevance of data / equipment boundaries; what components are included / not included in
the reported data? Have equipment parts been repaired or simply replaced, etc.?
• Assumed statistical model; is the standard assumption of a constant failure rate always
relevant for the equipment type under consideration?
• Aggregated operational experience; what is the total amount of operational experience
underlying the given estimates?

The last bullet, concerning the amount of operational experience, is related to the possibility of
establishing a confidence interval for the failure rate. Instead of only specifying a single mean
value, an interval likely to include the parameter is given. How likely the interval is to contain the
parameter is determined by the confidence level. E.g. a 90% confidence interval for λDU may be
given by: [0.1·10⁻⁶ per hour, 5·10⁻⁶ per hour]. This means that we are 90% confident that the
failure rate will lie within this interval. It is also possible to specify one-sided confidence intervals
where the lower bound of the interval is zero. E.g. a one-sided 70% interval for λDU may be given
by: [0, 4·10⁻⁶ per hour], implying that we can be 70% certain that the failure rate is lower than
4·10⁻⁶ per hour.
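
To make the construction of such one-sided intervals concrete, the sketch below computes upper confidence bounds for a failure rate from aggregated surveillance time and an observed number of failures, using the standard chi-square relation for a constant failure rate (homogeneous Poisson) model. The input numbers are purely illustrative; this is only a sketch of the type of calculation behind the upper 70% values in section 3.4.2, not the exact procedure used there.

```python
# Sketch: one-sided upper confidence bound for a constant failure rate.
# With n failures observed over an aggregated time T (hours), the upper bound
# at confidence level 'conf' is  chi2.ppf(conf, 2n + 2) / (2T).
from scipy.stats import chi2

def upper_bound(n_failures: int, hours: float, conf: float) -> float:
    """One-sided upper confidence bound for the failure rate (per hour)."""
    return chi2.ppf(conf, 2 * n_failures + 2) / (2.0 * hours)

n, T = 2, 1.0e6                      # illustrative: 2 DU failures over 10^6 hours
print(f"mean      = {n / T:.2e} per hour")
print(f"upper 70% = {upper_bound(n, T, 0.70):.2e} per hour")
print(f"upper 95% = {upper_bound(n, T, 0.95):.2e} per hour")
```

With these illustrative inputs, the mean estimate is 2.0·10⁻⁶ per hour, while the upper 70% and 95% bounds come out at roughly 3.6·10⁻⁶ and 6.3·10⁻⁶ per hour, respectively.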

In particular, in IEC 61508-2, section 7.4.7.4, it is stated that any failure rate data based on
operational experience should have a confidence level of at least 70% (a similar requirement is
found in IEC 61511-1, section 11.9.2). Hence, IEC 61508 and IEC 61511 indicate that when using
historic data one should be conservative, and the recommended approach is to choose the upper
70% confidence value for the failure rate, as illustrated in Figure 2 below.

[Figure 2  Illustration of a failure rate with a confidence level of 70%: the one-sided 70% confidence range for λ extends from 0, past the mean λ, up to the conservative (upper 70%) λ.]

Some data sources, such as OREDA, provide confidence intervals for the failure rate estimates,
whereas most sources, including this handbook, provide mean values only. However, in the next
section an attempt has been made to indicate failure rate values with a confidence level of at least
70% as required in the IEC 61508/61511 standards.

3.4.2 Upper 70% Values

When looking in more detail at the data dossiers in chapter 5, it is seen that there is a varying
amount of operational experience underlying the failure rate estimates. Hence, there will also be a
varying degree of confidence associated with the given data. Based on the aggregated operational
time, number of dangerous failures and some additional expert judgement, an attempt has been
made, whenever possible, to establish a one-sided 70% confidence interval and thereby provide
some upper 70% values for the dangerous undetected failure rate. The result of this exercise is
summarised in Table 10 below (done only for the topside data, where the most detailed
information has been available).

Table 10  Estimated upper 70% confidence values for topside equipment

  Component group       Component                                          λDU (mean) 1)   λDU (70%) 1)   Comments
  Input Devices         Pressure switch                                    2.0·10⁻⁶        4.8·10⁻⁶
                        Proximity switch                                   3.0·10⁻⁶        -              Insufficient data available
                        Pressure transmitter                               0.3·10⁻⁶        0.5·10⁻⁶
                        Level transmitter                                  0.6·10⁻⁶        1.2·10⁻⁶
                        Temperature transmitter                            0.3·10⁻⁶        0.6·10⁻⁶
                        Flow transmitter                                   0.6·10⁻⁶        1.0·10⁻⁶
                        Gas detector, catalytic                            1.8·10⁻⁶        2.4·10⁻⁴
                        IR gas detector, point                             0.6·10⁻⁶        0.9·10⁻³
                        IR gas detector, line                              0.7·10⁻⁶        1.3·10⁻⁶
                        Smoke detector                                     0.7·10⁻⁶        0.9·10⁻⁶
                        Heat detector                                      0.6·10⁻⁶        0.9·10⁻⁶
                        Flame detector                                     0.8·10⁻⁶        1.2·10⁻⁶
                        H2S detector                                       0.5·10⁻⁶        0.8·10⁻⁶
                        ESD push button                                    0.5·10⁻⁶        1.1·10⁻⁶
  Control Logic Units   AI / CPU / DO                                      -               -              Insufficient data available
  Final Elements        ESV/XV (ex. pilot)                                 2.1·10⁻⁶        2.8·10⁻⁶
                        X-mas tree valve (ex. pilot)                       0.8·10⁻⁶        1.3·10⁻⁶
                        Blowdown valve (ex. pilot)                         2.1·10⁻⁶        2.8·10⁻⁶
                        Pilot/solenoid valve                               0.8·10⁻⁶        1.1·10⁻⁶
                        Control valve (ex. pilot, frequently operated)     2.2·10⁻⁶        3.5·10⁻⁶
                        Control valve (ex. pilot, shutdown service only)   3.5·10⁻⁶        5.5·10⁻⁶
                        Pressure relief valve, PSV                         2.2·10⁻⁶        3.2·10⁻⁶
                        Deluge valve                                       3.0·10⁻⁶        5.7·10⁻⁶
                        Fire damper                                        3.2·10⁻⁶        5.3·10⁻⁶
                        Circuit breaker / relay                            -               -              Insufficient data available
                        Downhole safety valve                              3.2·10⁻⁶        5.0·10⁻⁶       Based mainly on RNNS data

  1) All failure rates given per hour

Some comments to the above table should be made:

• Establishing confidence intervals based on data from different sources and different
installations is not a straightforward task. The suggested upper 70% values should
therefore be taken as rough estimates only.
• As discussed in section 2.4.1, the generic data presented in this handbook include failure
mechanisms that are frequently excluded from e.g. manufacturer failure reports and
certificates. As such, the mean failure rates given in Table 3-5 are considered
representative when predicting the expected risk reduction from the equipment. Using the
upper 70% confidence values presented above should therefore be considered as a way of
increasing the robustness of the results e.g. when performing sensitivity analyses.
• In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in
the operating phase”, [23], a procedure for updating failure rates in operation is described.
For this purpose a conservative estimate for the λDU is required. Unless other equipment
specific values are available, the above upper 70% values can then be applied.

3.5 What is “Sufficient Operational Experience“? – Proven in Use


As an alternative to developing a product fully in line with the systematic capability requirements
given in IEC 61508, manufacturers may claim “proven in use” based on operational experience
and return data for a specific piece of equipment. A question which frequently comes up is “when
has sufficient operational experience been gained in order to claim proven in use?”

In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in the
operating phase”, [23], it has been discussed how much operational experience is required before
a reasonable confidence in a new failure rate estimate can be established. For SIS component data
(for detectors, sensors and valves) from OREDA, it can be found that the upper 95% confidence
limit for the rate of DU-failures is typically some 2–3 times the mean value of the failure rate,
[23], [24]. A suggested “cut-off” criterion for claiming proven in use can then be that the gathered
operational experience shall be sufficient to establish a failure rate estimate with comparable
confidence, i.e. the upper 95% confidence limit for λDU shall be within 2–3 times the mean value.

Based on this criterion and further work from [23] and [24], some suggested rules for claiming
“proven in use” for a given piece of field equipment are:

• Minimum aggregated time in service should be 2.5 million (2.5·10⁶) operational hours, or
at least 2 dangerous undetected failures 4) should have been registered for the considered
observation period;
• Operational data should be available from at least 2 installations with comparable
operational environments;
• The data should be collected from the useful period of life of the equipment (typically this
implies that the first 6 months of operation should be excluded);
• A systematic data collection and reporting system should be implemented to ensure that all
failures have been formally recorded;
• It should be ensured that all equipment units included in the sample have been activated
(i.e. tested or demanded) at least once during the observation period (in order to ensure
that components that have never been activated are not counted in).
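
As a rough numerical illustration of the cut-off criterion above (again assuming a constant failure rate and the chi-square bound; this is a sketch for illustration, not a procedure prescribed by the handbook or the IEC standards), one can check how the ratio between the upper 95% bound and the mean estimate depends on the number of registered DU failures:

```python
# Sketch: ratio between the one-sided upper 95% confidence bound and the mean
# failure rate estimate, for a given number of DU failures. The ratio is
# independent of the aggregated time; the failure counts below are hypothetical.
from scipy.stats import chi2

def upper95_over_mean(n_failures: int) -> float:
    """Upper 95% bound divided by the mean estimate (requires n_failures >= 1)."""
    return chi2.ppf(0.95, 2 * n_failures + 2) / (2.0 * n_failures)

for n in (2, 3, 5):
    print(f"{n} DU failures -> upper95 / mean = {upper95_over_mean(n):.2f}")
```

With these hypothetical counts the ratio falls from about 3.1 at two failures to about 2.1 at five, i.e. broadly in line with the 2–3 times mean behaviour reported for the OREDA component data in [23] and [24].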

Additional requirements are given in the IEC standards. It should be noted that whereas IEC
61508 uses the term ‘proven in use’, IEC 61511 applies the term ‘prior use’. However, neither
IEC 61508 nor IEC 61511 quantifies the required amount of operating experience, but they state that
for field equipment there may be extensive operating experience that can be used as a basis for the
evidence [for prior use, ref. IEC 61511-1, section 11.5.3].

It may be argued that the above requirement concerning aggregated time in service is difficult to
fulfil for equipment other than e.g. fire and gas detectors. However, an important part of claiming
proven in use is to have a clear understanding of failure mechanisms, how the failure is detected
and repaired and what maintenance activities are required in order to keep the equipment in an “as
good as new condition”, [21]. For this purpose considerable operational experience is necessary
and focus should therefore be on improved data collection and failure registration. Furthermore, it
will require that the manufacturers obtain feedback on operational performance from the
operators, also beyond the warranty period of the equipment.

4) In general, an increasing number of failures will result in a narrower confidence interval, i.e. a higher
confidence in the estimated mean value. Hence, experienced DU failures may “compensate” for limited
operational experience (but will anyhow require significant operational time if a low failure rate is to be
claimed).

4 MAIN FEATURES OF THE PDS METHOD


This section briefly presents the main characteristics of the PDS method, the failure classification
scheme and the reliability performance measures. Please note that the objective is not to give
a full and detailed presentation of the method, but to give an introduction to the model taxonomy
and the basic ideas. For a more comprehensive description of the PDS method and the detailed
formulas, see the updated PDS method handbook, [10].

4.1 Main Characteristics of PDS


Some main characteristics of the PDS method are:

• The method gives an integrated approach to random hardware and systematic failures. Thus,
  the model accounts for relevant failure causes such as:
  - normal ageing
  - software failures
  - stress induced failures
  - design failures
  - installation failures
  - operational related failures
• The model includes all relevant failure types that may occur, and explicitly accounts for
  dependent (common cause) failures and the effect from different types of testing
  (automatic/self-test as well as manual observation).
• The model distinguishes between the ways a system can fail (failure mode), such as
  fail-to-operate, spurious operation and non-critical failures.
• A main benefit of the PDS taxonomy is the direct relationship between failure causes and the
  measures used to improve safety system performance.
• The method is simple and structured:
  - highlighting the important factors contributing to loss of safety and spurious operation
  - promoting transparency and communication
• As stressed in IEC 61508, it is important to incorporate the complete safety function when
  performing reliability analyses. This is a core issue in PDS; it is function-oriented, and the
  whole path from the sensors, via the control logic to the actuators is taken into consideration
  when modelling the system.
• The PDS method has a somewhat different approach to systematic failures compared to IEC
  61508. Whereas IEC 61508 only quantifies part of the total failure rate, represented by the
  random hardware failures, PDS also attempts to quantify the contribution from systematic
  failures (see Figure 3 below) and therefore gives a more complete picture of how the
  equipment is likely to operate in the field.

4.2 Failure Causes and Failure Modes


Failures can be categorised according to failure cause, and the IEC standards differentiate between
random hardware failures and systematic failures. In PDS the same split is made, but a somewhat
more detailed breakdown of the systematic failures has been performed, as indicated in Figure 3.

[Figure 3  Failure classification by cause of failure. Failures are split into random hardware failures (ageing failures, i.e. random failures due to natural and foreseen stressors) and systematic failures. The systematic failures are further divided into: software faults (e.g. programming errors, compilation errors, errors during software update); design related failures (e.g. inadequate or erroneous specification or implementation); installation failures (e.g. gas detector cover left on after commissioning, valve installed in the wrong direction, incorrect sensor location); excessive stress failures (e.g. excessive vibration, unforeseen sand production, too high temperature); and operational failures (e.g. valve left in wrong position, sensor calibration failure, detector in override mode).]

The following failure categories (causes) are defined:

Random hardware failures are failures resulting from the natural degradation mechanisms of the
component. For these failures it is assumed that the operating conditions are within the design
envelope of the system.

Systematic failures are failures that can be related to a particular cause other than natural
degradation and foreseen stressors. Systematic failures are due to errors made during
specification, design, operation and maintenance phases of the lifecycle. Such failures can
therefore normally be eliminated by a modification, either of the design or manufacturing process,
the testing and operating procedures, the training of personnel or changes to documentation.

There are several possible schemes for classifying systematic failures. Here, a further split into
five categories has been suggested:

• Software faults may be due to programming errors, compilation errors, inadequate testing,
unforeseen application conditions, change of system parameters, etc. Such faults are present
from the point where the incorrect code is developed until the fault is detected either through
testing or through improper operation of the safety function. Software faults can also be
introduced during modification to existing process facilities, e.g. inadequate update of the
application software to reflect the revised shutdown sequences or erroneous setting of a high
alarm outside its operational limits.
• Design related failures are failures (other than software faults) introduced during the design
phase of the equipment. It may be a failure arising from incorrect, incomplete or ambiguous
system or software specification, a failure in the manufacturing process and/or in the quality
assurance of the component. Examples are a valve failing to close due to insufficient actuator
force or a sensor failing to discriminate between true and false demands.
• Installation failures are failures introduced during the last phases prior to operation, i.e. during
installation or commissioning. If detected, such failures are typically removed during the first
months of operation, and they are therefore often excluded from databases. These failures may
however remain inherent in the system for a long period and can materialise
during an actual demand. Examples are erroneous location of e.g. fire/gas detectors, a valve
installed in the wrong direction or a sensor that has been erroneously calibrated during
commissioning.
• Excessive stress failures occur when stresses beyond the design specification are placed upon
the component. The excessive stresses may be caused either by external causes or by internal
influences from the medium. Examples may be damage to process sensors as a result of
excessive vibration or valve failure caused by unforeseen sand production.
• Operational failures are initiated by human errors during operation or maintenance/testing.
Examples are loops left in the override position after completion of maintenance or a process
sensor isolation valve left in closed position so that the instrument does not sense the medium.

The PDS method considers three failure modes:

• Dangerous (D). Safety system/module does not operate on demand (e.g. sensor stuck upon
demand)
• Safe (S). Safety system/module may operate without demand (e.g. sensor provides signal
without demand – potential spurious trip)
• Non-Critical (NONC). Main functions not affected (e.g. sensor imperfection, which has no
direct effect on control path)

The first two of these failure modes, dangerous (D) and safe (S) are considered "critical" in the
sense that they have a potential to affect the operation of the safety function. The safe failures
have a potential to cause a trip of the safety function, while the dangerous failures may cause the
safety function not to operate upon a demand. The failure modes above are further split into the
following categories:

• Dangerous undetected (DU)
  Dangerous failures not detected by automatic self-test or personnel; i.e. only detected by a
  functional test (or a true demand)
• Dangerous detected (DD)
  Dangerous failures detected by automatic self-test or personnel
• Safe undetected (SU)
  Safe (spurious trip) failures not detected by automatic self-test or personnel.
• Safe detected (SD)
  Safe (spurious trip) failures detected by automatic self-test or personnel, so that an actual
  trip can be prevented.

4.3 Reliability Performance Measures


This section presents the main measures for loss of safety used in PDS. All these reflect safety
unavailability of the function, i.e. the probability of a failure on demand. The measure for loss of
safety used in the IEC standards for systems operating in low demand mode is denoted PFD
(Probability of Failure on Demand), and this is also one of the measures adopted in the PDS method.

Note that for high demand mode systems IEC 61508 uses PFH (Probability of Failure per Hour)
as the measure for loss of safety. PFH is not discussed here but is treated separately in the updated
method handbook, [10].

4.3.1 Contributions to Loss of Safety

The potential contributors to loss of safety (safety unavailability) can be split into the following
categories:

1) Unavailability due to dangerous undetected (DU) failures. For a single component, these
failures occur with rate λDU. The average period of unavailability due to such a failure is τ/2
(where τ = period of functional testing), since the failure can have occurred anywhere inside
the test interval.

2) Unavailability due to failures not revealed during functional testing. This unavailability is
caused by “unknown” ("dormant"), dangerous and undetected failures which can only be
detected during a true demand. These failures are denoted Test Independent Failures (TIF), as
they are not detected during functional testing.

3) Unavailability due to known or planned downtime. This is the unavailability or downtime
caused by components which are either known to have failed or are taken out for
testing/maintenance.

Below, we discuss separately the loss of safety measures for the three failure categories, and
finally an overall measure for loss of safety is given.

4.3.2 Loss of Safety due to DU Failures - Probability of Failure on Demand (PFD)

The PFD quantifies the loss of safety due to dangerous undetected failures (with rate λDU), during
the period when it is unknown that the function is unavailable. The average duration of this period
is τ/2, where τ = test period. For a single (1oo1) component the PFD can be approximated by:

PFD ≈ λDU · τ/2

For a MooN voting logic (M<N), the main contribution to PFD (accounting for common cause
failures) is given by:

PFD ≈ CMooN · β · (λDU ⋅ τ/2); (M<N)

Here, CMooN is a modification factor depending on the voting configuration, ref. Table 8. Further,
for a NooN voting, we approximately have:

PFD ≈ N ⋅ λDU ⋅ τ/2
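
A small numerical sketch of these approximation formulas is given below. All parameter values (λDU, τ, β and CMooN) are chosen for illustration only; in an actual calculation the CMooN value should be taken from Table 8.

```python
# Sketch of the PFD approximations above, with illustrative parameter values.
#   1oo1:        PFD ~= lambda_DU * tau / 2
#   MooN (M<N):  PFD ~= C_MooN * beta * lambda_DU * tau / 2
#   NooN:        PFD ~= N * lambda_DU * tau / 2
lambda_du = 2.0e-6   # dangerous undetected failure rate per hour (example)
tau = 8760.0         # functional test interval in hours (12 months, example)
beta = 0.05          # common cause factor (example)
c_moon = 1.0         # configuration factor for the chosen voting (assumed, see Table 8)

pfd_1oo1 = lambda_du * tau / 2
pfd_1oo2 = c_moon * beta * lambda_du * tau / 2   # example of a MooN (M < N) voting
pfd_2oo2 = 2 * lambda_du * tau / 2               # example of a NooN voting, N = 2

print(f"PFD 1oo1 ~ {pfd_1oo1:.1e}")   # ~ 8.8e-03
print(f"PFD 1oo2 ~ {pfd_1oo2:.1e}")   # ~ 4.4e-04
print(f"PFD 2oo2 ~ {pfd_2oo2:.1e}")   # ~ 1.8e-02
```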

4.3.3 Loss of Safety due to Test Independent Failures (PTIF)

In reliability analysis it is often assumed that functional testing is “perfect” and as such detects
100% of the failures. In true life this is not necessarily the case; the test conditions may differ
from the real demand conditions, and some dangerous failures can therefore remain in the SIS
after the functional test. In PDS this is catered for by adding the probability of so called test
independent failures (TIF) to the PFD.

PTIF = The Probability that the component/system will fail to carry out its intended
function due to a (latent) failure not detectable by functional testing (therefore the
name “test independent failure”)

It should be noted that if an imperfect testing principle is adopted for the functional testing, this
will lead to an increase of the TIF probability. For instance, if a gas detector is tested by
introducing a dedicated test gas to the housing via a special port, the test will not reveal a
blockage of the main ports. Another example is the use of partial stroke testing for valves. This
type of testing is likely to increase the PTIF for the valve, since the valve is not fully proof tested
during such a test.

Hence, for a single component, PTIF expresses the likelihood that a component which has just been
functionally tested will fail on demand (irrespective of the interval of manual testing). For
redundant components, the TIF contribution to loss of safety for a MooN voting is given by
the general formula CMooN · β · PTIF, where the numerical values of CMooN are assumed identical
to those used for calculating PFD, ref. Table 8.

4.3.4 Loss of Safety due to Downtime Unavailability – DTU

This represents the downtime part of the safety unavailability as described in category 3 above.
The DTU (Downtime Unavailability) quantifies the loss of safety due to:

• repair of dangerous failures, resulting in a period when it is known that the function is
unavailable due to repair. We refer to this unavailability as DTUR;
• planned downtime (or inhibition time) resulting from activities such as testing, maintenance
and inspection. We refer to this unavailability as DTUT.

Depending on the specific application, operational philosophy and the configuration of the
process plant and the SIS, it must be considered whether it is relevant to include (part of) the DTU
in the overall measure for loss of safety. For further discussions on how to quantify the DTUR and
DTUT contributions, reference is made to [10].

4.3.5 Overall Measure for Loss of Safety– Critical Safety Unavailability

The total loss of safety is quantified by the critical safety unavailability (CSU). The CSU is the
probability that the module/safety system (either due to a random hardware or a systematic
failure) will fail to automatically carry out a successful safety action on the occurrence of a
hazardous/accidental event. Thus, we have the relation:

CSU = PFD + PTIF

If we want to include also the “known” downtime unavailability, the formula becomes:

CSUTOT = PFD + PTIF + DTU

The contributions from PTIF and λDU to the Critical Safety Unavailability (CSU) are illustrated in
Figure 4. Failures contributing to the PTIF are systematic test independent failures. These failures
will repeat themselves unless modification/redesign is initiated. The contribution to the CSU from
such systematic failures has been assumed constant, independent of the frequency of functional
testing. The contribution from dangerous undetected (DU) failures is assumed to be eliminated at
the time of functional testing and will thereafter increase throughout the test interval.

[Figure 4  Contributions to critical safety unavailability (CSU). The time-dependent CSU starts at the test independent failure level (PTIF) right after each functional test and grows with the contribution from dangerous undetected failures (PFD = λDU·τ/2) until the next test, reaching its maximum right before a test; the average CSU over the functional test interval τ is also indicated.]

As seen from the figure, the CSU will vary over time. The CSU is at its maximum right
before a functional test and at its minimum right after a test. However, when we calculate the
CSU and the PFD, we actually calculate the average value, as illustrated in the figure.
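
To tie the measures together, the short sketch below evaluates the average CSU = PFD + PTIF for a single component and for a 1oo2 voted function, using the expressions from sections 4.3.2 and 4.3.3 (DTU is not included). All parameter values are illustrative only, and CMooN for the 1oo2 case is again simply assumed to be 1.0 rather than taken from Table 8.

```python
# Sketch: average critical safety unavailability, CSU = PFD + P_TIF
# (downtime unavailability DTU not included). Illustrative parameter values only.
lambda_du = 2.0e-6   # per hour (example)
tau = 8760.0         # functional test interval, hours (example)
p_tif = 1.0e-3       # test independent failure probability (example)
beta = 0.05          # common cause factor (example)
c_1oo2 = 1.0         # configuration factor for 1oo2 (assumed, see Table 8)

# Single (1oo1) component
pfd_1oo1 = lambda_du * tau / 2
csu_1oo1 = pfd_1oo1 + p_tif

# 1oo2 voted function: both the PFD and the TIF contributions scale with C_MooN * beta
pfd_1oo2 = c_1oo2 * beta * lambda_du * tau / 2
csu_1oo2 = pfd_1oo2 + c_1oo2 * beta * p_tif

print(f"1oo1: PFD ~ {pfd_1oo1:.1e}, CSU ~ {csu_1oo1:.1e}")   # ~ 8.8e-03, ~ 9.8e-03
print(f"1oo2: PFD ~ {pfd_1oo2:.1e}, CSU ~ {csu_1oo2:.1e}")   # ~ 4.4e-04, ~ 4.9e-04
```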

5 DATA DOSSIERS
The following pages present the data dossiers of the control and safety system components. The
dossiers are input to the tables in chapter 3 that summarise the generic input data to PDS analyses.
Note that the generic data, by nature, represent a wide variation of equipment populations and
should as such be considered on individual grounds when using the data for a specific application.

The data dossiers are based on the data dossiers in previous editions of the handbook, [12], [13],
[14], and have been updated according to the work done in the PDS-BIP and the new data
available.

Adopting the definitions used in OREDA, several severity class types are referred to in the data
dossiers. The definitions of the various types are, [3]:

• Critical failure: A failure which causes immediate and complete loss of a system's capability
of providing its output.
• Degraded failure: A failure which is not critical, but which prevents the system from providing its
output within specifications. Such a failure would usually, but not necessarily, be gradual or
partial, and may develop into a critical failure in time.
• Incipient failure: A failure which does not immediately cause loss of the system's capability of
providing its output, but if not attended to, could result in a critical or degraded failure in the
near future.
• Unknown: Failure severity was not recorded or could not be deduced.

Note that only the critical failures are included as a basis for the failure rate estimates (i.e. the
λcrit). From the description of the failure mode, the critical failures are further split into dangerous
and safe failures (i.e. λcrit = λD + λS). E.g. for shutdown valves a “fail to close on demand” failure
will be classified as dangerous whereas a “spurious operation” failure will be classified as a safe
(spurious trip) failure.

The following failure modes are referred to in the data dossier tables:

DOP - Delayed operation


EXL - External leakage
FTC - Fail to close on demand
FTO - Fail to open on demand
FTR - Fail to regulate
INL - Internal leakage
LCP - Leakage in closed position
LOO - Low output
NOO - No output
PLU - Plugged/choked
SHH - Spurious high level alarm
SLL - Spurious low level alarm
SPO - Spurious operation
STD - Structural deficiency
VLO - Very low output
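
As an illustration of how these failure mode codes feed into the λcrit = λD + λS split described above, a minimal sketch is shown below. Only the shutdown valve assignments mentioned in the text (FTC classified as dangerous, SPO as safe) are taken from the handbook; the other entries in the mapping are hypothetical and will in practice depend on the equipment type and its safety function.

```python
# Minimal sketch: splitting critical failures into dangerous (D) and safe (S)
# rates from failure mode codes. Only FTC -> D and SPO -> S (shutdown valves)
# follow from the text; the remaining assignments are hypothetical examples.
MODE_CLASS = {
    "FTC": "D",   # fail to close on demand (from the text, shutdown valve)
    "FTO": "D",   # fail to open on demand (hypothetical assignment)
    "DOP": "D",   # delayed operation (hypothetical assignment)
    "SPO": "S",   # spurious operation (from the text, shutdown valve)
}

def split_rates(critical_failures: dict, hours: float) -> tuple:
    """Return (lambda_D, lambda_S) per hour from counts of critical failures."""
    n_d = sum(n for mode, n in critical_failures.items() if MODE_CLASS.get(mode) == "D")
    n_s = sum(n for mode, n in critical_failures.items() if MODE_CLASS.get(mode) == "S")
    return n_d / hours, n_s / hours

lam_d, lam_s = split_rates({"FTC": 3, "SPO": 2}, hours=1.0e6)   # hypothetical counts
print(f"lambda_D = {lam_d:.1e}, lambda_S = {lam_s:.1e} per hour")
```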

5.1 Input Devices

5.1.1 Pressure Switch

PDS Reliability Data Dossier
Module: Input Devices
Component: Pressure Switch
Date of revision: 2009-12-18

Description / equipment boundaries: Includes sensing element / pneumatic switch and process connections

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 2.3 per 10⁶ hrs        cD = 0.15    λDU = 2.0 per 10⁶ hrs
  λS = 1.1 per 10⁶ hrs        cS = 0.10    λSU = 1.0 per 10⁶ hrs
  λcrit = 3.4 per 10⁶ hrs                  PTIF = 1·10⁻³ (clean medium)
                                           PTIF = 5·10⁻³ (unclean medium)
                                           r = 0.5
Assessment
The given failure rate applies to pressure switches. The failure rate estimate is mainly based on
OREDA phase III data, older OREDA data and comparison with other generic data sources
(OREDA phase IV contains no data on process switches, whereas phase V contains only 6
switches). The estimated coverage is based on expert judgement; we assume 5% coverage due
to line monitoring of the connections and an additional 10% detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10%, since
there is a small probability that such failures are detected before the shutdown actually occurs.

The PTIF and the r estimates are mainly based on expert judgements. A summary of some of the
main arguments is provided in section 3.3.

Failure Rate References


Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λcrit = 3.4 λD = 2.3 per 106 hrs Recommended values for calculation in 2006-
λDU = 1.6 per 106 hrs edition, [12]
λSTU = 1.0 per 106 hrs
Assumed cD = 30%
PTIF = 10-3 - 5·10-3 1)
1)
Clean / unclean medium
λDU = 0.2 per 106 hrs Recommended values for calculation in 2003-
λcrit = 3.4 λSTU = 0.9 per 106 hrs edition, [14]
PTIF = 10-3 - 5·10-3 1)
Assumed cD = 90%
1)
Without/with the sensing line

N/A D: N/A OREDA phase V database, [6]
ST: N/A Data relevant for conventional process switches.

Observed: Filter:
cD = N/A Inv. Equipment Class = Process Sensors AND
cST = N/A Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Switch AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 5

No. of inventories = 6
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 295 632
λcrit = 1.4 D: 1.39 OREDA phase III database, [8]
ST: 0.0 Data relevant for conventional process switches.

Observed: Filter:
CD = 100% Inv. Equipment Class = Process Sensors AND
(based on only one Inv. Design Class = Pressure AND
failure) Inv. Att. Type – process sensor = Switch AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 3

No. of inventories = 12
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance Time (hours) = 719 424
λDU = 3.6 per 106 hrs Exida [15]: Generic DP / pressure switch
λSU = 2.4 per 106 hrs

SFF = 40%
Funct. 0.44
ST 1.02 T-Book [16]: Pressure sensor
Other crit 0.37

5.1.2 Proximity Switch (Inductive)

PDS Reliability Data Dossier
Module: Input Devices
Component: Inductive Proximity Switch
Date of revision: 2009-12-18

Description / equipment boundaries: Includes sensing element, electronics and moving metal target

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 3.5 per 10⁶ hrs        cD = 0.15    λDU = 3.0 per 10⁶ hrs
  λS = 2.2 per 10⁶ hrs        cS = 0.10    λSU = 2.0 per 10⁶ hrs
  λcrit = 5.7 per 10⁶ hrs                  PTIF = 1·10⁻³
                                           r = 0.3
Assessment
The estimated coverage is based on expert judgement; we assume 5% coverage due to line
monitoring of the connections and an additional 10% detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10%, since
there is a small probability that such failures are detected before a trip actually occurs. It should
be noted that (SIL rated) limit switches with significantly higher coverage factors are available.
In such a case, the mechanical installation of the parts must ensure that alignment problems are
minimised.

The PTIF and the r estimates are mainly based on expert judgements. The PTIF is assumed to be
relatively high since mechanical alignment of the interacting parts is often a problem. Such
failures may not be revealed due to inadequate testing. Similarly, a relatively high proportion of
systematic failures is assumed, resulting in a low r factor.

Failure Rate References


Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λDU = 2.0 per 106 hrs Internal SINTEF project data applied for SIL
classified proximity switches.

λDU = 3.6 per 106 hrs Exida [15]: Generic position limit switch
λSU = 2.4 per 106 hrs

SFF = 40%
Failure to change T-Book [16]: Electronic limit switch
state: 1.9 per 107 hrs
Spurious change of
state: 5.2 per 107 hrs

5.1.3 Pressure Transmitter

PDS Reliability Data Dossier
Module: Input Devices
Component: Pressure Transmitter
Date of revision: 2009-12-18

Description / equipment boundaries: The pressure transmitter includes the sensing element, local electronics and the process isolation valves / process connections.

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 0.8 per 10⁶ hrs        cD = 0.60    λDU = 0.3 per 10⁶ hrs
  λS = 0.5 per 10⁶ hrs        cS = 0.30    λSU = 0.4 per 10⁶ hrs
  λcrit = 1.3 per 10⁶ hrs                  PTIF = 5·10⁻⁴
                                           r = 0.3
Assessment
The failure rate estimate is mainly based on data from OREDA phase III. An insufficient amount
of data has been found in OREDA phase IV in order to update this estimate (no data from phase V,
VI or VII). The rate of DU failures is estimated assuming coverage of 60 % for dangerous failures.
This is based on the self-test implemented in the transmitter as well as casual observation by the
control room operator (the latter assumes that the signal can be observed on the VDU and compared
with other signals). If a higher coverage is claimed, e.g. due to automatic comparison between
transmitters, this should be specifically documented / verified. The coverage of safe failures has
been estimated by expert judgement at 30% (compared to 50% in the previous 2006 edition), since
safe failures will be difficult to detect before a trip has actually occurred. No data is available for
pressure transmitters from OREDA phase VI and VII.

The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
Failure Rate References
Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λcrit = 1.3 λD = 0.8 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.3 per 106 hrs edition, [12]
λSTU = 0.3 per 106 hrs
Assumed cD = 60%
PTIF = 5·10-4

λcrit = 1.3 λD = 0.8 per 106 hrs Recommended values for calculation in 2004-
λDU = 0.3 per 106 hrs edition, [13]
λSTU = 0.4 per 106 hrs
Assumed cD = 60%
PTIF = 3·10-4 - 5·10-4 1)
1)
For smart/conventional respectively


λDU = 0.1 per 106 hrs Recommended values for calculation in 2003-
λcrit = 1.3 λSTU = 0.4 per 106 hrs edition, [13]
PTIF = 3·10-4 - 5·10-4 1)
Assumed cD = 90%
1)
For smart/conventional respectively
N/A D: N/A OREDA phase IV database, [6]
ST: N/A Data relevant for conventional pressure trans-
mitters.
Observed: Filter:
cD = N/A Inv. Equipment Class = Process Sensors AND
cST = N/A Inv. Design Class = Pressure AND
Inv. Att. Type process sensor = Transmitter AND
Inv. Phase = 4 AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4

No. of inventories = 21
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 332 784
λcrit = 1.3 D: 0.64 OREDA phase III database, [8]
ST: 0.64 Data relevant for conventional pressure trans-
mitters.
Observed:
cD = 100 % Filter criteria: TAXCOD='PSPR' .AND. FUNCTN='OP' .OR.
(Calculated for 'GP'
transmitters having No. of inventories = 186
Total no. of critical failures = 6
some kind of self-test
Cal. time = 4 680 182 hrs
arrangement only)

λDU = 0.6 per 106 hrs Exida [15]: Generic smart DP / pressure
transmitter
SFF = 60%
Fail. to obtain signal: T-Book [16]: Pressure transmitter
0.83
Fail. to obtain signal: T-Book [16]: Pressure difference transmitter/
0.91 pressure difference cell

5.1.4 Level (Displacement) Transmitter

PDS Reliability Data Dossier
Module: Input Devices
Component: Level (Displacement) Transmitter
Date of revision: 2009-12-18

Description / equipment boundaries: The level transmitter includes the sensing element, local electronics and the process isolation valves / process connections.
Remarks: Only displacement level transmitters are included in the OREDA phase III, IV and V data. No data from later phases available.

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 1.4 per 10⁶ hrs        cD = 0.60    λDU = 0.6 per 10⁶ hrs
  λS = 1.6 per 10⁶ hrs        cS = 0.30    λSU = 1.1 per 10⁶ hrs
  λcrit = 3.0 per 10⁶ hrs                  PTIF = 5·10⁻⁴
                                           r = 0.3
Assessment
The failure rate estimate is mainly based on data from the OREDA phase III database with
additional data from OREDA phase IV and V. The rate of DU failures is estimated by assuming
coverage of 60% for dangerous failures. This is based on implemented self test in the transmitter
as well as casual observation by control room operator (the latter assumes that the signal can be
observed on the VDU and compared with other signals). If a higher coverage is claimed, special
documentation/verification should be required. The coverage of safe failures has been estimated by
expert judgement at 30% (compared to 50% in the previous 2006 edition), since safe failures will
be difficult to detect before a trip has actually occurred.

The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.

Failure Rate References


Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λcrit = 3.0 λD = 1.4 per 106 hrs Recommended values for calculation in 2006-edition,
λDU = 0.6 per 106 hrs [12] and 2004-edition, [13]
λSTU = 0.8 per 106 hrs
Assumed cD = 60%
PTIF = 5·10-4

λDU = 0.1 per 106 hrs Recommended values for calculation in 2003-edition,
λcrit = 3.0 λSTU = 0.8 per 106 hrs [13]
Assumed cD = 90%
PTIF = 3·10-4 - 5·10-4 1)
1)
For smart/conventional respectively


28.5 D: 28.5 OREDA phase V database, [6]


ST: 0.0 Data relevant for conventional displacement level
transmitters.
Observed: Filter:
cD = 0 % (only one Inv. Equipment Class = Process Sensors AND
failure) Inv. Design Class = Level AND
cST = N/A Inv. Att. Type – process sensor = Transmitter AND
Inv. Att. Level sensing principle = Displacement AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 5

No. of inventories = 1
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance time (hours) = 49 272
1.9 D: 0.0 OREDA phase IV database, [6]
ST: 1.9 Data relevant for conventional displacement level
transmitters.
Observed: Filter:
cST = N/A Inv. Equipment Class = Process Sensors AND
(detection method Inv. Design Class = Level AND
uncertain) Inv. Att. Type process sensor = Transmitter AND
Inv. Att. Level sensing princ. = Displacement AND
Inv. Phase = 4 AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4

No. of inventories = 17
No. of critical D failures = 0
No. of critical ST failures = 1
Surveillance time (hours) = 530 616
6.17 D: 4.94 OREDA phase III Database, [8]
ST: 1.23 Data relevant for conventional displacement level
transmitters.
Observed: Filter criteria: TAXCOD='PSLE' .AND. FUNCTN='OP' .OR. 'GP'
cD = 100 % No. of inventories = 65
(Calculated for Total no. of failures = 50
transmitters having Cal. time = 1 620 177 hrs
some kind of self-test Note! Only failures classified as "critical" are
included in the failure rate estimates.
arrangement only)
λDU = 1.25 per 106 hrs Exida [15]: Generic level (displacement) transmitter
SFF = 58%
Fail. to obtain signal: T-Book [16]: Level transmitter
2.7

5.1.5 Temperature Transmitter

PDS Reliability Data Dossier
Module: Input Devices
Component: Temperature Transmitter
Date of revision: 2009-12-18

Description / equipment boundaries: The temperature transmitter includes the sensing element, local electronics and the process connections.
Remarks: Note that the data material for temperature transmitters is scarce, i.e., the failure rate estimate is uncertain.

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 0.7 per 10⁶ hrs        cD = 0.60    λDU = 0.3 per 10⁶ hrs
  λS = 1.3 per 10⁶ hrs        cS = 0.30    λSU = 0.9 per 10⁶ hrs
  λcrit = 2.0 per 10⁶ hrs                  PTIF = 5·10⁻⁴
                                           r = 0.3
Assessment
The failure rate estimate is the same as in the 2006 handbook, ref. [12], which was based on
OREDA phase III and phase IV data together with some expert judgement due to the scarce data
(there are no temperature transmitters in OREDA phase V, VI or VII). The distribution between
(undetected) dangerous and safe failures is based on the distribution for pressure and flow
transmitters. The given coverage values for dangerous and safe failures are estimated mainly
based on expert judgement and the same argumentation as for pressure and level transmitters.

The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA together with expert judgements. A summary of some of the main arguments
is provided in section 3.3.

Failure Rate Reference


Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λD = 0.7 per 106 hrs Recommended values for calculation in 2006 edition,
λcrit = 1.8 λDU = 0.3 per 106 hrs [12]
λSTU = 0.6 per 106 hrs
Assumed cD = 60%
PTIF = 5·10⁻⁴
λD = 0.7 per 106 hrs Recommended values for calculation in 2003 and
λcrit = 1.8 λDU = 0.3 per 106 hrs 2004-editions, [13] and [14]
λSTU = 0.4 per 106 hrs
Assumed cD = 60%
PTIF = 3·10⁻⁴ – 5·10⁻⁴ (for smart / conventional respectively)

0.0 D: 0.0 OREDA phase IV database, [6]
ST: 0.0 Data relevant for conventional temperature trans-
mitter.
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Temperature AND
Inv. Att. Type process sensor = Transmitter AND
Inv. Phase = 4 AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4

No. of inventories = 21
No. of critical D failures = 0
No. of critical ST failures = 0
Cal. time = 735 848
D: 5.1 OREDA phase III database, [6]
5.1 Data relevant for conventional temperature trans-
Observed: mitter.
cD = 100 % Filter criteria: TAXCOD='PSTE' .AND. FUNCTN='OP' .OR. 'GP'
(Calculated for No. of inventories = 8
transmitters having Total no. of failures = 7
some kind of self-test Cal. time = 197 808 hrs
arrangement only) Note! Only failures classified as "critical" are included
in the failure rate estimates.

λDU = 0 Data from review of safety critical failures on


λS = 3.6 per 106 hrs Norwegian onshore plant. Data applicable for
temperature (safe act) transmitters
(λDU = 0.6 per 106 hrs
if assuming 0.5 failure) No. of inventories = 95 transmitters
1)
No. of critical DU failures = 0
Cal. time = 832 200 hrs 2)
1)
The review focused on DU failures, but classification of other failure
modes was also performed; 0 DD and 3 safe failures were registered.
2)
One year of operation
λDU = 0.3 per 106 hrs Exida [15]: Generic temperature transmitter

SFF = 63%
Fail. to obtain signal: T-Book [16]: Temperature transmitter
1.27 per 106 hrs

5.1.6 Flow Transmitter

PDS Reliability Data Dossier
Module: Input Devices
Component: Flow Transmitter
Date of revision: 2009-12-18

Description / equipment boundaries: The flow transmitter includes the sensing element, local electronics and the process connections.
Remarks: Based on OREDA phase IV and V data. No data on flow transmitters from later phases.

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 1.5 per 10⁶ hrs        cD = 0.60    λDU = 0.6 per 10⁶ hrs
  λS = 2.2 per 10⁶ hrs        cS = 0.30    λSU = 1.5 per 10⁶ hrs
  λcrit = 3.7 per 10⁶ hrs                  PTIF = 5·10⁻⁴
                                           r = 0.3
Failure Rate Assessment
The failure rate estimate is the same as in the 2006 handbook, ref. [12], with OREDA phase IV
and V data. The rate of DU failures is estimated assuming 60 % coverage for dangerous failures.
The rate of safe undetected failures is estimated assuming 30 % coverage. The safe failure rate
includes ‘Erratic output’ failures. The argumentation for selecting the coverage values is the
same as for the other transmitters above.

The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.

Failure Rate References


Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λD = 1.5 per 106 hrs Recommended values for calculation in 2006 edition,
λcrit = 3.7 λDU = 0.6 per 106 hrs [12]
λSTU = 1.1 per 106 hrs
Assumed cD = 60%
PTIF = 5·10⁻⁴
λD = 1.5 per 106 hrs Recommended values for calculation in 2003 and 2004
λcrit = 3.7 λDU = 0.6 per 106 hrs –editions, [13] and [14]
λSTU = 1.1 per 106 hrs
Assumed cD = 60%
PTIF = 3·10⁻⁴ – 5·10⁻⁴ (for smart / conventional respectively)


7.1 D: 7.1 OREDA phase V database, [6]


ST: 0.0 Data relevant for conventional flow transmitters.
Filter:
Observed: Inv. Equipment Class = PROCESS SENSORS AND
CD,Casual = 100 % Inv. Design Class = Flow AND
(based on only one Inv. Att. Type - process sensor = Transmitter AND
failure) (Inv. System = Gas processing OR
Oil processing OR
CST = N/A
Condensate processing) AND
Inv. OREDA Phase = 5

No. of inventories = 4
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance time (hours) = 197 088
5.70 D: 2.85 OREDA phase IV database, [6]
ST: 2.85 Data relevant for conventional flow transmitters.
Filter:
Observed: Inv. Equipment Class = Process Sensors AND
CD = 0 % Inv. Design Class = Flow AND
CST = 100 % Inv. Att. Type process sensor = Transmitter AND
Inv. Phase = 4 AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. OREDA Phase = 4

No. of inventories = 11
No. of critical D failures = 1
No. of critical ST failures = 1
Surveillance time (hours) = 350 880
2.89 D: 1.24 OREDA phase III, [8], Database PS31__.
ST: 1.65 Data relevant for conventional flow transmitters.
Filter criteria: TAXCOD='PSFL' .AND. FUNCTN='OP'.OR.'GP'
Observed: No. of inventories = 72
CD = 100 % No. of critical D failures = 3
(Calculated including No. of critical ST failures = 4
transmitters having Cal. time = 2 422 200 hrs
Note! Only failures classified as "critical" are included in
some kind of self-test
the failure rate estimates.
arrangement only,)
λDU = 0.9 per 106 hrs 1) Exida [15]. Generic flow transmitter
λDU = 0.7 per 106 hrs 2)
1)
λDU = 0.5 per 106 hrs 3) 2) Measurement type: Coriolis meter
Measurement type: Mag meter
3)
SFF: 60% - 65% Measurement type: Vortex shedding
Fail. to obtain signal: T-Book [16]: Flow transmitter
2.6 per 106 hrs

5.1.7 Catalytic Gas Detector

PDS Reliability Data Dossier
Module: Input Devices
Component: Catalytic Gas Detector
Date of revision: 2009-12-18

Description / equipment boundaries: The detector includes the sensor and local electronics such as the address-/interface unit.

Recommended Values for Calculation

  Total rate                  Coverage     Undetected rate
  λD = 3.5 per 10⁶ hrs        cD = 0.50    λDU = 1.8 per 10⁶ hrs
  λS = 1.5 per 10⁶ hrs        cS = 0.30    λSU = 1.1 per 10⁶ hrs
  λcrit = 5.0 per 10⁶ hrs                  PTIF = 5·10⁻⁴
                                           r = 0.4
Failure Rate Assessment
The failure rate estimates are primarily based on OREDA phase III and phase IV data. No data on
catalytic gas detectors in OREDA phase V, VI and VII.

Data from RNNP for the period 2003-2008 for gas detection has also been reviewed. In total
184 374 detector tests were performed during this period, resulting in 1 631 failures. Based on
this, a λDU = 1.0·10⁻⁶ can be estimated. It should however be noted that RNNP makes no
distinction between IR and catalytic detectors.

The ratio between dangerous (D) and safe (S) failures is kept the same as in the 2003 handbook.
The rate of DU failures is estimated assuming a coverage of 50% for dangerous failures
(somewhat lower than for process transmitters, since the chance of casual operator
detection/observation is smaller). The rate of safe failures is estimated assuming a coverage of
30%. The D failure rate includes ‘No output’ and ‘Very low output’ failures.

The PTIF is based on expert judgement and on the assumption that the detectors are already
exposed. The estimated r value is based on observed failure causes for critical detector
failures (40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of
the main arguments is provided in section 3.3.

Failure Rate References


Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
λcrit = 5.0 λD = 3.5 per 106 hrs Recommended values for calculation in 2006-edition,
λDU = 1.8 per 106 hrs [12]
λSTU = 0.9 per 106 hrs
Assumed cD = 50%
PTIF = 5·10-4

Module: Input Devices
PDS Reliability Data Dossier
Component: Catalytic Gas Detector
λcrit = 5.0 λD = 3.5 per 106 hrs Recommended values for calculation in 2004-edition,
λDU = 1.8 per 106 hrs [13]
λSTU = 0.9 per 106 hrs
Assumed cD = 50%
PTIF = 3·10-4 – 0.1 1)
1)
For large to small gas leaks (large means > 1 kg/s)

λcrit = 2.3 λDU = 0.6 per 106 hrs Recommended values for calculation in 2003-edition, [14]
λD / λST = 2.3 λSTU = 0.4 per 106 hrs
PTIF = 3·10-4 – 0.1 1)
1) For large to small gas leaks (large means > 1 kg/s)
0.0 D: 0.0 OREDA phase IV database, [6]
ST: 0.0 Data relevant for conventional catalytic gas detectors.

Filter:
Inv. Equipment class = Fire & Gas Detectors AND
Inv. Att. Sensing principle = Catalytic AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical

No. of inventories = 24
No. of critical D failures = 0
No. of critical ST failures = 0
Cal. time = 420 480
5.35 (NOO: 3.62, SHH: 0.79, Sum D: 4.41; SLL: 0.02, VLO: 0.92, Sum ST: 0.94)
Observed: cD = 64 % (calculated including detectors having some kind of self-test arrangement only)

OREDA phase III database, [8]
Data relevant for conventional catalytic gas detectors. More than 97 % of the detectors have automatic loop test.
Filter criteria: TAXCOD='FGHC', SENSPRI='CATALYTIC'
No. of inventories = 2 046
Total no. of critical failures = 263
Cal. time = 49 185 572 hrs
λDU = 1.75 per 106 hrs Exida [15]: Generic catalytic HC gas detector

SFF = 65%
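
The OREDA-based rows above are straightforward exposure estimates: the number of critical failures divided by the accumulated calendar (surveillance) time. A sketch reproducing the OREDA phase III figure quoted above, assuming only the failure count and calendar time stated in that row:

    # OREDA phase III, catalytic gas detectors: rate = critical failures / calendar time
    n_critical = 263                       # total number of critical failures
    T_hours = 49_185_572                   # accumulated calendar time in hours
    lam_crit = n_critical / T_hours * 1e6  # ≈ 5.35 per 10^6 hrs, as quoted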


5.1.8 IR Point Gas Detector

Module: Input Devices


PDS Reliability Data Dossier
Component: IR Point Gas Detector

Description / equipment boundaries Date of Revision


The detector includes the sensor and 2009-12-18
local electronics such as the address-/ Remarks
interface unit.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 2.5 per 106 hrs cD = 0.75 λDU = 0.6 per 106 hrs

λS = 2.2 per 106 hrs cS = 0.50 λSU = 1.1 per 106 hrs

λcrit = 4.7 per 106 hrs PTIF = 1 · 10-3

r = 0.4
Failure Rate Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook, based on the previous figures as well as additional data from OREDA phase VI.

The rate of DU failures has been estimated assuming a coverage for dangerous failures of 75% (based on observations in OREDA phases IV, V and VI). The D failure rate includes ‘Fail to function on demand’ and ‘No output’ failures. The rate of safe failures has been significantly increased based on experience from OREDA phases V and VI. Furthermore, a coverage of 50 % has been assumed for safe failures. The coverage values are given assuming that the detectors have built-in self-test and monitoring of the optical path. It is then implicitly assumed that the connected system has the ability to discriminate detected failures without shutting down (e.g. a 3 mA signal gives an alarm, not a shutdown).

The PTIF is based on expert judgements and on the assumption that the detectors are exposed. The estimated r value is based on observed failure causes for critical detector failures (40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of the main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 4.0 λD = 3.3 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.7 per 106 hrs edition, [12]
λSTU = 0.2 per 106 hrs
Assumed cD = 80%
PTIF = 1·10-3

Module: Input Devices
PDS Reliability Data Dossier
Component: IR Point Gas Detector

λcrit = 4.0 λD = 3.3 per 106 hrs Recommended values for calculation in 2004-
λDU = 0.7 per 106 hrs edition, [13]
λSTU = 0.2 per 106 hrs
Assumed cD = 80%
PTIF = 1·10-3 – 6·10-3 1,2)
1)
Range gives values for small to large gas leaks (large gas
leaks are leaks > 1 kg/s)
2)
Average over ventilation type and worst conditions
λcrit = 3.6 λDU = 0.7 per 106 hrs Recommended values for calculation in 2003-edition, [14]
λD / λST = 11 λSTU = 0.1 per 106 hrs
PTIF = 1·10-3 – 6·10-3 1,2)
1) Range gives values for small to large gas leaks (large gas leaks are leaks > 1 kg/s)
2) Average over ventilation type and worst conditions
λcrit = 5.7 λD = 1.8 per 106 hrs OREDA phase V-VI database, [6], [8]
λS = 3.9 per 106 hrs Data relevant for IR gas detectors

Observed: cD = 70 %, cS = N/A

Filter:
Inv. Equipment class = Fire & Gas Detectors AND
(Inv. OREDA Phase = 5 OR Inv. Phase = 6) AND
Inv. Equipment type = Hydrocarbon gas AND
Inv. Att. Sensing principle = IR

No. of inventories = 221


No. of critical failures = 41
No. of critical D failures = 13
No. of critical S failures = 28
Surveillance Time (hours) = 7 209 840
3.5 D: 3.5 OREDA phase IV database, [6]
ST: 0.0 Data relevant for IR gas detectors.

Observed: cD = 100 %, cST = N/A

Filter:
Inv. Equipment class = Fire & Gas Detectors AND
(Inv. Att. Sensing principle = IR OR
 Inv. Att. Sensing principle = IR/UV) AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical

No. of inventories = 54
No. of critical D failures = 4
No. of critical ST failures = 0
Cal. time = 1 148 472
λDU = 0.4 per 106 hrs Exida [15]: Generic IR gas detector

SFF = 78%
Ddet: 2.9 Oseberg C, [18]
4.1 Dundet: 1.2 Data relevant for conventional IR gas detectors.
STdet: 0 No. of inventories = 41
STundet: 0 Total no. of failures = 26 (4 critical)
Time = 977 472 hrs
Note! Only failures classified as "critical" are
included in the failure rate estimates.


5.1.9 IR Line Gas Detector

Module: Input Devices


PDS Reliability Data Dossier
Component: IR Line Gas Detector

Description / equipment boundaries Date of Revision


The detector includes the sensor and 2009-12-18
local electronics such as the address-/ Remarks
interface unit.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 2.8 per 106 hrs cD = 0.75 λDU = 0.7 per 106 hrs

λS = 2.2 per 106 hrs cS = 0.50 λSU = 1.1 per 106 hrs

λcrit = 5.0 per 106 hrs PTIF = 1 · 10-3

r = 0.4
Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook, supplemented by additional information on IR detectors from OREDA phase VI (the new data cover IR point detectors only). It should be noted that data on IR line detectors are scarce, and experience from IR point detectors has therefore been applied.

As for IR point detectors, the rate of DU failures has been estimated assuming a coverage for dangerous failures of 75 %, whereas for safe failures a coverage of 50 % has been assumed. The coverage values are given assuming that the detectors have built-in self-test and monitoring of the optical path. It is then implicitly assumed that the connected system has the ability to discriminate detected failures without shutting down (e.g. a 3 mA signal gives an alarm, not a shutdown).

The PTIF is based on expert judgements and on the assumption that the detectors are exposed. The estimated r value is based on observed failure causes for critical detector failures (40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of the main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 5.3 λD = 3.3 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.7 per 106 hrs edition, [12]
λSTU = 0.6 per 106 hrs
Assumed cD = 80%
PTIF = 1·10-3

Module: Input Devices
PDS Reliability Data Dossier
Component: IR Line Gas Detector

λcrit = 5.3 λD = 3.3 per 106 hrs Recommended values for calculation in 2004-
λDU = 0.7 per 106 hrs edition [13]
λSTU = 0.6 per 106 hrs
Assumed cD = 80%
PTIF = 1·10-2 – 6·10-2 1,2)
1)
Range gives values for small to large gas leaks (large gas
leaks are leaks > 1 kg/s)
2)
Average over ventilation type and worst conditions
λcrit = 3.6 λDU = 0.7 per 106 hrs Previously recommended values for calculation in 2003-edition [14]
λD / λST = 11 λSTU = 0.1 per 106 hrs
PTIF = 1·10-2 – 6·10-2 1,2)
1) Range gives values for small to large gas leaks (large gas leaks are leaks > 1 kg/s)
2) Average over ventilation type and worst conditions
4.1 D: 4.1 OREDA phase IV+V database [6], [4]
ST: 0.0 Data relevant for conventional IR gas detectors.

Observed: Filter:
cD = 100 % Inv. Equipment class = Fire & Gas Detectors AND
cST = N/A Inv. Design Class = Hydrocarbon gas AND
Inv. Att. Sensing principle = PH-EL BEAM AND
Inv. OREDA Phase = 4 + 5 AND
Fail. Severity Class = Critical

No. of inventories = 55
No. of critical D failures = 5
No. of critical ST failures = 0
Cal. time = 1 202 472


5.1.10 Smoke Detector

Module: Input Devices


PDS Reliability Data Dossier
Component: Smoke Detector

Description / equipment boundaries Date of Revision


The detector includes the sensor and 2009-12-18
local electronics such as the address-/ Remarks
interface unit. Fire central not included
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.2 per 106 hrs cD = 0.40 λDU = 0.7 per 106 hrs

λS = 2.0 per 106 hrs cS = 0.30 λSU = 1.4 per 106 hrs

λcrit = 3.2 per 106 hrs PTIF = 1 · 10-3

r = 0.4
Assessment
The failure rate estimate is an update of the 2006 figure, which was primarily based on OREDA phase III as well as some phase V data. The rate of DU failures is still estimated assuming a coverage of 40 % (the coverage observed in the incomplete and complete OREDA phase III data was 29 % and 50 %, respectively). The rates of dangerous and safe failures have been slightly decreased based on observations from failure reviews and later OREDA phases. For safe failures a coverage of 30 % - mainly based on OREDA phase III observations as well as expert judgement - has been assumed. It should be noted that for some types of smoke detectors with more extensive self-test, the coverage may be significantly higher. This must be assessed for each specific detector type.

The PTIF is based on expert judgements and on the assumption that the detectors are exposed. The estimated r value is based on observed failure causes for critical detector failures (40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of the main arguments is provided in section 3.3.

Failure Rate Reference


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 3.7 λD = 1.3 per 106 hrs Recommended values for calculation in 2006-edition,
λDU = 0.8 per 106 hrs [12]
λSTU = 1.4 per 106 hrs
Assumed cD = 40%
PTIF = 10-3

λcrit = 3.7 λD = 1.3 per 106 hrs Recommended values for calculation in 2004- and
λDU = 0.8 per 106 hrs 2003-edition, [13], [14]
λSTU = 1.2 per 106 hrs
Assumed cD = 40%
PTIF = 10-3 – 0.05 1)
1)
The range represents different types of fires (smoke/flame)

Module: Input Devices
PDS Reliability Data Dossier
Component: Smoke Detector

0.0 D: 0.0 OREDA phase V database [6]


ST: 0.0 Data relevant for smoke/combustion detectors.
Filter:
Observed: Inv. Equipment class = Fire & Gas Detectors AND
cD = N/A Inv. Design Class = Smoke/Combustion AND
cST = N/A Inv. Phase = 5 AND
Fail. Severity Class = Critical

No. of inventories = 103


No. of critical D failures = 0
No. of critical ST failures = 0
Surveillance Time (hours) = 3 238 320
3.7 (D: 1.0, SPO: 2.7)
Observed: cD = 29 % (calculated including detectors having some kind of self-test arrangement only)

OREDA phase III database, [8].
Data relevant for smoke/combustion detectors. Both conventional (65 %) and addressable (35 %) detectors are included. 56 % have automatic loop test, 35 % have a combination of loop and built-in self-test, the residual (9 %) have no self-test feature.
No. of inventories = 1 897
Total no. of failures = 218
Cal. time = 50 374 800 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.

λDU = 0.3 per 106 hrs Data from review of safety critical failures on
Norwegian onshore plant. Data applicable for optical
smoke detectors

No. of inventories = 807 detectors (460 early warning)


No. of critical DU failures = 2 1)
Cal. time = 7 069 320 hrs 2)
1)
The failure review focused on DU failures, but classification of other
failure modes was also performed. No DD or safe failures registered.
2)
One year of operation

λDU = 0.6 per 106 hrs Data from review of safety critical failures on
Norwegian semi-submersible platform. Data applicable
for optical smoke detectors

No. of inventories = 788 detectors


No. of critical DU failures = 8 1)
Cal. time = 13 805 760 hrs 2)
1)
The failure review focused on DU failures. In addition 10 DD and 14
safe failures were also registered
2)
Two years of operation

λDU = 1.65 per 106 hrs Exida [15]: Generic smoke (ionization) detector
λSU = 3.85 per 106 hrs

SFF = 70 %


5.1.11 Heat Detector

Module: Input Devices


PDS Reliability Data Dossier
Component: Heat Detector

Description Date of Revision


The detector includes the sensor and 2009-12-18
local electronics such as the address-/ Remarks
interface unit. It is assumed that the heat detectors have a digital
on/off output
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.0 per 106 hrs cD = 0.40 λDU = 0.6 per 106 hrs

λS = 1.5 per 106 hrs cS = 0.40 λSU = 0.9 per 106 hrs

λcrit = 2.5 per 106 hrs PTIF = 1 · 10-3

r = 0.4
Assessment
The failure rate estimate is an update of the figures in the 2006 handbook. The rate of D failures is estimated assuming a coverage of 40 % (observed in the incomplete and complete OREDA phase III data to be 50 % and 36 %, respectively). The rate of safe failures is estimated assuming a coverage of 40 % (previously assumed to be 20 %; observed in the complete OREDA phase III data to be significantly higher).

The PTIF is based on expert judgements given the assumption that the detector is exposed. The
estimated r value is based on observed failure causes for critical detector failures (40%
“expected wear and tear” and 60% “maintenance errors”). A summary of some of the main
arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 2.5 λD = 1.0 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.6 per 106 hrs edition, [12]
λSTU = 0.9 per 106 hrs
Assumed cD = 50%
PTIF = 10-3

λcrit = 2.4 λD = 0.9 per 106 hrs Recommended values for calculation in 2004-edition, [13]
λDU = 0.5 per 106 hrs
λSTU = 0.8 per 106 hrs
Assumed cD = 50%
PTIF = 0.05 – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)

Module: Input Devices
PDS Reliability Data Dossier
Component: Heat Detector

λcrit = 2.4 λDU = 0.5 per 106 hrs Previously recommended values for calculation in 2003-edition [14]
λD / λST = 0.6 λSTU = 0.75 per 106 hrs
PTIF = 0.05 – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)

0.00 D: 0.00 OREDA phase V database [6]


ST: 0.00 Data relevant for heat detectors.
Filter:
Observed: Inv. Equipment class = Fire & Gas Detectors AND
cD = N/A Inv. Design Class = Heat AND
cST = N/A Inv. Phase = 5 AND
Fail. Severity Class = Critical

No. of inventories = 23
No. of critical D failures = 0
No. of critical ST failures = 0
Surveillance Time (hours) = 723 120
2.21 (D: 0.82, SPO: 1.39)
Observed: cD = 50 % (calculated including detectors having some kind of self-test arrangement only)

OREDA phase III database [8]
Data relevant for conventional heat detectors. Both rate-of-rise (23 %) and rate-compensated (77 %) detectors are included. Of the detectors, 89 % have automatic loop test, the residual (11 %) have no self-test feature.
No. of inventories = 865
Total no. of failures = 79
Cal. time = 24 470 588 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.

λDU = 1.9 per 106 hrs Exida [15]: Generic heat detector
λSU = 3.6 per 106 hrs

SFF = 65%


5.1.12 Flame Detector

Module: Input Devices


PDS Reliability Data Dossier
Component: Flame Detector

Description Date of Revision


The detector includes the sensor and 2006-01-27
local electronics such as the address/- Remarks
interface unit. Combined sample of IR, UV and IR/UV detectors.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 2.7 per 106 hrs cD = 0.70 λDU = 0.8 per 106 hrs
λS = 3.8 per 106 hrs cS = 0.50 λSU = 1.9 per 106 hrs

λcrit = 6.5 per 106 hrs PTIF = 1 · 10-3

r = 0.4
Failure Rate Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook [12] (primarily
based on OREDA phase III data). The rate of dangerous failures has been slightly reduced as
compared to the 2006 estimate due to input from operational reviews. Coverage for D failures has
been assumed to be 70 % based on expert judgement. The rate of safe failures is estimated
assuming coverage of 50 %. It should be noted that these coverage values assume that the detectors
have built-in self-test and monitoring of the optics.

The PTIF is based on expert judgements and has been updated based on the fact that the detectors
are now assumed exposed. The estimated r value is based on observed failure causes for critical
detector failures (50% “expected wear and tear” and 50% “maintenance errors”). A summary of
some of the main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 6.8 λD = 3.0 per 106 hrs Recommended values for calculation in 2006-edition,
λDU = 0.95 per 106 hrs [12]
λSTU = 1.5 per 106 hrs
Assumed cD = 70%
PTIF = 1·10-3

λcrit = 2.4 λD = 0.9 per 106 hrs Recommended values for calculation in 2004-edition,
λDU = 0.5 per 106 hrs [13]
λSTU = 0.8 per 106 hrs
Assumed cD = 60%
PTIF = 3·10-4 – 0.5 1)
1)
The range represents the occurrence of different types of fires
(smoke/flame)

Module: Input Devices
PDS Reliability Data Dossier
Component: Flame Detector

λcrit = 8.3 λDU = 2.1 per 106 hrs Previously recommended values for calculation in 2003-edition [14]
λD / λST = 1.0 λSTU = 2.1 per 106 hrs
PTIF = 3·10-4 – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)

0.7 D: 0.0 OREDA phase V database [6]


ST: 0.7 Data relevant for conventional flame detectors.
Filter:
Observed: Inv. Equipment class = Fire & Gas Detectors AND
cD = N/A Inv. design Class = Flame AND
cST,Casual = N/A Inv. Phase = 5 AND
Fail. Severity Class = Critical

No. of inventories = 27
No. of critical D failures = 0
No. of critical ST failures = 1
Surveillance Time (hours) = 1 686 096
7.2 (D: 3.2, SPO: 4.0)
Observed: cD = 48 % (calculated including detectors having some kind of self-test arrangement only)

OREDA phase III database [8]
Data relevant for conventional flame detectors. IR (52 %), UV (13 %) and combined IR/UV (35 %) detectors are included. Of the detectors, 75 % have automatic loop test, 3 % have built-in self-test, 15 % have a combination of automatic loop and built-in self-test, the residual (11 %) has no self-test feature.
No. of inventories = 1 010
No. of failures = 292
Cal. time = 23 136 820 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.

λDU = 1.4 per 106 hrs Data from review of safety critical failures on
Norwegian onshore plant. Data applicable for IR flame
detectors

No. of inventories = 580 detectors


No. of critical DU failures = 7 1)
Cal. time = 5 080 800 hrs 2)
1)
The review focused on DU failures, but classification of other failure
modes was also performed; 2 DD and 43 safe failures were also
registered.
2)
One year of operation

λDU = 0.2 per 106 hrs 1) Data from review of safety critical failures on
Norwegian semi-submersible platform. Data applicable
for IR flame detectors
1)
when assuming one
failure (occurring No. of inventories = 241 detectors
tomorrow) No. of critical DU failures = 0 2)
Cal. time = 4 222 321 hrs 3)
2)
The failure review focused on DU failures, but classification of other
failure modes was also performed; 3 DD and 3 safe failures were also
registered.
3)
Two years of operation


Module: Input Devices


PDS Reliability Data Dossier
Component: Flame Detector
λDU = 1.8 per 106 hrs Exida [15]: Generic fire/flame detector

SFF = 69%
D: 2.0 Oseberg C report [18]
4.5 ST: 2.5 Data relevant for IR flame detectors.
No. of inventories = 162
No. of failures = 30 (18 critical)
Time = 3 978 240 hrs
Note! It is assumed that only failures classified as
"critical" are included in the failure rate esti-
mates.
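
For the semi-submersible platform entry above, no DU failures were observed, and the quoted rate follows the "assuming one failure (occurring tomorrow)" convention stated in that entry: one failure is postulated over the accumulated time. A sketch of that arithmetic, using only the figures quoted in the entry:

    # Zero-failure data: postulate one DU failure over the accumulated calendar time
    n_DU_postulated = 1        # no DU failures observed; one failure assumed to occur "tomorrow"
    T_hours = 4_222_321        # accumulated calendar time for the 241 detectors (two years)
    lam_DU = n_DU_postulated / T_hours * 1e6   # ≈ 0.24 per 10^6 hrs, quoted as 0.2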

5.1.13 H2S Detector

Module: Input Devices


PDS Reliability Data Dossier
Component: Catalytic H2S Detector

Description / equipment boundaries Date of Revision


The detector includes the sensor and 2009-12-18
local electronics such as the address-/ Remarks
interface unit.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.0 per 106 hrs cD = 0.50 λDU = 0.5 per 106 hrs

λS = 0.3 per 106 hrs cS = 0.30 λSU = 0.2 per 106 hrs

λcrit = 1.3 per 106 hrs PTIF = 5 · 10-4

r = 0.4
Failure Rate Assessment
The failure rate estimate is based on OREDA phase V data as well as expert judgement and other data sources. The rate of DU failures is primarily based on reported “Fail to function on demand” failures, even though these failures have been reported as degraded rather than critical failures in OREDA phase V. The coverage factors for dangerous and safe failures are assumed to be similar to those for catalytic gas detectors. The same distribution between dangerous and safe failures as for catalytic gas detectors is also assumed.

The PTIF is based on expert judgements and on the assumption that the detectors are exposed. The
estimated r value is assumed the same as for catalytic gas detectors. A summary of some of the
main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λDU = 0.4 per 106 hrs OREDA phase V database, [6]
Data relevant for H2S gas detectors
(based on reported
“Fail to function on Filter:
demand” failures) Inv. Equipment class = Fire & Gas Detectors AND
Inv. Equipment type = H2S gas AND
Inv. Att. Sensing principle = H2S gas AND
Inv. Phase = 5 AND

No. of inventories = 542


No. of critical failures = 0
No. of degraded failures = 157
No. of fail to function failures = 6
Surveillance time = 16 769 160


Module: Input Devices


PDS Reliability Data Dossier
Component: Catalytic H2S Detector
λDU = 0.4 per 106 hrs Exida [15]: Toxic (electrochemical) gas sensor
λSU = 0.2 per 106 hrs
Infrequent presence of toxic gas
SFF = 93%
λDU = 1.95 per 106 hrs Exida [15]: Toxic (electrochemical) gas sensor
λSU = 0.2 per 106 hrs
Normal presence of toxic gas
SFF = 67%

5.1.14 ESD Push Button

Module: Input Devices


PDS Reliability Data Dossier
Component: ESD Push Button

Description / equipment boundaries Date of Revision


Pushbutton including wiring 2009-12-18
Remarks
It is assumed that line monitoring and termination
resistors are implemented.

Recommended Values for Calculation


Total rate Coverage Undetected rate

λD = 0.5 per 106 hrs cD = 0.20 λDU = 0.4 per 106 hrs

λS = 0.3 per 106 hrs cS = 0.10 λSU = 0.3 per 106 hrs

λcrit = 0.8 per 106 hrs PTIF = 1 · 10-5

r = 0.8
Failure Rate Assessment
The failure rate is based on all listed data sources, also taking into account some expert
judgements. As compared to the 2006 estimate some additional experience from two operational
reviews has been added.

The PTIF as well as the r values are entirely based on expert judgements. A summary of some of
the main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λD = 0.5 per 106 hrs Previously recommended values for calculation
λcrit = 0.9 λDU = 0.4 per 106 hrs in 2006-edition
λSTU = 0.4 per 106 hrs
PTIF = 10-5 Assumed cD = 20%

λD = 0.3 per 106 hrs Previously recommended values for calculation


λcrit = 1.1 λDU = 0.2 per 106 hrs in 2003- and 2004 editions [13] and [14]
λSTU = 0.6 per 106 hrs
PTIF = 10-5 Assumed cD = 20%


Module: Input Devices


PDS Reliability Data Dossier
Component: ESD Push Button

λDU = 1.2 per 106 hrs Data from review of safety critical failures on
Norwegian onshore plant. Data applicable for
manual initiators / pushbuttons

No. of inventories = 93
1)
No. of critical DU failures = 1
Cal. time = 814 680 hrs 2)
1)
The review focused on DU failures, no failures were classified
as DD or safe.
2)
One year of operation

λDU = 0.2 per 106 hrs 1) Data from review of safety critical failures on
λS = 0.2 per 106 hrs 1) Norwegian semi-submersible platform. Data
applicable for manual initiators / pushbuttons
1)
when adding the
experience from onshore No. of inventories = 203
2)
plant and offshore No. of critical DU failures = 0
installation together Cal. time = 3 556 560 hrs 3)
2)
The failure review focused on DU failures, 1 additional failure
was classified as safe.
3) Two years of operation

λDU = 0.8 per 106 hrs Exida [15]: Generic push button
λSU = 0.2 per 106 hrs
SFF = 20%

5.2 Control Logic Units
Below, reliability figures for control logic units are given. Data are given for standard industrial PLCs, programmable safety systems and hardwired safety systems respectively. The following general assumptions and notes apply throughout sections 5.2.1 to 5.2.3:

• A single system with analogue input, CPU/logic and digital output configuration is generally assumed (a combined single-path example is sketched after this list);
• For the input and output part, figures are given for one channel plus the common part of the input/output card (except for the hardwired safety system, where figures for one channel only are given);
• A single CPU / logic part is assumed throughout;
• If the figures for input and output are to be used for redundant configurations, separate
input cards and output cards must be used since the given figures assume a common part
on each card;
• If separate Ex barriers or other interface devices are used, figures for these must be added
separately;
• The systems are generally assumed used in de-energised to trip functions, i.e. loss of
power or signal will result in a safe state.
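
As indicated in the first bullet above, a complete control logic path consists of an analogue input, a CPU/logic part and a digital output in series, so the module rates given in the following dossiers can simply be added when a figure for one whole path is needed. A sketch of this, using the recommended λDU values for a standard industrial PLC from section 5.2.1 (an illustration of the series assumption, not an additional recommended value):

    # One single control logic path = analogue input + CPU/logic + digital output (in series)
    lam_DU_input  = 0.7   # analogue input, per 10^6 hrs (section 5.2.1.1)
    lam_DU_cpu    = 3.5   # CPU/logic, per 10^6 hrs (section 5.2.1.2)
    lam_DU_output = 0.7   # digital output, per 10^6 hrs (section 5.2.1.3)

    lam_DU_path = lam_DU_input + lam_DU_cpu + lam_DU_output   # 4.9 per 10^6 hrs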


5.2.1 Standard Industrial PLC

5.2.1.1 Analogue Input


Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: Analogue Input

Description / equipment boundaries Date of Revision


Analogue input part of standard 2009-12-18
industrial PLC including one analogue Remarks
input channel and common part of input
The data is applicable for a non SIL rated standard
card.
industrial PLC, used for de-energised to trip functions
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.8 per 106 hrs cD = 0.6 λDU = 0.7 per 106 hrs

λS = 1.8 per 106 hrs cS = 0.2 λSU = 1.4 per 106 hrs

λcrit = 3.6 per 106 hrs PTIF = N/A *

r = 0.1
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.

In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different elements. Based on these estimates as well as
additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 15%, 70% and 15% respectively.

The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.

*Note that for control logic units only a PTIF for the CPU is given.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 30 λD = 15 per 106 hrs Recommended values for calculation in 2006-
λDU = 5.0 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 12 per 106 hrs (standard industrial PLC - single system)

PTIF = 5·10-4 Assumed coverage cD = 67%

Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: Analogue Input
λDU = 0.3 per 106 hrs 1)
Exida [15]: Analogue in - general purpose PLC
λDD = 0.8 per 106 hrs 1) (1oo1)
1)
λSU = 0.2 per 106 hrs 1) Includes one analogue in module and one channel
λSD = 0.9 per 106 hrs 1)
SFF = 84 % (analogue input module + 3 ch’s.)


5.2.1.2 Central Processing Unit (CPU)


Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: CPU

Description / equipment boundaries Date of Revision


Logic part of standard industrial PLC 2009-12-18
including single CPU, memory, watch- Remarks
dog, electronics, bus, communication,
The data is applicable for a non SIL rated standard
etc.
industrial PLC, used for de-energised to trip
functions
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 8.8 per 106 hrs cD = 0.6 λDU = 3.5 per 106 hrs

λS = 8.8 per 106 hrs cS = 0.2 λSU = 7.0 per 106 hrs

λcrit = 17.6 per 106 hrs PTIF = 5 · 10-4

r = 0.1
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.

In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different elements. Based on these estimates as well as
additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 15%, 70% and 15% respectively.

The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 30 λD = 15 per 106 hrs Recommended values for calculation in 2006-
λDU = 5.0 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 12 per 106 hrs (standard industrial PLC - single system)

PTIF = 5·10-4 Assumed coverage cD = 67%

Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: CPU

λcrit = 32 λD = 16 per 106 hrs Recommended values for calculation in 2003-


λDU = 1.6 per 106 hrs edition, [13]
λSTU = 1.6 per 106 hrs
Assumed coverage cD = 90%
PTIF = 5·10-5 - 5·10-4 1)
1)
For TÜV certified and standard system, respectively
75.0 D: 59.4 OREDA phase IV database [6]
ST: 15.6 Data relevant for control logic units including
I/O-cards. Both PLCs (14 %) and computers
Observed: (86 %) are included. The control logic units are
cD = 93 % used both in ESD/PSD system (70 %) and F&G
cST = 88 % systems (30 %).

Filter:
Inv. Equipment class = Control Logic Units AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical

No. of inventories = 71
No. of critical D failures = 103
No. of critical ST failures = 27
Cal. time = 1 733 664
λDU = 1.5 per 106 hrs 1)
Exida: Main processor – general purpose PLC
λDD = 3.7 per 106 hrs 1) (1oo1)
λSU = 0.7 per 106 hrs 1)
1)
λSD = 9.1 per 106 hrs 1) Includes main processor and power supply

SFF = 85% (main processor)


= 99.7 % (power supply)


5.2.1.3 Digital Output


Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: Digital Output

Description / equipment boundaries Date of Revision


Digital output part of standard industrial 2009-12-18
PLC including one digital output Remarks
channel and common part of output card.
The data is applicable for a non SIL rated standard
industrial PLC, used for de-energised to trip
functions.
If a relay output is used, figures for a relay should be
added.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.8 per 106 hrs cD = 0.6 λDU = 0.7 per 106 hrs

λS = 1.8 per 106 hrs cS = 0.2 λSU = 1.4 per 106 hrs

λcrit = 3.6 per 106 hrs PTIF = N/A *

r = 0.1
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.

In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different elements. Based on these estimates as well as
additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 15%, 70% and 15% respectively.

The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.

*Note that for control logic units only a PTIF for the CPU is given.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 30 λD = 15 per 106 hrs Recommended values for calculation in 2006-
λDU = 5.0 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 12 per 106 hrs (standard industrial PLC - single system)

PTIF = 5·10-4 Assumed coverage cD = 67%

Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: Digital Output
λDU = 0.2 per 106 hrs 1)
Exida [15]: Digital out - general purpose PLC
λDD = 0.4 per 106 hrs 1) (1oo1)
λSU = 0.1 per 106 hrs 1)
1)
λSD = 0.5 per 106 hrs 1) Includes one digital out low module and one channel

SFF = 80% (digital out low module + 2 ch’s.)


5.2.2 Programmable Safety System

5.2.2.1 Analogue Input


Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: Analogue Input

Description / equipment boundaries Date of Revision


Analogue input part of programmable 2009-12-18
safety system including one analogue Remarks
input channel and common part of input
The data is applicable for a SIL certified
card.
programmable safety system, used for de-energised
to trip functions
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.6 per 106 hrs cD = 0.9 λDU = 0.16 per 106 hrs

λS = 1.6 per 106 hrs cS = 0.2 λSU = 1.3 per 106 hrs

λcrit = 3.2 per 106 hrs PTIF = N/A *

r = 0.4
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.

In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different functional parts. Based on these estimates as
well as additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 20%, 60% and 20% respectively.
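
The stated split can be checked against the recommended values in the three dossiers of section 5.2.2: with an implied overall single-system λcrit of about 16 per 10^6 hours, a 20 % / 60 % / 20 % distribution reproduces the module rates. A sketch of that arithmetic (the 16 per 10^6 hrs total is inferred from the dossiers, not stated explicitly):

    # Distributing an overall critical failure rate between input, logic and output
    lam_crit_total = 16.0    # per 10^6 hrs, implied single-system total (inference, not stated)
    shares = {"analogue input": 0.20, "CPU/logic": 0.60, "digital output": 0.20}

    split = {part: lam_crit_total * share for part, share in shares.items()}
    # -> analogue input: 3.2, CPU/logic: 9.6, digital output: 3.2 (per 10^6 hrs), matching the dossiers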

The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.

*Note that for control logic units only a PTIF for the CPU is given.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 20 λD = 10 per 106 hrs Recommended values for calculation in 2006-
λDU = 1.0 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 8 per 106 hrs (programmable safety system - single system)

PTIF = 5·10-5 Assumed coverage cD = 90%

Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: Analogue Input
λDU = 0.1 per 106 hrs 1) Exida [15]: Analogue in – generic SIL2 certified
λDD = 0.9 per 106 hrs 1) PLC (1oo1D)
1)
λSU = 0.1 per 106 hrs 1) Includes one analogue in module and one channel
λSD = 1.0 per 106 hrs 1)
SFF = 95 % (analogue input module + 1 channel)
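
For the Exida rows, SFF is the safe failure fraction as defined in IEC 61508, i.e. the share of the total failure rate that is either safe or dangerous detected. With the SIL 2 analogue input figures above, this reproduces the quoted 95 % (a check of the definition against the four rates quoted, nothing more):

    # Safe failure fraction from the Exida SIL 2 analogue input figures above (per 10^6 hrs)
    lam_DU, lam_DD = 0.1, 0.9
    lam_SU, lam_SD = 0.1, 1.0

    SFF = (lam_SU + lam_SD + lam_DD) / (lam_DU + lam_DD + lam_SU + lam_SD)   # ≈ 0.95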


5.2.2.2 Central Processing Unit (CPU)


Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: CPU

Description / equipment boundaries Date of Revision


Logic part of programmable safety 2009-12-18
system including single CPU, memory, Remarks
watchdog, electronics, bus,
The data is applicable for a SIL certified
communication, etc.
programmable safety system, used for de-energised
to trip functions
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 4.8 per 106 hrs cD = 0.9 λDU = 0.48 per 106 hrs

λS = 4.8 per 106 hrs cS = 0.2 λSU = 3.8 per 106 hrs

λcrit = 9.6 per 106 hrs PTIF = 5 · 10-5

r = 0.4
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.

In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different functional parts. Based on these estimates as
well as additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 20%, 60% and 20% respectively.

The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 20 λD = 10 per 106 hrs Recommended values for calculation in 2006-
λDU = 1.0 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 8 per 106 hrs (programmable safety system - single system)

PTIF = 5·10-5 Assumed coverage cD = 90%

Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: CPU

λcrit = 32 λD = 16 per 106 hrs Recommended values for calculation in 2003-


λDU = 1.6 per 106 hrs edition, [13]
λSTU = 1.6 per 106 hrs
Assumed coverage cD = 90%
PTIF = 5·10-5 - 5·10-4 1)
1) For TÜV certified and standard system, respectively

75.0 D: 59.4 OREDA phase IV database [6]


ST: 15.6 Data relevant for control logic units including
I/O-cards. Both PLCs (14 %) and computers
Observed: (86 %) are included. The control logic units are
cD = 93 % used both in ESD/PSD system (70 %) and F&G
cST = 88 % systems (30 %).

Filter:
Inv. Equipment class = Control Logic Units AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical

No. of inventories = 71
No. of critical D failures = 103
No. of critical ST failures = 27
Cal. time = 1 733 664
λDU = 0.2 per 106 hrs 1)
Exida: Main processor – generic SIL 2 certified
λDD = 2.9 per 106 hrs 1) PLC (1oo1D)
λSU = 0.1 per 106 hrs 1)
1)
λSD = 9.2 per 106 hrs Includes main processor and power supply
1)
SFF = 98.5% (main processor)
= 100 % (power supply)


5.2.2.3 Digital Output


Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: Digital Output

Description / equipment boundaries Date of Revision


Digital output part of programmable 2009-12-18
safety system including one digital Remarks
output channel and common part of
The data is applicable for a SIL certified
output card.
programmable safety system, used for de-energised
to trip functions. If a relay output is used, figures for
a relay should be added.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.6 per 106 hrs cD = 0.9 λDU = 0.16 per 106 hrs

λS = 1.6 per 106 hrs cS = 0.2 λSU = 1.3 per 106 hrs

λcrit = 3.2 per 106 hrs PTIF = N/A *

r = 0.4
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.

In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different functional parts. Based on these estimates as
well as additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 20%, 60% and 20% respectively.

The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.

*Note that for control logic units only a PTIF for the CPU is given.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 20 λD = 10 per 106 hrs Recommended values for calculation in 2006-
λDU = 1.0 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 8 per 106 hrs (programmable safety system - single system)

PTIF = 5·10-5 Assumed coverage cD = 90%

Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: Digital Output
λDU = 0.01 per 106 hrs 1) Exida [15]: Digital out – generic SIL 2 certified
λDD = 0.25 per 106 hrs 1) PLC (1oo1D)
λSU = 0.01 per 106 hrs 1) 1)
λSD = 0.93 per 106 hrs 1) Includes one digital out low module and one channel

SFF = 99% (digital out low module + 1 ch.)


5.2.3 Hardwired Safety System

5.2.3.1 Trip Amplifier / Analogue Input


Module: Control Logic Units – Hardwired Safety System
PDS Reliability Data Dossier
Component: Trip Amplifier / Analogue Input

Description / equipment boundaries Date of Revision


Input part of hardwired safety system 2009-12-18
including one analogue input channel Remarks
The data is applicable for a SIL certified hardwired
safety system, used for de-energised to trip functions
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 0.04 per 106 hrs cD = 0 λDU = 0.04 per 106 hrs

λS = 0.4 per 106 hrs cS = 0 λSU = 0.4 per 106 hrs

λcrit = 0.44 per 106 hrs PTIF = N/A *

r = 0.8
Assessment
Based on the estimate in the 2006-handbook and input from system vendor (Bjørge Safety
System), a distribution between analogue input, logic and digital output of 40%, 30% and 30%
respectively has been assumed.

The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or they will result in a trip action (SU). Hence, the
coverage for both dangerous and safe failures is assumed to be zero.

The PTIF and r values are based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.

*Note that for control logic units only a PTIF for the logic unit is given.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 2 λD = 1 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.1 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 1.0 per 106 hrs (hardwired safety system - single system)

PTIF = 5·10-5 Assumed coverage cD = 90%

5.2.3.2 Logic
Module: Control Logic Units – Hardwired Safety System
PDS Reliability Data Dossier
Component: Logic

Description / equipment boundaries Date of Revision


Logic part of hardwired safety system 2009-12-18
including AND-, OR circuits etc. and Remarks
wiring.
The data is applicable for a SIL certified hardwired
safety system, used for de-energised to trip functions
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 0.03 per 106 hrs cD = 0 λDU = 0.03 per 106 hrs

λS = 0.3 per 106 hrs cS = 0 λSU = 0.3 per 106 hrs

λcrit = 0.33 per 106 hrs PTIF = 5 · 10-6

r = 0.8
Assessment
Based on the estimate in the 2006-handbook and input from system vendor (Bjørge Safety
System), a distribution between analogue input, logic and digital output of 40%, 30% and 30%
respectively has been assumed.

The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or they will result in a trip action (SU). Hence, the
coverage for both dangerous and safe failures is assumed to be zero.

The PTIF and r values are based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 2 λD = 1 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.1 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 1.0 per 106 hrs (hardwired safety system - single system)

PTIF = 5·10-5 Assumed coverage cD = 90%


5.2.3.3 Digital Output


Module: Control Logic Units – Hardwired Safety System
PDS Reliability Data Dossier
Component: Digital Output

Description / equipment boundaries Date of Revision


Output part of hardwired safety system 2009-12-18
including one digital output channel Remarks
The data is applicable for a SIL certified hardwired
safety system, used for de-energised to trip functions.
If a relay output is used, figures for a relay should be
added.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 0.03 per 106 hrs cD = 0 λDU = 0.03 per 106 hrs

λS = 0.3 per 106 hrs cS = 0 λSU = 0.3 per 106 hrs

λcrit = 0.33 per 106 hrs PTIF = N/A *

r = 0.8
Assessment
Based on the estimate in the 2006-handbook and input from system vendor (Bjørge Safety
System), a distribution between analogue input, logic and digital output of 40%, 30% and 30%
respectively has been assumed.

The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or they will result in a trip action (SU). Hence, the
coverage for both dangerous and safe failures is assumed to be zero.

The PTIF and r values are based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.

*Note that for control logic units only a PTIF for the logic unit is given.

Failure Rate References


Overall
Failure mode
failure rate Data source/comment
distribution
(per 106 hrs)
λcrit = 2 λD = 1 per 106 hrs Recommended values for calculation in 2006-
λDU = 0.1 per 106 hrs edition, [12]. Apply for input, logic and output
λSTU = 1.0 per 106 hrs (hardwired safety system - single system)

PTIF = 5·10-5 Assumed coverage cD = 90%

5.3 Final Elements

5.3.1 ESV/XV

Module: Final Elements


PDS Reliability Data Dossier
Component: ESV/XV (ex. pilot)

Description / equipment boundaries Date of Revision


Main valve including actuator. Not 2009-12-18
including pilot valve. Valve/actuator Remarks
assumed to be spring return to closed ESV/XV incl. actuator (ex. pilot). Full stroke with
position. tight shut off
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 3.0 per 106 hrs cD = 0.30 λDU = 2.1 per 106 hrs

λS = 2.3 per 106 hrs cS = 0.10 λSU = 2.1 per 106 hrs

λcrit = 5.3 per 106 hrs PTIF = 1 · 10-4 (standard functional testing)

r = 0.5


Module: Final Elements


PDS Reliability Data Dossier
Component: ESV/XV (ex. pilot)

Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook [12]. Data from OREDA phase V-VII indicate a higher rate of dangerous failures as compared to the previous estimate. Also, the data (and other sources) indicate a somewhat lower proportion of safe failures as compared to the 2006 estimate.

Data from RNNP for the period 2003-2008 for riser ESVs has also been reviewed. In total some 6 239 valve tests have been performed during this period, resulting in 96 failures. Based on this, a λDU = 1.8 · 10-6 (incl. pilot valve) can be estimated. It should be noted that this only includes failures revealed through functional testing.
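
The RNNP-based figure can be reproduced by treating each functional test as covering roughly one year of operation, i.e. λDU ≈ (failures revealed by testing) / (number of tests × test interval). A sketch of that arithmetic; note that the average test interval of one year is an assumption made here to back out the quoted number, it is not stated in the RNNP material:

    # Back-of-envelope reproduction of the RNNP-based lambda_DU for riser ESVs
    n_failures = 96
    n_tests = 6_239
    tau_hours = 8_760     # assumed average test interval of one year (assumption, not from RNNP)

    lam_DU = n_failures / (n_tests * tau_hours)   # ≈ 1.8e-6 per hour, as quoted

The corresponding X-mas tree figures in section 5.3.2 (317 failures over 29 032 tests) reproduce the quoted 1.2 · 10-6 in the same way.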

As seen, there is a relatively large difference between the failure rate indicated by the RNNP data and the rate obtained from the OREDA phase V-VII data. The RNNP data include only riser ESV data, which may be one explanation (tighter follow-up of riser valves). The main reason, however, is assumed to be that the OREDA data include a large portion of dangerous failures revealed in between tests by other detection methods, whereas RNNP only reports test data.

The coverage for dangerous failures has been slightly increased to 30% due to information from OREDA phase V-VII, where it appears that a high fraction of dangerous failures (more than 50%) are detected by operator observation. It should be noted that this is not diagnostic coverage in its true meaning (e.g. ref. the IEC definition), but it does imply that dangerous faults are detected in between tests. For valves that are never operated except for testing, the coverage should therefore be lower.

The size of the PTIF will vary depending on the completeness of the functional testing. Here, a
standard functional test where the valve is fully closed but not tested for internal leakage has been
assumed.

The estimated r is based on reported failure causes in OREDA as well as expert judgements. A
summary of some of the main arguments is provided in section 3.3.

Failure Rate References


Overall
failure rate Failure mode distribution Data source/comment
(per 106 hrs)
λcrit = 5.4 λD = 2.7 per 106 hrs Recommended values for calculation in 2006-
λDU = 2.0 per 106 hrs edition [12]
λSTU = 2.7 per 106 hrs
Assumed cD = 25%
PTIF = 1 · 10-5
= 1 · 10-4 1)
For extended, standard and incomplete functional testing
= 1 · 10-3 respectively.

Module: Final Elements
PDS Reliability Data Dossier
Component: ESV/XV (ex. pilot)

λcrit = 5.4 λD = 2.7 per 106 hrs Recommended values for calculation in 2004-
λDU = 2.0 per 106 hrs edition [13]
λSTU = 2.7 per 106 hrs
Assumed cD = 25%
PTIF = 10-6 - 10-5 1)
1)
For complete and incomplete functional testing
respectively.

λDU = 1.3 per 106 hrs Previously recommended values for calculation in
λcrit = 1.6 λSTU = 0.3 per 106 hrs 2003-edition [14]
λD / λST = 4.3 1)
PTIF = 10-6 - 10-5 1) For complete and incomplete functional testing
respectively.

λcrit = 12.3 (λD = 8.5, λS = 3.8) OREDA phase V - VII database
Data relevant for topside ESD, ESD/PSD and PSD valves, excluding the pilot and control & monitoring.

Observed:
cD = 55% 1)
cST = 24% 1)
1) OREDA reporting on detection method partly incomplete, especially on safe failures, so additional judgements required.

Filter:
Inv. Equipment class = VALVES AND
(Inv. System = Gas export OR
 Inv. System = Gas processing OR
 Inv. System = Oil export OR
 Inv. System = Oil processing OR
 Inv. System = Condensate processing OR
 Inv. System = Crude oil handling OR
 Inv. System = Gas (re)injection) AND
Inv. OREDA Phase = 5-7 AND
Inv. Att. Application = ESD AND
Inv. Att. Application = ESD/PSD AND
Inv. Att. Application = ESDPSD AND
Inv. Att. Application = PSD AND
(Fail. Item Failed <> Pilot valve AND
 Fail. Subunit Failed <> Control & Monitoring) AND
Fail. Severity Class = Critical

No. of installations = 13
No. of inventories = 125
No. of critical D failures = 47
No. of critical ST failures = 21
Surveillance Time (hours) = 5 517 120


Module: Final Elements


PDS Reliability Data Dossier
Component: ESV/XV (ex. pilot)

3.1 D: 2.4 OREDA phase IV database [6]


ST: 0.7 Data relevant for process ESD/PSD valves,
excluding the pilot and control & monitoring.
Observed:
cD = 40 % Filter:
cST = N/A Inv. Equipment class = VALVES AND
(Inv. System = Gas export OR
Inv. System = Gas processing OR
Inv. System = Oil export OR
Inv. System = Oil processing OR
Inv. System = Emergency shutdown) AND
Inv. Phase = 4 AND
Inv. Att. Application = ESD/PSD AND
Fail. Severity Class = Critical AND
(Fail. Item Failed <> Pilot valve AND
Fail. Subunit Failed <> Control & Monitoring)

No. of inventories = 140


No. of critical D failures = 11
No. of critical ST failures = 3
Cal. time = 4 495 272
λDU = 3.2 per 106 hrs 1) Exida [15]: Generic air operated ball valve, hard seat, spring return (data includes critical failure modes related to full stroke with tight shut-off)
λSU = 0.5 per 106 hrs 1)
λDD = 0.7 per 106 hrs 2)
λDU = 2.5 per 106 hrs 2)
λSD = 0.5 per 106 hrs 2)
SFF = 14% (normal operation)
SFF = 32% (partial stroke testing)
1) Normal operation
2) Including partial stroke testing

λDU = 2.4 per 106 hrs 1) Exida [15]: Generic air operated gate valve, spring return (data includes critical failure modes related to full stroke with tight shut-off)
λSU = 0.5 per 106 hrs 1)
λDD = 0.6 per 106 hrs 2)
λDU = 1.8 per 106 hrs 2)
λSD = 0.5 per 106 hrs 2)
SFF = 17% (normal operation)
SFF = 38% (partial stroke testing)
1) Normal operation
2) Including partial stroke testing

λDU = 3.1 per 106 hrs 1) Exida [15]: Generic hydraulic operated ball valve, spring return (data includes critical failure modes related to full stroke with tight shut-off)
λSU = 0.5 per 106 hrs 1)
λDD = 0.7 per 106 hrs 2)
λDU = 2.5 per 106 hrs 2)
λSD = 0.5 per 106 hrs 2)
SFF = 13% (normal operation)
SFF = 31% (partial stroke testing)
1) Normal operation
2) Including partial stroke testing

5.3.2 ESV, X-mas Tree

Module: Final Elements


PDS Reliability Data Dossier
Component: ESV, X-mas Tree Valve (ex. pilot)

Description / equipment boundaries Date of Revision


Hydraulically operated production master, 2009-12-18
wing and swab valves. Main valve Remarks
including actuator. Not including pilot Topside X-mas tree ESV incl. actuator (ex. pilot).
valve. Full stroke with tight shut off.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.1 per 106 hrs cD = 0.30 λDU = 0.8 per 106 hrs

λS = 0.9 per 106 hrs cS = 0.10 λSU = 0.8 per 106 hrs

λcrit = 2.0 per 106 hrs PTIF = 1 · 10-4 (standard functional testing)

r = 0.5
Assessment
The failure rate estimate is an update of the previous 2006 estimate [12] (which was based primarily on OREDA phase III data, with some OREDA phase IV data). Additional data from phase VII on X-mas tree valves indicate a somewhat lower dangerous failure rate than the OREDA phase III data, but the aggregated exposure time is smaller in phase VII than in phase III.

Data from RNNP for the period 2003-2008 for X-mas tree wing and master valves has also been reviewed. In total some 29 032 valve tests have been performed during this period, resulting in 317 failures. Based on this, a λDU = 1.2 · 10-6 can be estimated. It should be noted that this only includes failures revealed through functional testing. Also, note that the RNNP data include the entire valve, i.e. also the pilot valve, and are therefore not directly comparable to the OREDA data, where the pilot valve has been excluded.

Based on new data from OREDA and RNNP, it appears that the rate of dangerous failures may be
somewhat lower than previously assumed. The amount of new OREDA data is however
somewhat scarce and the RNNP data is not directly comparable. The rate of DU failures has
therefore been kept in line with the 2006 estimate.

For similar reasons as for the ESV/XV valves, the coverage for dangerous failures has been slightly
increased from 25% to 30%. As for the ESV/XVs, the proportion of safe failures (as compared to
dangerous failures) has been reduced in line with data from OREDA and other sources.

The size of the PTIF will vary depending on the completeness of the functional testing. Here, a
standard functional test where the valve is fully closed but not tested for internal leakage has been
assumed.

The estimated r is based on reported failure causes in OREDA as well as expert judgements. A
summary of some of the main arguments is provided in section 3.3.

Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λcrit = 2.1  λD = 1.1 per 106 hrs  Recommended values for calculation in 2006-edition [12]
λDU = 0.8 per 106 hrs
λSTU = 1.0 per 106 hrs
Assumed cD = 25%
PTIF = 1 · 10-5 / 1 · 10-4 / 1 · 10-3 1)
1) For extended, standard and incomplete functional testing respectively.

λcrit = 2.1 λD = 1.1 per 106 hrs Recommended values for calculation in 2004-edition
λDU = 0.8 per 106 hrs [13]
λSTU = 1.0 per 106 hrs
cD = 25%
PTIF = 10-6 - 10-5 1)
1) For complete and incomplete functional testing respectively.

λcrit = 1.5  λDU = 0.8 per 106 hrs  Previously recommended values for calculation in 2003-edition [14]
λSTU = 0.5 per 106 hrs
λD / λST = 1.1
PTIF = 10-6 - 10-5 1)
1) For complete and incomplete functional testing respectively.

λcrit = 0.8 λD =0.8 per 106 hrs OREDA phase VII database [6]
Data relevant for x-mas tree production and injection
valves

Observed: Filter:
cD = N/A Inv. Eq. Class = Valves AND
cS = N/A Inv. Phase = 7 AND
(Inv. Att. Application = PROD MASTER OR
Inv. Att. Application = PROD WING OR
Inv. Att. Application = PROD SWAB OR
Inv. Att. Application = INJ MASTER) AND
(Fail. Item Failed <> Pilot valve AND
Fail. Subunit Failed <> Control & Monitoring)

No. of inventories = 148


No. of critical D failures = 2 (no critical safe failures)
Cal. time = 2 578 488


1.1 D: 0.0 OREDA phase IV database [6]


ST: 1.1 Data relevant for hydraulically operated wellhead
master valves, swab valves and wing valves.
Observed:
cD = N/A Filter:
cS = N/A Inv. Eq. Class = Wellheads And X-mas Trees AND
(Inv. System = Gas production OR
Inv. System = Oil Production OR
Inv. System = Gas re-injection) AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical AND
(Fail. Item Failed = Prod. master valve, hyd. op. OR
Fail. Item Failed = Prod. swab valve, hyd. op. OR
Fail. Item Failed = Prod. wing valve, hyd. op.)

No. of inventories = 18
No. of critical D failures = 0
No. of critical ST failures = 1
Cal. time = 902 976
DOP: 0.15 OREDA phase III database [8]
Crit: 7.36 EXL: 1.84 Data relevant for wellhead ESD/PSD valves, main
FTC: 0.77 valve or actuator.
FTO: 0.46
INL: 2.30 No. of inventories = 349
LCP: 1.69 Number of critical failures = 48
PLU: 0.15 Cal. time = 6 518 058 hrs


5.3.3 Blowdown Valve

Module: Final Elements


PDS Reliability Data Dossier
Component: Blowdown Valve (ex. pilot)

Description / equipment boundaries Date of Revision


Blowdown valve including actuator. Not 2009-12-18
including pilot valve. Valve/actuator Remarks
assumed to be spring return to open Blowdown valve incl. actuator (ex. pilot). Valve de-
position. energised to open.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 2.6 per 106 hrs cD = 0.20 λDU = 2.1 per 106 hrs

λS = 1.3 per 106 hrs cS = 0 λSU = 1.3 per 106 hrs

λcrit = 3.9 per 106 hrs PTIF = 1 · 10-4

r = 0.5
Assessment
The failure rate for blowdown valves is an update of the previous estimate in the 2006 handbook
[12], based on new data from OREDA phase V and VI as well as data from RNNP.

Data from RNNP for the period 2004-2008 for blowdown valves have been reviewed. In total
some 15392 valve tests have been performed during this period, resulting in 397 failures. Based
on this, a λDU of 2.9 · 10-6 per hour (incl. pilot valve) can be estimated. This is in line with the DU estimate
for blowdown valves given in the 2006 PDS handbook.

Data from OREDA phase V-VII, on the other hand, indicate a lower rate of dangerous (and safe)
failures as compared to the 2006 estimate, which was primarily based on OREDA phase IV data.
The amount of recorded data from phase V-VII is, however, significantly smaller than for phase IV
(approximately half the surveillance time).

Based on the above, the rate of DU failures has been kept approximately the same as in the 2006
edition, whereas the rate of safe failures has been somewhat reduced. The coverage for dangerous
failures has been reduced to 20%, since blowdown valves will rarely be operated between
tests and therefore few dangerous failures will be detected by operator observation.

The PTIF and the r values are assumed the same as for ESV/XV valves (where the PTIF is given
assuming a normal/average functional testing standard).
Failure Rate References
Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment


λcrit = 5.4 λD = 2.7 per 106 hrs Recommended values for calculation in 2006-
λDU = 2.0 per 106 hrs edition [12]
λSTU = 2.7 per 106 hrs
PTIF = 10-4
λcrit = 3.7  λD = 2.7 per 106 hrs  Recommended values for calculation in 2004-edition [13]
λDU = 2.0 per 106 hrs
λSTU = 1.0 per 106 hrs
PTIF = 10-6 - 10-5 1)
1) For complete and incomplete functional testing respectively
λcrit = 2.0 λD = 1.6 per 106 hrs OREDA phase V-VII database, [4] and [6]
λS = 0.4 per 106 hrs Data relevant for blowdown valves.
Note: these data also include the pilot valve

Observed: Filter:
cD = N/A Inv. Equipment class = VALVES AND
cST = N/A Inv. Att. Application = BLOWDOWN AND
Inv. OREDA Phase = 5 - 7 AND
Fail. Severity Class = Critical

No. of inventories = 50
No. of critical D failures = 4
No. of critical S failures = 1
Surveillance Time (hours) = 2 442 984
6.40 D: 5.52 OREDA phase IV database [6]
ST: 0.88 Data relevant for blowdown valves.
Note: these data also include the pilot valve
Observed:
cD = N/A Filter:
cST = N/A Inv. Equipment class = VALVES AND
Inv. Att. Application = BLOWDOWN AND
Inv. OREDA Phase = 4

No. of inventories = 92
No. of critical D failures = 25
No. of critical S failures = 4
Surveillance Time (hours) = 4 532 640


5.3.4 Pilot/Solenoid Valve

Module: Final Elements


PDS Reliability Data Dossier
Component: Pilot/Solenoid Valve

Description Date of Revision


Pilot/solenoid valve on hydraulically or 2009-12-18
pneumatically operated valves. Remarks
Valve de-energised to bleed-off
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 1.1 per 106 hrs cD = 0.30 λDU = 0.8 per 106 hrs

λS = 1.9 per 106 hrs cS = 0.10 λSU = 1.7 per 106 hrs

λcrit = 3.0 per 106 hrs PTIF = N/A *

r = 0.4
Assessment
The failure rate estimate is an update of the previous 2006 estimate based on new data from
OREDA phase VI and VII as well as other sources. Note that part of the failures reported under
"control and monitoring" (approx. 50%) are included as part of the valve itself. The distribution
between dangerous failures and safe failures has been kept the same as in the previous edition.

The coverage factor for D failures has been estimated at 30%, based on the registered detection methods
in OREDA phase IV and V-VII. As for ESV/XV valves, this coverage includes some manual
observation by operators.

Based on the above and the new data on solenoids, the rate of DU failures has been slightly
reduced as compared to the 2006 estimate, whereas the rate of safe failures has been kept the
same.

*The PTIF for pilot valve is included as part of the PTIF for the valve itself.

The estimated r is based on reported failure causes in OREDA as well as expert judgements. A
summary of some of the main arguments is provided in section 3.3.

Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λcrit = 3.2  λD = 1.3 per 106 hrs  Recommended values for calculation in 2006-edition [12]
λDU = 0.9 per 106 hrs
λSTU = 1.9 per 106 hrs 1)
1) PTIF for pilot valve included in PTIF for main valve.


λcrit = 3.2  λD = 1.3 per 106 hrs  Recommended values for calculation in 2004-edition [13]
λDU = 0.9 per 106 hrs
λSTU = 1.3 per 106 hrs 1)
1) PTIF for pilot valve included in PTIF for main valve.

λcrit = 4.2  λDU = 1.4 per 106 hrs  Recommended values for calculation in 2003-edition [14]
λSTU = 1.8 per 106 hrs
λD / λST = 0.7 1)
1) PTIF for pilot valve included in PTIF for main valve.

λcrit = 2.8  λD = 1.6 per 106 hrs 1)  OREDA phase V-VII database, [4] and [6]
λS = 1.2 per 106 hrs  Data relevant for pilot valves with control & monitoring in ESD/PSD applications

Observed:  Filter:
cD = 40%  Inv. Equipment class = VALVES AND
cST = N/A  (Inv. Att. Application = ESD/PSD/…. OR
Inv. Att. Application = BLOWDOWN) AND
Inv. OREDA Phase = 5 - 7 AND
(Fail. Item Failed = Pilot valve OR
Fail. Subunit Failed = Control & Monitoring) AND
Fail. Severity Class = Critical

1) The D failure rate includes ‘Fail to close on demand’ failures. When calculating the failure rate for pilot valves it has been assumed that 55% of the valves have two solenoids.

No. of inventories = 175 valves (assumed 272 solenoids)
No. of critical failures = 35
No. of critical D failures = 20
No. of critical S failures = 15
Calendar time (hours) = 7 960 104
4.5 D: 1.7 OREDA phase IV database [6].
ST: 2.8 Data relevant for pilot valves with control &
monitoring in ESD/PSD applications.
Observed:
cD = 67 % Filter:
cST = N/A Inv. Equipment class = VALVES AND
(Inv. Att. Application = ESD/PSD OR
Inv. Att. Application = Shut-off) AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical AND
(Fail. Item Failed = Pilot valve OR
Fail. Subunit Failed = Control & Monitoring)

No. of inventories = 184


No. of critical D failures = 10
No. of critical ST failures = 17
Calendar time (hours) = 6 023 256

λDU = 0.3 per 106 hrs Data from review of safety critical failures on
(λD = 0.5 per 106 hrs) Norwegian onshore plant. Data applicable for
(λS = 1.8 per 106 hrs) solenoid valves on ESVs, BDVs and XVs.

No. of inventories = 438
No. of critical DU failures = 1 1)
Cal. time = 3 836 880 hrs 2)
1) The review focused on the DU failures; 1 failure was classified as DU, i.e. a stuck solenoid causing fail to close of associated ESV. In addition, 1 failure was classified as DD and 7 failures were classified as safe.
2) One year of operation
Funct. fail. to change position: 0.38  T-Book [16]: Solenoid valve, normally activated
Funct. fail. to change position: 0.30  T-Book [16]: Solenoid valve, normally not activated

λDU = 0.6 per 106 hrs 1)  Exida [15]: Generic solenoid valve
λSU = 1.0 per 106 hrs 1)
λDU = 0.6 per 106 hrs 2)  1) 2-way solenoid
λSU = 1.0 per 106 hrs 2)  2) 3-way solenoid
λDU = 0.6 per 106 hrs 3)  3) 4-way solenoid
λSU = 1.0 per 106 hrs 3)  SFF = 72%

5.3.5 Process Control Valve

Module: Final Elements


PDS Reliability Data Dossier
Component: Process Control Valve

Description / equipment boundaries Date of Revision


Process control valves used in combined 2009-12-18
control- and shutdown service. Not Remarks
including pilot valve*. The dangerous failure mode considered is fail to
close failures.
Recommended Values for Calculation
Total rate Coverage

λD = 4.4 per 106 hrs cD = 0.50

λS = 2.5 per 106 hrs cS = 0.50

λcrit = 6.9 per 106 hrs PTIF = 1 · 10-4

r = 0.6
Assessment
The figures for control valves have been updated as compared to the 2006 handbook, [12]. The
failure rate estimates are based on a “weighted” average of the OREDA phase III – V data.
Included in the λD failures are all ‘fail to close’ (FTC) failures, 50% of the ‘delayed operation’
(DOP) failures and 25% of the ‘fail to regulate’ (FTR) failures. Hence, only the failure modes
assumed relevant for shutdown purposes are included. Included in the safe failures (S) are
‘spurious operation’ and ‘fail to open’ failures as well as 25% of the ‘fail to regulate’ failures (i.e.
we assume that only 50% of the FTR failures are critical with respect to spurious operation or
valve closure). Note that no split has been made between small and large control valves (as was
done in [13] and [14]).

Based on the registered observation method for the relevant failure modes, as well as expert
judgement, a coverage of 50% for both dangerous and safe failures has been estimated. It is then
implicitly assumed that the control valve is operated during normal operation, resulting in a relatively high
coverage.

For some cases (e.g. on some onshore plants) selected control valves may be used solely for
shutdown purposes. In this case the valves will be operated infrequently, resulting in a
significantly lower coverage factor. For control valves used only as shutdown valves, it is
suggested that the coverage is reduced to 20%, giving λDU and λSU estimates of 3.5 per 106 hrs and 2.0 per
106 hrs respectively.
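The reduced-coverage figures follow directly from the recommended total rates and the assumed coverage, i.e. λDU = λD · (1 − cD) and λSU = λS · (1 − cS):

λDU ≈ 4.4 · (1 − 0.20) ≈ 3.5 per 106 hrs
λSU ≈ 2.5 · (1 − 0.20) = 2.0 per 106 hrs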

The PTIF and r values are entirely based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.

*Data for control valves are mainly collected for valves in control service and not from
applications where control valves are used for on/off shutdown service. The solenoid valves will
normally not be part of the control function and therefore no solenoid valve failures are registered
under control valves in OREDA. When considering failure rates for control valves used for
shutdown purposes, the failure rate of a solenoid valve should therefore be added.
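As a simple illustration of such an addition, combining the recommended control valve rates above with the pilot/solenoid valve rates in section 5.3.4 gives a total dangerous failure rate of roughly λD ≈ 4.4 + 1.1 = 5.5 per 106 hrs for the combined function (or, for shutdown-only service with reduced coverage, λDU ≈ 3.5 + 0.8 ≈ 4.3 per 106 hrs). Which coverage assumption applies must be judged for the specific application.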

Failure Rate References

Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λcrit = 5.6 λD = 3.8 per 106 hrs Previously recommended values for calculation in
λDU = 2.7 per 106 hrs 2006-edition, [12]
λSTU = 1.3 per 106 hrs
Assumed cD = 30%
PTIF = 10-4

Small valves:  Previously recommended values for calculation in 2003- and 2004-edition, [13] and [14]
λcrit = 7.5  λD = 7.1 per 106 hrs
λDU = 2.8 per 106 hrs  Assumed cD = 60%
λSTU = 0.1 per 106 hrs

Large valves:
λcrit = 2.8  λD = 2.1 per 106 hrs
λDU = 0.8 per 106 hrs
λSTU = 0.2 per 106 hrs

PTIF = 10-5
2.9 FTC 0.41 OREDA phase V database [6]
FTO 0.82 Data relevant for process control valves including
FTR 1.23 pilot valve etc. Note! All sizes are included.
LCP 0.41
Filter:
Inv. Equipment class = VALVES AND
(Inv. System = Gas export OR
Inv. System = Gas processing OR
Inv. System = Oil export OR
Inv. System = Oil processing OR
Inv. System = Condensate processing OR
Inv. System = Gas (re)injection OR
Inv. System = Gas production OR
Inv. System = Gas treatment OR
Inv. System = Oil production) AND
Inv. OREDA Phase = 5 AND
Inv. Att. Application = Process Control

No. of inventories = 54
No. of critical failures = 7
Calendar time (hours) = 2 446 080


16.9 FTC 4.45 OREDA phase IV database [6]


FTO 2.23 Data relevant for process control valves including
FTR 7.64 pilot valve etc. Note! All sizes are included.
EXL 1.59
DOP 0.32 Filter (small valves):
SPO 0.32 Inv. Equipment class = VALVES AND
STD 0.32 (Inv. System = Gas export OR
Inv. System = Gas processing OR
Inv. System = Oil export OR
Inv. System = Oil processing OR
Observed:
Inv. System = Condensate processing OR
cD = 60 % Inv. System = Gas (re)injection OR
cST = N/A Inv. System = Gas production OR
(Only one observation) Inv. System = Gas treatment OR
Inv. System = Oil production) AND
Inv. Phase = 4 AND
Inv. Att. Application = Process Control AND
Fail. Severity Class = Critical AND

No. of inventories = 107


No. of critical failures = 53
Calendar time (hours) = 3 140 856
DOP: 0.72 OREDA phase III database [8]
18.6 EXL: 0.36 Data relevant for process control valves including
FID: 1.79 pilot valve etc. Note! All sizes are included.
FTC 4.29
FTO: 2.15 Filter criteria: APPLIC='PROC CTRL', FUNCTN='OP' .OR.
LCP 1.43 'GP'.
OTH 3.94 No. of inventories = 100
No. of critical failures = 52
PLU 2.50
Cal. time = 2 796 745 hrs
SPO: 1.43
Fail. to change position: 9.1  T-Book [16]: Motor-operated control valve
λDU = 1.2 per 106 hrs Exida [15]: Generic control valve
6
λSU = 0.5 per 10 hrs


5.3.6 Pressure Relief Valve

Module: Final Elements


PDS Reliability Data Dossier
Component: Pressure Relief Valve

Description / equipment boundaries Date of Revision


Complete PSV. Data includes both self 2009-12-18
acting and pilot operated valves. Remarks
The dangerous failure rate relates to a failure to open
within 20% of the set point pressure.
Recommended Values for Calculation
Total rate Coverage

λD = 2.2 per 106 hrs cD = 0

λS = 1.1 per 106 hrs cS = 10%

λcrit = 3.3 per 106 hrs PTIF = 1 · 10-3


1) For fail to open before test pressure failures, a failure rate of λDU = 1.1 per 106 hours is suggested.
r = 0.5
Assessment
The failure data for PSV is an update of the 2006-edition, [12]. The failure rates are based on
OREDA phase IV - VII data, data from RNNP, as well as data from a student investigation into
PSV data, [17].

For OREDA data only failures classified as ‘fail to open’ are considered as D failures. For safe
failures, the critical failure modes ‘spurious operation’, ‘leakage in closed position’ and ‘fail to
close’ have been included. Note that for relief valves, operational time is used in the failure rate
estimates. Based on all OREDA data from phase IV-VII a weighted dangerous failure rate of 1.9
per 106 hours can be estimated. Similarly, a weighted average for safe failure of 1.0 per 106 hours
can be found.

In the RNNP project, data on PSVs are available for the period 2004-2008. A total of 53347 valve
tests have been performed, resulting in 2226 failures. Assuming annual testing, these data give an
estimated λDU of 4.8 · 10-6 per hour. On many installations the PSVs are only tested every second
year. Assuming a test interval of 2 years, a λDU of 2.4 · 10-6 per hour results.
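The sensitivity to the assumed test interval τ can be illustrated with a simple estimator in which each test is taken to represent one test interval of accumulated exposure, λDU ≈ (no. of failures) / (no. of tests · τ):

λDU ≈ 2226 / (53347 · 8760 hrs) ≈ 4.8 · 10-6 per hour (annual testing)
λDU ≈ 2226 / (53347 · 17520 hrs) ≈ 2.4 · 10-6 per hour (testing every second year)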

As seen, the data from RNNP give somewhat higher λDU values than the latest OREDA data.
Since the amount of RNNP data is very extensive, the rate of dangerous failures for PSVs has been
slightly increased as compared to the 2006 estimate. The rate of safe failures has been kept
approximately the same.

The given λDU applies for a fail to open failure within 20% of the set point pressure. If a critical
failure is defined as fail to open at a higher pressure, a reduced failure rate may be applied. For the
failure mode ‘fail to open before test pressure’, a λDU = 1.1 · 10-6 per hour is suggested (i.e. a 50%
reduction as compared to the rate of failures to open within 20% of the set point, ref. [17]).

The PTIF and r values are entirely based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.


Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λD = 2.0 per 106 hrs Previously recommended values for calculation in
λcrit = 3.2 λDU = 2.0 per 106 hrs 2006-edition, [12]
λSTU = 1.0 per 106 hrs
cD = 0%
PTIF = 10-3
λD = 1.0 per 106 hrs Previously recommended values for calculation in
λcrit = 1.2 λDU = 1.0 per 106 hrs 2003- and 2004-edition, [13] and [14]
λSTU = 0.2 per 106 hrs 1)
cD = 0%
PTIF = 10-3 1)
1) Trip of PSV does not necessarily lead to system trip

λcrit = 1.0 λD = 0.7 per 106 hrs OREDA phase V-VII database [4], [6]
λS = 0.3 per 106 hrs Data relevant for self-acting or self-acting/pilot
actuated relief valves.
Observed:
cD = N/A Filter:
cST = N/A Inv. Equipment class = VALVES AND
Inv. Att. Application = Relief AND
Inv. OREDA Phase = 5 - 7

No. of inventories = 130


No. of critical fail to open failures = 2
No. of critical safe failures = 1
Operational time (hours) = 3 032 299
3.7 D: 2.4 OREDA phase IV database [6]
ST: 1.3 Data relevant for self-acting (86%) or pilot
actuated (14%) relief valves.
Observed:
cD = 0 % Filter:
cST = 11 % Inv. Equipment class = VALVES AND
Inv. Phase = 4 AND
Inv. Att. Application = Relief AND
Fail. Severity Class = Critical

No. of inventories = 275


No. of critical fail to open on demand failures = 17
No. of critical valve leakage in closed position = 8
No. of critical spurious operation failures = 1
Operational time (hours) = 7 062 366


5.3.7 Deluge Valve

Module: Final Elements


PDS Reliability Data Dossier
Component: Deluge Valve

Description / equipment boundaries Date of Revision


Deluge valve including actuator, 2009-12-18
solenoid and pilot valve. Assumed Remarks
energised to open.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 3.0 per 106 hrs cD = 0 λDU = 3.0 per 106 hrs

λS = 1.5 per 106 hrs cS = 0 λSU = 1.5 per 106 hrs

λcrit = 4.5 per 106 hrs PTIF = 1 · 10-3

r = 0.6
Assessment
The failure rate applies for deluge valves and is based on data from RNNP as well as OREDA
(limited population with only diaphragm type of valves), taking into account some expert
judgements. The coverage for both D and S failures has been assumed to be zero.

In the RNNP project, test data for deluge valves for the period 2004-2008 are available. A total
of 17284 deluge valve tests have been performed, resulting in 163 failures. With 6- and 12-
monthly testing, these data give an estimated λDU of 2.2 · 10-6 and 1.1 · 10-6 per hour respectively.
The RNNP data are assumed to include both diaphragm and Inbal type deluge valves.

The PTIF and r values for deluge valves are entirely based on expert judgements.
Failure Rate References
Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λcrit = 8.8  λD = 8.8 per 106 hrs 1)  OREDA phase VI, [4]

Observed:  Filter:
cD = 0%  Inv. Equipment class = Valves AND
cST = N/A  Inv. Att. Application = DELUGE AND
Inv. OREDA Phase = 6

No. of inventories = 43
No. of critical fail to open failures = 10
No. of critical safe failures = 0
Operational time (hours) = 1 130 040

1) The limited population only includes diaphragm type deluge valves from one installation. 7 of the dangerous failures were due to improper design.
λDU = 4.7 · 10-6 per hour  OLF 070 (based on PDS-BIP data), [19]

5.3.8 Fire Damper

Module: Final Elements


PDS Reliability Data Dossier
Component: Fire Damper

Description / equipment boundaries Date of Revision


Fire damper including solenoid valve. 2009-12-18
Assumed de-energised to close. Remarks
Available data is limited
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 3.2 per 106 hrs cD = 0 λDU = 3.2 per 106 hrs

λS = 2.3 per 106 hrs cS = 0 λSU = 2.3 per 106 hrs

λcrit = 5.5 per 106 hrs PTIF = 1 · 10-3

r = 0.7
Assessment
The failure rate applies for fire dampers and is based on data from different installations, taking
into account some expert judgements. The coverage for both D and S failures has been assumed
to be zero.

The PTIF and r values for fire dampers are entirely based on expert judgements.

Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λDU = 3.8 per 106 hrs Failure rate used on Norwegian offshore project,
based on input from different sources.
λDU = 3.0 per 106 hrs Data from review of safety critical failures on
λS = 1.0 per 106 hrs Norwegian semi-submersible platform. Data
applicable for fire dampers including solenoid
valve.

No. of inventories = 57
No. of critical DU failures = 3 1)
Cal. time = 998 640 hrs 2)
1) The failure review focused on DU failures, 1 additional failure was classified as safe.
2) Two years of operation

λDU = 2.5 per 106 hrs Data from review of safety critical failures on
λS = 3.1 per 106 hrs Norwegian onshore plant. Data applicable for fire
dampers including solenoid valve.

No. of inventories = 92 fire dampers


No. of critical DU failures = 4 1)
Cal. time = 1 611 840 hrs 2)
1) The review focused on DU failures, but classification of other failure modes was also performed; 1 DD, 5 safe failures and 12 failures classified as not relevant were also registered.
2) Two years of operation

λDU = 7.3 · 10-6 per hour  OLF 070 (based on PDS-BIP data), [19]

5.3.9 Circuit Breaker

Module: Final Elements


PDS Reliability Data Dossier
Component: Circuit Breaker

Description Date of Revision


Circuit Breaker. Assumed de-energised 2009-12-18
to open. Remarks
Includes internal relay / solenoid. Any external relays
etc. must be added.
Recommended Values for Calculation

Total rate Coverage Undetected rate

λD = 0.3 per 106 hrs cD = 0 λDU = 0.3 per 106 hrs

λS = 0.5 per 106 hrs cS = 0 λSU = 0.5 per 106 hrs

λcrit = 0.8 per 106 hrs PTIF = 5 · 10-5

r = 0.6
Assessment
The failure rate applies for large circuit breakers and is based on the listed data sources, taking
into account some expert judgements. The coverage for both D and S failures has been assumed
to be zero.

The PTIF and r values for circuit breaker are entirely based on expert judgements.

Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λDU = 0.2 · 10-6 per hour  T-Book [16]: Circuit Breaker, 6kV – 10kV

λDU = 0.6 per 106 hrs Exida [15]: Generic motor starter
λSU = 0.9 per 106 hrs
SFF = 60%


5.3.10 Relay

Module: Final Elements


PDS Reliability Data Dossier
Component: Relay

Description Date of Revision


Relay. Assumed de-energised to open. 2009-12-18
Remarks

Recommended Values for Calculation

Total rate Coverage Undetected rate

λD = 0.2 per 106 hrs cD = 0 λDU = 0.2 per 106 hrs

λS = 0.3 per 106 hrs cS = 0 λSU = 0.3 per 106 hrs

λcrit = 0.5 per 106 hrs PTIF = 5 · 10-5

r = 0.6
Assessment
The failure rate applies for relays and smaller circuit breakers and is based on the listed data
sources, taking into account some expert judgements. The coverage for both D and S failures has
been assumed to be zero.

The PTIF and r values for relay are entirely based on expert judgements.

Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
λDU = 0.15 · 10-6 per hour  T-Book [16]: Circuit Breaker, < 660V

λDU = 0.6 per 106 hrs Exida [15]: Generic relay


λSU = 0.9 per 106 hrs
SFF = 60%

5.3.11 Downhole Safety Valve – DHSV

Module: Final Elements


PDS Reliability Data Dossier
Component: Downhole Safety Valve – DHSV

Description Date of Revision


Downhole Safety Valve incl. actuation 2009-12-18
device Remarks
Full stroke with tight shut off.
Recommended Values for Calculation
Total rate Coverage Undetected rate

λD = 3.2 per 106 hrs cD = 0 λDU = 3.2 per 106 hrs

λS = 2.4 per 106 hrs cS = 0 λSU = 2.4 per 106 hrs

λcrit = 5.6 per 106 hrs PTIF = 1 · 10-4

r = 0.5
Assessment
The updated failure rates for the DHSV are based on two main sources:
• internal SINTEF data which gives an estimated λD of 2.0 per 106 hrs, and
• updated test data from RNNP for the period 2003-2008. Here, 25926 valve tests have
been performed, resulting in 764 failures. Assuming an average test interval of 6 months,
this gives an estimated λDU of 6.7 per 106 hrs. If tested annually, the corresponding λDU
becomes 3.4 per 106 hrs.

Furthermore, the same distribution between dangerous and safe failures as for topside ESV/XV
valves is assumed. Zero coverage has been assumed both for S and D failures. PTIF and r are based
on expert judgements.

Failure Rate References


Overall failure rate (per 106 hrs)   Failure mode distribution   Data source/comment
2.0 Fail to close Internal SINTEF data

3.4 – 6.7 Fail to close or too high Data from RNNP, [9]
internal leakage rate

λDU = 3.6 per 106 hrs Data from review of safety critical failures on
Norwegian semi-submersible platform.

No. of inventories = 16
No. of critical DU failures = 1 1)
Cal. time = 280 320 hrs 2)
1) Focus on DU failures. Reporting on other failure types questionable.
2) Two years of operation


5.4 Subsea Equipment


As part of the PDS-BIP work in 2004, a recommended data set for subsea equipment was
established and provided as input to the updated OLF 070 guideline, [19], as well as to the PDS
2006 data handbook, [12]. The data were mainly based on the OREDA 2002 handbook, [5], and
input from and discussions with experts.

In the present version of the PDS data handbook, additional subsea data from the new OREDA
2009 Handbook have been utilised, thus providing a much better basis for the suggested values. It
should however be noted that for some equipment groups the population is still limited.

For the subsea equipment, focus has been on dangerous failures, and only values for the coverage
factor for dangerous failures, cD, are specified. Hence, the rate of undetected spurious trip failures,
λSU, is not given. Values for the safe failure fraction (SFF) have been indicated. It should be noted
that the SFF figures are mainly based on the reported failure mode distributions in OREDA
subsea (as well as some additional expert judgements) and will therefore rely on the quality of the
failure reporting in OREDA. Higher (or lower) SFFs than given in the tables may therefore apply
for specific equipment types, and this should in such cases be documented separately.
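As an illustration of how the tabulated values fit together, consider the master control station (MCS) entry in Table 11 below: with λcrit = 9.4 per 106 hrs, an assumed 70% / 30% split between safe and dangerous failures and a coverage of 60% for dangerous failures, one obtains

λD ≈ 0.30 · 9.4 ≈ 2.8 per 106 hrs
λDU ≈ 2.8 · (1 − 0.60) ≈ 1.1 per 106 hrs
SFF ≈ (λcrit − λDU) / λcrit = (9.4 − 1.1) / 9.4 ≈ 88%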

Furthermore, specific β and PTIF values for subsea components are not given. As a starting point,
estimates for topside equipment can be used for these unspecified parameters. They should,
however, be assessed on a case by case basis depending on their specific (subsea) application.

Table 11 briefly summarizes the discussion underlying the proposed data for subsea equipment.
For more detailed descriptions of the equipment configurations, the equipment boundaries and
the data, reference is made to the new OREDA 2009 subsea handbook, [3]. For comments or
feedback concerning the OREDA subsea data, contact the OREDA project manager or one of the
participating companies, ref. http://www.oreda.com.

Table 11 Discussion of proposed subsea data


Component λcrit 1) λD 1) cD λDU 1) SFF Reference/comments
(%) (%)

ESD/PSD logic including 15.6 8.0 90 0.8 95 Ref. section 5.2.2. Data for programmable
analogue input and digital topside safety system (single system) referred.
output
Located topside*
* Topside located ESD/PSD node which may communicate with the subsea equipment via the master control station (MCS).

MCS - Master control 9.4 2.8 60 1.1 88 OREDA Subsea Handbook 2009, [3], Tax. No.
station 5.1. Master control station (25 off, 11 crit.
failures).
Located topside*
Based on reported critical failures in OREDA a
distribution between safe and dangerous
failures of 70% / 30% has been assumed.
Further, a coverage of 60% for dangerous
failures has been assumed.

* This is the topside unit communicating with the SEM. In addition, there will normally be a topside located ESD/PSD node which performs the safety actions via the master control station.


SEM – subsea electronic 9.9 4.0 70 1.2 84 OREDA Subsea Handbook 2009, [3], Tax. No.
module (located in subsea 5.1. Subsea electronic module (461 off, 138
control module, SCM) crit. failures).

Assuming a distribution between safe and dangerous failures of 60% / 40% and coverage for dangerous failures of 70%.

Note that the number of reported critical failures is uncertain. Also note that there may be redundant SEMs in the same subsea control module (SCM).
Solenoid control valves 0.40* 0.16 0 0.16 60 OREDA Subsea Handbook 2009, [3], Tax. No.
(located in subsea control 5.1. Solenoid control valve (4718 off, 48 crit.
module, SCM) failures)

Based on reported OREDA failures and some additional expert judgements, a distribution between safe and dangerous failures of 60% / 40% has been assumed. Further, zero coverage for subsea valves has generally been assumed.

*The failure rate includes different types of solenoid control valves. The OREDA data handbook does not differentiate between e.g. mono-stable (continuously energised) and bi-stable (energised to shift) solenoids.
Pressure sensor 0.62 0.37 60 0.15 76 OREDA Subsea Handbook 2009, [3], Tax. No.
5.1. Pressure sensor (1890 off, 34 crit.
failures).

Assuming 40% / 60% distribution between safe and dangerous failures and 60% coverage for dangerous failures.
Temperature sensor 0.30 0.18 60 0.07 76 OREDA Subsea Handbook 2009, [3], Tax. No.
5.1. Temperature sensor (272 off, 2 crit.
failures)

Assuming 40% / 60% distribution between safe and dangerous failures and 60% coverage for dangerous failures.

Note that due to the relatively low number of components in the population and the higher failure rate of a combined pressure/temperature sensor, the total critical failure rate has been slightly increased (one additional critical failure has been assumed).
Combined pressure and 2.5 * 1.25 60 0.5 80 OREDA Subsea Handbook 2009, [3], Tax. No.
temperature sensor 5.1. Combined pressure and temperature sensor
(303 off, 16 crit. failure)

Based on the reported failure modes in OREDA a 50% / 50% distribution between safe and dangerous failures has been assumed. Further, coverage of 60% for dangerous failures has been assumed.

*The failure rate represents a failure of either the pressure or the temperature measurement.
Flow sensor 2.0* 1.4 60 0.56 72 OREDA Subsea Handbook 2009, [3], Tax. No.
5.1, Flow sensor (336 off, 11 crit. failures).


The reported OREDA failures indicate a distribution between safe and dangerous failures of some 30% / 70%. Further, coverage for dangerous failures of 60% has been assumed.

*The OREDA data handbook does not differentiate between different technologies such as e.g. a simple Venturi device (with DP cell) and a more advanced multiphase flow meter.
Umbilical 0.31 0.22 80 0.04 87 OREDA Subsea Handbook 2009, [3], Tax. No.
hydraulic/chemical line 5.1. Static umbilical, hydraulic/chemical line
(per line) (803 off, 9 crit. failures)

λDU has been based on critical failures of hydraulic/chemical lines in static umbilicals. Assuming 70% dangerous failures and a coverage of 80%, since the majority of failures should be detectable immediately.

Umbilical power/signal 0.51 0.36 80 0.07 86 OREDA Subsea Handbook 2009, [3], Tax. No.
line (per line) 5.1. Static umbilical, power/signal line (407
off, 8 crit. failures)

λDU has been based on critical failures of power/signal lines in static umbilicals. Assuming 70% dangerous failures and a coverage of 80%, since the majority of failures should be detectable immediately.
Process isolation valves 1.32 0.40 0 0.40 70 OREDA Subsea Handbook 2009, [3], Tax. No.
(located on subsea 5.3. Process isolation valves (1111 off, 62
manifold) critical failures)

The OREDA subsea figures indicate a distribution between safe and dangerous failures of approximately 70% / 30%. Zero coverage generally assumed for subsea valves.

Subsea isolation valve, 0.52* 0.21 0 0.21 60 OREDA Subsea Handbook 2009, [3], Tax. No.
SSIV (part of subsea 5.4. Valve subsea isolation (149 off, 0 crit.
isolation system, SSIS) failures)

Assuming zero coverage and a 60% / 40% distribution between safe and dangerous failures.

* The critical failure rate estimate has been obtained by including the 2 critical failures reported for process isolation valves also included as part of the subsea isolation system.

Production master valve, 0.26 0.18 0 0.18 30 OREDA Subsea Handbook 2009, [3], Tax. No.
(PMV) 5.8. Subsea X-mas tree; Valve process
Production wing valve, isolation (2267 off, 19 crit. failures).
(PWV)

The reported critical failure modes for X-mas tree process isolation valves indicate a distribution between safe and dangerous failures of approximately 30% / 70%. Zero coverage generally assumed for subsea valves.

Chemical injection valve, 0.37* 0.22 0 0.22 40 OREDA Subsea Handbook 2009, [3], Tax. No.
(CIV) 5.8. Subsea X-mas tree; Valve utility isolation
(928 off, 4 crit. failures)

When interpreting the reported critical failure modes for X-mas tree utility isolation valves conservatively, a distribution between safe and dangerous failures of some 40% / 60% can be assumed.

*For this particular case the multi-sample estimator from OREDA has (conservatively) been applied due to possible lack of reporting of critical failures.
Downhole safety valve, 5.6 3.2 0 3.2 42 Ref. section 5.3.11
(DHSV)
1) All failure rates given per 106 hours

6 REFERENCES

[1] IEC 61508 Standard. “Functional safety of electrical/electronic/programmable electronic (E/E/PE) safety related systems”, part 1-7, Edition 1.0 (various dates).
[2] IEC 61511 Standard. “Functional safety - safety instrumented systems for the process
industry sector”, part 1-3. 2003
[3] OREDA participants, OREDA; Offshore Reliability Data Handbook, Volume 1 - topside
data and Volume 2 – subsea data. 5th edition, 2009.
[4] OREDA participants, OREDA phases VI and VII, Computerised database on topside
equipment, (data collected during the period 2000-2003).
[5] OREDA participants, OREDA Handbook; Offshore Reliability Data Handbook, 4th edition,
2002.
[6] OREDA participants, OREDA phases IV and V, Computerised database on topside
equipment, (data collected during the period 1993-2000).
[7] OREDA Participants, OREDA Handbook; Offshore Reliability Data Handbook, 3rd edition,
1997.
[8] OREDA participants, OREDA phase III, Computerised database on topside equipment,
(data collected during the period 1990-1992).
[9] Norwegian Petroleum Safety Authorities, Risikonivået i Norsk Petroleumsindustri (RNNP).
Reported safety barrier data from 2003 - 2008.
[10] Hauge, S., Lundteigen, M.A., Hokstad, P., and Håbrekke, S., Reliability Prediction Method
for Safety Instrumented Systems – PDS Method Handbook, 2010 Edition. SINTEF report
A13503
[11] Hauge, S., Hokstad, P., Langseth, H. and Øien K., Reliability Prediction Method for Safety
Instrumented Systems – PDS Method Handbook, 2006 Edition. SINTEF report STF50
A06031
[12] Hauge, S., Langseth, H. and Onshus T., Reliability Data for Safety Instrumented Systems –
PDS Data Handbook, 2006 Edition. SINTEF report STF50 A06030
[13] Hauge, S. and Hokstad, P., Reliability Data for Safety Instrumented Systems, PDS Data
Handbook, 2004 Edition. SINTEF report STF38 A04423
[14] Albrechtsen, E. and Hokstad, P., Reliability Data for Safety Instrumented Systems, PDS
Data Handbook, 2003 Edition. SINTEF report STF38 A02421.
[15] EXIDA, Safety Equipment Reliability Handbook, 3rd edition, Volume 1 – 3, exida.com,
2007
[16] T-Book, Version 5, Reliability Data of Components in Nordic Nuclear Power Plants. TUD-
office and Pörn Consulting, 2000.
[17] Lunde, M., Ytelsesvurdering av sikkerhetsventiler (evaluation of pressure safety valve performance), NTNU, November 2004.
[18] Grammeltvedt, J.A., Oseberg C – Gjennomgang av erfaringsdata for brann- og
gassdetektorer på Oseberg C. Forslag til testintervaller for detektorene. Report from Norsk
Hydro, Forskningssenteret Porsgrunn (in Norwegian), 1994.


[19] OLF Guideline 070: “Application of IEC 61508 and IEC 61511 in the Norwegian Petroleum
Industry”. The Norwegian Oil Industry Association, rev. 02, 2004.
[20] Angela Summers (2008). IEC Product Approval – Veering Off Course. Article posted
11.06.08 in www.controlglobal.com
[21] Centre for Chemical Process Safety (CCPS): Guidelines for safe and reliable instrumented
protective systems, Wiley, 2007
[22] Béla G. Lipták (Editor): Instrument Engineers Handbook – Process Control and
Optimisation, fourth edition, Taylor & Francis, 2006
[23] Guidelines for follow-up of Safety Instrumented Systems (SIS) in the operating phase.
SINTEF report A8788, Rev. 01, 01.12.2008 (Web: http://www.sintef.no/project/PDS/Reports/PDS%20Report-SIS follow up guideline final v01.pdf)
[24] Hauge, S., Lundteigen M.A and Rausand M., Updating failure rates and test intervals in the
operational phase: A practical implementation of IEC 61508 and IEC 61511, ESREL
September 2009, Prague, Czech Republic
