Reliability Data for Safety Instrumented Systems
PDS Data Handbook, 2010 Edition

SINTEF Technology and Society, Safety Research
Address: NO-7465 Trondheim, NORWAY
Location: S P Andersens veg 5, NO-7031 Trondheim
Telephone: +47 73 59 27 56
Fax: +47 73 59 28 96
PREFACE
The present report is an update of the 2006 edition of the Reliability Data for Control and Safety
Systems, PDS Data Handbook [12]. The handbook presents data in line with the latest available
data sources as well as data for some new equipment.
The work has been carried out as part of the research project “Managing the integrity of safety
instrumented systems”. 1
Stein Hauge
Oil Companies/Operators
• A/S Norske Shell
• BP Norge AS
• ConocoPhillips Norge
• Eni Norge AS
• Norsk Hydro ASA
• StatoilHydro ASA (Statoil ASA from Nov. 1st 2009)
• Talisman Energy Norge
• Teekay Petrojarl ASA
• TOTAL E&P NORGE AS
Governmental Bodies
• The Directorate for Civil Protection and Emergency Planning (Observer)
• The Norwegian Maritime Directorate (Observer)
• The Petroleum Safety Authority Norway (Observer)
1 This user-initiated research project has been sponsored by the Norwegian Research Council and the PDS forum participants. The project work has been carried out by SINTEF.
ABSTRACT
This report provides reliability data estimates for components of control and safety systems. Data
dossiers for input devices (sensors, detectors, etc.), control logic (electronics) and final elements
(valves, etc.) are presented, including some data for subsea equipment. Efforts have been made to
document the presented data thoroughly, both in terms of applied data sources and underlying
assumptions. The data are given on a format suitable for performing reliability analyses in line
with the requirements in the IEC 61508 and IEC 61511 standards.
As compared to the former 2006 edition, the following main changes are included:
• A general review and update of the failure rates, coverage values, β-values and other
relevant parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.
Table of Contents
PREFACE ........................................................................................................................... 3
ABSTRACT .................................................................................................................................... 4
1 INTRODUCTION ................................................................................................................... 9
1.1 Objective and Scope ......................................................................................................... 9
1.2 Benefits of Reliability Analysis – the PDS Method ......................................................... 9
1.3 The IEC 61508 and 61511 Standards ............................................................................. 10
1.4 Organisation of Data Handbook ..................................................................................... 10
1.5 Abbreviations ................................................................................................................. 10
2 RELIABILITY CONCEPTS ................................................................................................. 13
2.1 The Concept of Failure ................................................................................................... 13
2.2 Failure Rate and Failure Probability............................................................................... 13
2.2.1 Failure Rate Notation ...................................................................................... 13
2.2.2 Decomposition of Failure Rate........................................................................ 14
2.3 Reliability Measures and Notation ................................................................................. 15
2.4 Reliability Parameters .................................................................................................... 16
2.4.1 Rate of Dangerous Undetected Failures .......................................................... 16
2.4.2 The Coverage Factor, c ................................................................................... 17
2.4.3 Beta-factors and CMooN .................................................................................... 17
2.4.4 Safe Failure Fraction, SFF............................................................................... 18
2.5 Main Data Sources ......................................................................................................... 18
2.6 Using the Data in This Handbook .................................................................................. 19
3 RELIABILITY DATA SUMMARY ..................................................................................... 21
3.1 Topside Equipment ......................................................................................................... 21
3.2 Subsea Equipment .......................................................................................................... 27
3.3 Comments to the PDS Data ............................................................................................ 28
3.3.1 Probability of Test Independent Failures (PTIF) .............................................. 28
3.3.2 Coverage .......................................................................................................... 29
3.3.3 Fraction of Random Hardware Failures (r) ..................................................... 30
3.4 Reliability Data Uncertainties – Upper 70% Values ...................................................... 32
3.4.1 Data Uncertainties ........................................................................................... 32
3.4.2 Upper 70% Values........................................................................................... 33
3.5 What is “Sufficient Operational Experience”? – Proven in Use .................... 34
4 MAIN FEATURES OF THE PDS METHOD ...................................................................... 37
4.1 Main Characteristics of PDS .......................................................................................... 37
4.2 Failure Causes and Failure Modes ................................................................................. 37
4.3 Reliability Performance Measures ................................................................................. 39
4.3.1 Contributions to Loss of Safety ....................................................................... 40
4.3.2 Loss of Safety due to DU Failures - Probability of Failure on Demand (PFD) ..... 40
4.3.3 Loss of Safety due to Test Independent Failures (PTIF)................................... 40
4.3.4 Loss of Safety due to Downtime Unavailability – DTU ................................. 41
4.3.5 Overall Measure for Loss of Safety – Critical Safety Unavailability .............. 41
5 DATA DOSSIERS ................................................................................................................. 43
5.1 Input Devices .................................................................................................................. 44
5.1.1 Pressure Switch ............................................................................................... 44
5.1.2 Proximity Switch (Inductive) .......................................................................... 46
5.1.3 Pressure Transmitter ........................................................................................ 47
5.1.4 Level (Displacement) Transmitter................................................................... 49
5.1.5 Temperature Transmitter ................................................................................. 51
5.1.6 Flow Transmitter ............................................................................................. 53
5.1.7 Catalytic Gas Detector..................................................................................... 55
5.1.8 IR Point Gas Detector...................................................................................... 57
5.1.9 IR Line Gas Detector ....................................................................................... 59
5.1.10 Smoke Detector ............................................................................................... 61
5.1.11 Heat Detector ................................................................................................... 63
5.1.12 Flame Detector ................................................................................................ 65
5.1.13 H2S Detector .................................................................................................... 68
5.1.14 ESD Push Button ............................................................................................. 70
5.2 Control Logic Units ........................................................................................................ 72
5.2.1 Standard Industrial PLC .................................................................................. 73
5.2.2 Programmable Safety System ......................................................................... 79
5.2.3 Hardwired Safety System ................................................................................ 85
5.3 Final Elements ................................................................................................................ 88
5.3.1 ESV/XV........................................................................................................... 88
5.3.2 ESV, X-mas Tree ............................................................................................ 92
5.3.3 Blowdown Valve ............................................................................................. 95
5.3.4 Pilot/Solenoid Valve........................................................................................ 97
5.3.5 Process Control Valve ................................................................................... 100
5.3.6 Pressure Relief Valve .................................................................................... 103
5.3.7 Deluge Valve ................................................................................................. 105
5.3.8 Fire Damper ................................................................................................... 106
5.3.9 Circuit Breaker .............................................................................................. 108
5.3.10 Relay.............................................................................................................. 109
5.3.11 Downhole Safety Valve – DHSV.................................................................. 110
5.4 Subsea Equipment ........................................................................................................ 111
6 REFERENCES..................................................................................................................... 116
1 INTRODUCTION
Safety standards such as IEC 61508 [1] and IEC 61511 [2] require quantification of the failure probability of safety systems. Such quantification may be part of design optimisation or of verifying that the design meets stated performance requirements.
The use of relevant failure data is an essential part of any quantitative reliability analysis. It is also
one of the most challenging parts and raises a number of questions concerning the availability and
relevance of the data, the assumptions underlying the data and what uncertainties are related to the
data.
In this handbook recommended data for reliability quantification of Safety Instrumented Systems
(SIS) are presented. Efforts have been made to document the presented data thoroughly, both in
terms of applied data sources and underlying assumptions.
Various data sources have been applied when preparing this handbook, the most important source
being the OREDA database and handbooks (ref. section 2.5).
As compared to the former 2006 edition, [12], the following main changes are included:
• A general review and update of the failure rates, coverage values, β-values and other relevant parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.
Reliability analysis represents a systematic tool for evaluating the performance of safety
instrumented systems (SIS) from a safety and production availability point of view. Some main
applications of reliability analysis are:
• Reliability assessment and follow-up; verifying that the system fulfils its safety and
reliability requirements;
• Design optimisation; balancing the design to get an optimal solution with respect to safety,
production availability and lifecycle cost;
• Operation planning; establishing the optimal testing and maintenance strategy;
• Modification support; verifying that planned modifications are in line with the safety and
reliability requirements.
The PDS method has been developed to enable reliability engineers as well as non-experts to perform such reliability assessments in various phases of a project. The main features of the PDS method are discussed in chapter 4.
The PDS method is in line with the main principles advocated in the IEC standards, and is a
useful tool when implementing and verifying quantitative (SIL) requirements as described in the
IEC standards.
The recommended reliability data estimates are summarised in chapter 3 of this report. A split has
been made between input devices, logic solvers and final elements.
Chapter 4 gives a brief summary of the main characteristics of the PDS method. The failure
classification for safety instrumented systems is presented together with the main reliability
performance measures used in PDS.
Chapter 5 gives the detailed data dossiers providing the basis for the recommended reliability data. As for previous editions of the handbook, some data are scarce in the available data sources, and it has been necessary to rely, partly or fully, on expert judgement.
1.5 Abbreviations
CCF - Common cause failure
CSU - Critical safety unavailability
DTU - Downtime unavailability
FMECA - Failure modes, effects, and criticality analysis
FMEDA - Failure modes, effects, and diagnostic analysis
IEC - International Electrotechnical Commission
JIP - Joint industry project
MTTR - Mean time to restoration
NDE - Normally de-energised
NE - Normally energised
OLF - The Norwegian oil industry association
OREDA - Offshore reliability data
AI - Analogue input
BDV - Blowdown valve
CPU - Central Processing Unit
DO - Digital output
ESV - Emergency shutdown valve
DHSV - Downhole safety valve
XV - Production shutdown valve
2 RELIABILITY CONCEPTS
In this chapter some selected concepts related to reliability analysis and reliability data are
discussed. For a more detailed discussion reference is made to the updated PDS method
handbook, ref. [10].
A safety system component has two main functions: the ability to shut down, or bring the process to a safe state, when required, and the ability to maintain production when it is safe to do so. From a safety point of view, the first category is the more critical, and such failures are defined as dangerous failures (D); i.e., they have the potential to result in loss of the ability to shut down or go to a safe state when required.
Loss of the ability to maintain production is normally less critical to safety, and such failures have therefore traditionally been denoted spurious trip (ST) failures in PDS, whereas IEC 61508 categorises such failures as ‘safe’ (S). In the forthcoming update of the IEC 61508 standard, the definition of safe failures is more in line with the PDS interpretation. PDS has therefore also adopted the notation ‘S’ (instead of ‘ST’) in this updated version.
It should be noted that a given failure may be classified as either dangerous or safe depending on the intended application. For example, loss of hydraulic supply to a valve actuator operating on demand will be dangerous in an energise-to-trip application and safe in a de-energise-to-trip application.
Hence, when applying the failure data, the assumptions underlying the data as well as the context
in which the data shall be used must be carefully considered.
λcrit = Rate of critical failures, i.e., failures that may cause loss of one of the two main
functions of the component/system (see above).
Critical failures include dangerous (D) failures, which may cause loss of the ability
to shut down production when required, and safe (S) failures, which may cause loss
of the ability to maintain production when safe (i.e. spurious trip failures). Hence:
λcrit = λD + λS
λDU = Rate of dangerous undetected failures, i.e. failures undetected both by automatic
self-test and by personnel
λDD = Rate of dangerous detected failures, i.e. failures detected by automatic self-test or
personnel
λS = Rate of safe (spurious trip) failures, including both undetected as well as detected
failures. λS = λSU + λSD (see below)
λSU = Rate of safe (spurious trip) undetected failures, i.e. undetected both by automatic
self-test and personnel
λSD = Rate of safe (spurious trip) detected failures, i.e. detected by automatic self-test or
personnel
λundet = Rate of (critical) failures that are undetected both by automatic self-test and by
personnel (i.e., detected in functional testing only). λundet = λDU + λSU
λdet = Rate of (critical) failures that are detected by automatic self-test or personnel
(independent of functional testing). λdet = λDD + λSD
CMooN = Modification factor for voting configurations other than 1oo2 in the beta-factor
model (e.g. 1oo3, 2oo3 and 2oo4 voting logics)
Some important relationships between different fractions of the critical failure rate are illustrated
in Table 1 and Figure 1.
[Figure 1: Decomposition of the critical failure rate λcrit. λcrit splits into λundet, the dangerous (λDU) and safe (λSU) failures undetected by automatic self-test or personnel, and λdet, the detected fractions λDD and λSD. The fractions λDD, λSD and λSU all contribute to the SFF (Safe Failure Fraction); only λDU does not.]
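As a minimal sketch, the decomposition in Figure 1 can be reproduced numerically; the rates below are hypothetical, chosen only to illustrate the relationships λcrit = λD + λS, λundet = λDU + λSU and λdet = λDD + λSD:

```python
# Hypothetical failure rates, used only to illustrate the decomposition.
lam_DU, lam_DD = 0.5, 0.5   # dangerous: undetected / detected
lam_SU, lam_SD = 0.2, 0.1   # safe (spurious trip): undetected / detected

lam_D = lam_DU + lam_DD        # total dangerous failure rate
lam_S = lam_SU + lam_SD        # total safe failure rate
lam_crit = lam_D + lam_S       # critical failure rate
lam_undet = lam_DU + lam_SU    # undetected by self-test or personnel
lam_det = lam_DD + lam_SD      # detected by self-test or personnel

# All fractions except lam_DU contribute to the Safe Failure Fraction:
sff = (lam_DD + lam_S) / (lam_D + lam_S)

assert abs(lam_crit - (lam_undet + lam_det)) < 1e-9
print(f"lam_crit = {lam_crit:.2f}, SFF = {sff:.0%}")
```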
Term Description
PFD Probability of failure on demand. This is the measure for loss of safety caused by
dangerous undetected failures, see section 4.3.
PTIF Probability of a test independent failure. This is the measure for loss of safety
caused by a failure not detectable by functional testing, but occurring upon a true
demand (see section 4.3).
CSU Critical safety unavailability, CSU = PFD + PTIF
MTTR Mean time to restoration. Time from failure is detected/revealed until function is
restored, ("restoration period"). Note that this restoration period may depend on a
number of factors. It can be different for detected and undetected failures: The
undetected failures are revealed and handled by functional testing and could have
shorter MTTR than the detected failures. The MTTR could also depend on
configuration, operational philosophy and failure multiplicity.
STR Spurious trip rate. Rate of spurious trips of the safety system (or set of redundant
components), taking into consideration the voting configuration.
τ Interval of functional test (time between functional tests of a component)
As discussed in section 2.2.2, the critical failure rate, λcrit, is split into dangerous and safe failures (i.e. λcrit = λD + λS), which are further split into detected and undetected failures. When performing safety unavailability calculations, the rate of dangerous undetected failures, λDU, is of special importance, since this parameter, together with the test interval, to a large degree governs the prediction of how often a safety function is likely to fail on demand.
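To make the roles of λDU and the test interval concrete: for a single periodically tested component, a widely used approximation (consistent with the loss-of-safety measures in section 4.3) is PFD ≈ λDU · τ/2. The sketch below uses hypothetical numbers, not handbook values:

```python
# Hypothetical single-component example: dangerous undetected failures are
# revealed only by functional testing at interval tau.
lam_DU = 0.5e-6    # dangerous undetected failure rate per hour (hypothetical)
tau = 8760.0       # functional test interval: one year, in hours

pfd = lam_DU * tau / 2.0    # average probability of failure on demand
p_tif = 1.0e-4              # test independent failure probability (hypothetical)
csu = pfd + p_tif           # critical safety unavailability: CSU = PFD + P_TIF

print(f"PFD = {pfd:.2e}, CSU = {csu:.2e}")
```

Halving the test interval halves the predicted PFD contribution, which is why λDU and τ together govern the prediction.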
Equipment-specific failure data reports prepared by manufacturers (or others) often provide λDU estimates that are an order of magnitude (or even more) lower than those reported in generic data handbooks. Such exaggerated claims of performance may have several causes, including imprecise definition of equipment and analysis boundaries, incorrect failure classification, or too optimistic predictions of the diagnostic coverage factor (see e.g. [20]).
When studying the background data for the generic failure rates (λDU) presented in data sources such as OREDA and RNNP, it is found that these data include both random hardware failures and systematic failures. Examples of the latter include incorrect parameter settings for a
pressure transmitter, an erroneous output from the control logic due to a failure during software
modification, or a PSV which fails due to excessive internal erosion or corrosion. These are all
failures that are detectable during functional testing and therefore illustrate the fact that systematic
failures may well be part of the λDU for generic data.
Since failure rates provided by manufacturers frequently tend to exclude all types of failures related to installation, commissioning or operation of the equipment (i.e. systematic failures), a mismatch between manufacturer data and generic data appears. The question then becomes: since systematic failures inevitably will occur, why not include them in predictive reliability analyses?
To make explicit that the failure rate comprises random hardware failures as well as systematic failures, the parameter r has been defined as the fraction of dangerous undetected failures originating from random hardware failures. Rough estimates of the r factor are
given in the detailed data sheets in chapter 5. For a more thorough discussion and arguments
concerning the r factor, reference is made to [10].
Modules often have built-in automatic self-test, i.e. on-line diagnostic testing to detect failures
prior to an actual demand 2. The fraction of failures being detected by the automatic self-test is
called the fault coverage and quantifies the effect of the self-test. Note that the actual effect on
system performance from a failure that is detected by the automatic self-test will depend on
system configuration and operating philosophy. In particular it should be considered whether the
detected failure is configured to only raise an alarm or alternatively bring the system to a safe
state. It is often seen that failures classified as dangerous detected only raise an alarm, and in such cases it must be ensured that the failure initiates an immediate response in the form of a repair and/or the introduction of risk-reducing measures.
In addition to the diagnostic self-test, an operator or maintenance crew may detect dangerous
failures incidentally in between tests. For instance, the panel operator may detect a transmitter that
is “stuck” or a sensor that has been left in by-pass. Similarly, when a process segment is isolated
for maintenance, the operator may detect that one of the valves will not close. The PDS method
also aims at incorporating this effect, and defines the total coverage factor, c, reflecting detection both by automatic self-test and by the operator. Further, the coverage factor for dangerous failures is denoted cD, whereas the coverage factor for safe failures is denoted cS.
Critical failures that are not detected by automatic self-testing or by observation are assumed
either to be detectable by functional (proof) testing 3 or they are so called test independent failures
(TIF) that are not detected during a functional test but appear upon a true demand (see section 2.3
and chapter 4 for further description).
It should be noted that the term “detected safe failure” (of rate λSD) is interpreted as a failure which is detected such that a spurious trip is actually avoided. Hence, a spurious closure of a valve which is detected by, e.g., flow metering downstream of the valve cannot be categorised as a detected safe failure. On the other hand, drifting of a pressure transmitter which is detected by the operator, such that a shutdown is avoided, will typically be a detected safe failure.
When quantifying the reliability of systems employing redundancy, e.g., duplicated or triplicated
systems, it is essential to distinguish between independent and dependent failures. Random
hardware failures due to natural stressors are assumed to be independent failures. However, all
systematic failures, e.g. failures due to excessive stresses, design related failures and maintenance
errors are by nature dependent (common cause) failures. Dependent failures can lead to
simultaneous failure of more than one (redundant) component in the safety system, and thus
reduce the advantage of redundancy.
Traditionally, the dependent or common cause failures have been accounted for by the β-factor
approach. The problem with this approach has been that for any M-out-of-N (MooN) voting
(M<N) the rate of dependent failures is the same, and thus the approach does not distinguish
between e.g. a 1oo2 and a 2oo3 voting. The PDS method extends the β-factor model, and
distinguishes between the voting logics by introducing β-factors which depend on the voting
configuration; i.e. β(MooN) = β · CMooN. Here, CMooN is a modification factor depending on the
voting configuration, MooN.
2 Also refer to IEC 61508-4, sections 3.8.6 and 3.8.7.
3 See also IEC 61508-4, section 3.8.5.
Standard (average) values for the β-factor are given in Table 7. Note that when performing
reliability calculations, application specific β-factors should preferably be obtained, e.g. by using
the checklists provided in IEC 61508-6, or by using the simplified method as described in
Appendix D of the PDS method handbook, [10].
Values for CMooN are given in Table 8. For a more complete description of the extended β-factor
approach of PDS, see [10].
The Safe Failure Fraction as described in IEC 61508 is given by the ratio between dangerous
detected failures plus safe failures and the total rate of failure; i.e. SFF = (λDD + λS) /(λD + λS).
The objective of including this measure (and the associated hardware fault tolerance; HFT) was to
prevent manufacturers from claiming excessive SILs based solely on PFD calculations. However,
experience has shown that failure modes that actually do not influence the main functions of the
SIS (ref. section 2.1) are frequently included in the safe failure rate so as to artificially increase
the SFF, [20].
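Note that this IEC definition is algebraically equivalent to the form SFF = (λcrit − λDU)/λcrit used with the summary tables in chapter 3, since λcrit = λD + λS and λD = λDD + λDU. A quick numeric check with hypothetical rates:

```python
# Hypothetical rates, used only to check that the two SFF expressions agree.
lam_DD, lam_DU = 0.6, 0.4   # dangerous detected / undetected
lam_S = 0.5                 # safe (spurious trip) failures

lam_D = lam_DD + lam_DU
lam_crit = lam_D + lam_S

sff_iec = (lam_DD + lam_S) / (lam_D + lam_S)   # IEC 61508 definition
sff_alt = (lam_crit - lam_DU) / lam_crit       # form used with the data tables

assert abs(sff_iec - sff_alt) < 1e-12
print(f"SFF = {sff_iec:.0%}")
```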
It is therefore important to point out that when estimating the SFF, only failures with a potential to
actually cause a spurious trip of the component should be included among the safe failures. Non-
critical failures, such as a minor external leakage of hydraulic oil from a valve actuator, should not
be included.
The SFF figures presented in this handbook are based on reported failure mode distributions in OREDA as well as some additional expert judgement. Higher (or lower) SFFs than given in the tables may apply for specific equipment types; in such cases this should be well documented, e.g. by FMEDA-type analyses.
The recommended data is based on a number of assumptions concerning safe state, fail-safe design, self-test ability, loop monitoring, NE/NDE design, etc. These assumptions are, for each piece of equipment, described in the detailed data sheets in chapter 5. Hence, when using the data for reliability calculations, it is important to consider the relevance of these assumptions for each specific application.
Observe that λD (third column of Tables 3 to 5), together with λcrit = λD + λS, gives λS. The rates of undetected failures, λDU and λSU, follow from the given coverage values cD and cS; i.e. λDU = λD · (1 − cD/100%) and λSU = λS · (1 − cS/100%). The safe failure fraction, SFF, can be calculated as SFF = ((λcrit − λDU) / λcrit) · 100%.
Data dossiers with comprehensive information for each component are given in chapter 5 as
referred to in tables 3 to 5.
Input Devices (excerpt from Table 3; columns as defined above: λcrit, λD, cD, cS, λDU, λSU, SFF)
H2S detector: λcrit = 1.3, λD = 1.0, cD = 50%, cS = 30%, λDU = 0.5, λSU = 0.2, SFF = 62% (Sect. 5.1.13)
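The relations above can be checked against the H2S detector row: with λcrit = 1.3, λD = 1.0, cD = 50% and cS = 30% (column interpretation as defined by the formulas in this section), the tabulated λDU, λSU and SFF follow:

```python
# H2S detector values from the summary table (units as tabulated).
lam_crit, lam_D = 1.3, 1.0
c_D, c_S = 50.0, 30.0   # coverage factors in percent

lam_S = lam_crit - lam_D                       # safe failure rate
lam_DU = lam_D * (1.0 - c_D / 100.0)           # dangerous undetected rate
lam_SU = lam_S * (1.0 - c_S / 100.0)           # safe undetected rate
sff = (lam_crit - lam_DU) / lam_crit * 100.0   # safe failure fraction in %

print(f"lam_DU = {lam_DU:.1f}, lam_SU = {lam_SU:.1f}, SFF = {sff:.0f}%")
```

This reproduces the tabulated values λDU = 0.5, λSU = 0.2 and SFF = 62%.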
Table 4 Failure rates, coverages and SFF for control logic units
The following additional assumptions and notes apply for the above data on control logic units:
• A single system with analogue input, CPU/logic and digital output configuration is
generally assumed;
• For the input and output part, figures are given for one channel plus the common part of
the input/output card (except for hardwired safety system where figures for one channel
only are given);
• Single processing unit / logic part is assumed throughout;
• If the figures for input and output are to be used for redundant configurations, separate
input cards and output cards must be used since the given figures assume a common part
on each card;
• If separate Ex barriers or other interface devices are used, figures for these must be added
separately;
• The systems are generally assumed used in de-energised to trip functions, i.e. loss of
power or signal will result in a safe state.
Final elements
Table 6 below gives suggested values for the PTIF, i.e. the probability of a test independent failure
occurring upon a demand.
Table 6 PTIF for various components (columns: component group, component, PTIF, comments; see section 3.3.1)
Tables 7 and 8 give suggested values for the β-factor and the configuration factor CMooN, respectively. Note that the CMooN factors have been updated compared to previous values, ref. [12].
Regarding the suggested β-factors, it should be pointed out that these are typical values. Application-specific factors may be reflected in the estimates by, e.g., applying the checklists in IEC 61508 or the simplified method described in Appendix D of [10]. Some beta values have been slightly increased compared to the figures in the 2006 edition, [12]. This is based on results from operational reviews, where it was observed that a fairly large proportion of the SIS failures actually involved more than one component.
(Columns: component group, component, β, comment/source)
Relay: β = 0.03
Table 8 Numerical values for configuration factors, CMooN
M = 1: C1oo2 = 1.0, C1oo3 = 0.5, C1oo4 = 0.3, C1oo5 = 0.21, C1oo6 = 0.17
Note that the CMooN factors have been updated as compared to the previous 2006 handbook, [12].
It should be pointed out that the CMooN factors are suggested values and not exact figures. C1oo5
and C1oo6 have been given to two decimal places in order to be able to distinguish the two
configurations. The reasoning behind the CMooN factors is further discussed in [10].
Similarly, the values given for the safe failure fraction (SFF) should be considered as indicative
only. Higher (or lower) SFFs may apply for specific equipment types and this should in such case
be documented separately.
Subsea equipment
Solenoid control valve (in subsea control module): 0.40 | 0.16 | 0 % | 0.16 | 60 %
Production master valve (PMV), Production wing valve (PWV): 0.26 | 0.18 | 0 % | 0.18 | 30 %
Chemical injection valve (CIV): 0.37 | 0.22 | 0 % | 0.22 | 40 %
3.3 Comments to the PDS Data
The data presented in Table 3 – Table 9 are mainly based on operational experience (OREDA,
RNNP, etc.) and as such reflect some kind of average expected field performance. It is stressed
that these generic data should not be used uncritically – if valid application-specific data are
available, these should be preferred. When comparing the data in this handbook with figures
found in manufacturer certificates and reports, major gaps will often be found. As discussed in
section 2.4.1 such data sources often exclude failures caused by inappropriate maintenance, usage
mistakes and design related systematic failures. Care should therefore be taken when data from
certificates and similar reports are used for predicting reliability performance in the field.
For some equipment types and some of the parameters, the listed data sources provide limited
information and additional expert judgement must therefore be applied. In particular for the PTIF,
the coverage c and the r factor, the data sources are scarce, and some arguments are therefore
required concerning the recommended values.
General
No testing is 100% perfect and some dangerous undetected failures may therefore be present also
after a functional test. The suggested PTIF values attempt to quantify the likelihood of such failures
to be present after a test. Obviously, such values will depend heavily on the given application, and
specific measures may have been introduced to minimise the likelihood of test independent
failures. Hence, there may be grounds for reducing (or increasing) the given values. This is further
discussed in appendix D of the method handbook, [10].
Process Switch
The proposed PTIF of 10⁻³ applies to a pressure switch operating in clean medium, and the main
contribution is assumed to be failures during human intervention (e.g. by-pass, wrong set point,
etc.). If the switch is operating in unclean medium and clogging of the sensing line is a possibility,
the PTIF may be increased to 5·10⁻³.
Proximity Switch
For proximity switches a relatively high PTIF of 10⁻³ has been suggested. The main contributors to
this TIF are assumed to be failures during installation and maintenance, in particular mounting
and misalignment problems related to the interacting parts.
Process Transmitters
Transmitters have a “live signal”. Thus, blocking of the sensing line may be detected by the
operator and is included in the λdet. Also a significant part of the failures of the transmitter itself
(all "stuck" failures) may be detected by the operator and therefore contribute to λdet. Thus, the
PTIF for transmitters is expected to be lower than that of the switch, and a value of 5·10⁻⁴ has been
suggested. In previous editions of the handbook, smart and fieldbus transmitters have, due to
more complete self-testing, been given an even smaller PTIF. However, since smart transmitters
also contain additional software, other test independent failures may be introduced. Consequently,
one common PTIF is given.
Gas Detectors
In previous versions of the PDS data handbook the given PTIF values for gas detectors
differentiated with respect to detector type, the size of the leakage, ventilation, and other
conditions expected to influence the PTIF probability for detectors. The PTIF values were then
given as intervals depending on the state of the conditions listed. It is now assumed that the
detector is already exposed and the present values generally represent lower end values of the
previously given intervals, as this represented the “best conditions”. Note that catalytic gas
detectors and H2S detectors have been given a somewhat lower PTIF than the IR gas detectors. The
catalytic gas detectors have a simpler design which is assumed to result in a lower probability of
test independent systematic failures.
Fire Detectors
PTIF values are given based on the assumptions that (1) a detector with the "appropriate" detection
principle is applied (e.g. that smoke detectors are applied where smoke fires are expected and
flame detectors where flame fires are expected), and (2) the detector is already exposed to the
flame/heat or smoke (depending on detector type). A PTIF value of 10⁻³ has been suggested for all
fire detectors.
Valves
The PTIF for ESV/XVs will depend on the quality of the functional testing performed. Here, a
standard functional test has been assumed where the valve is fully closed but not tested for
internal leakage. In such case a PTIF value of 10⁻⁴ is suggested. For control valves used for
shutdown purposes and blowdown valves a PTIF of 10⁻⁴ is also suggested. All these values include
PTIF for the pilot valve. For PSVs a relatively high PTIF value of 10⁻³ has been suggested due to the
possibility of human failures related to incorrect setting/adjustment of the PSV.
3.3.2 Coverage
General
As compared to the ’03, ‘04 and ’06 editions of the PDS Reliability data handbook, some of the
coverage factors have been updated. The reasoning behind this is partly discussed below. The
discussion is mainly limited to dangerous failures.
For process transmitters a coverage of 60 % for dangerous failures has been assumed. This is based
on the implemented self-test in the transmitter as well as casual observation by the control room operator.
The latter assumes that the transmitter signal can be observed on the VDU and compared with
other signals so that e.g. stuck or drifting signal can be revealed. If a higher coverage is claimed,
e.g. due to automatic comparison between transmitters, this should be especially documented.
coverage. Catalytic gas detectors normally have limited built-in self-test. The same applies for
smoke and heat detectors. Hence, these detector types will have a lower coverage.
For a standard industrial PLC (single system) the coverage factor for dangerous failures, cD, has
been set lower than for a SIL certified programmable safety system. For safe failures, the
coverage factor is low, since it is assumed that upon detection of such a failure (e.g. loss of signal)
the single safety system should normally go to a safe state (i.e. a shutdown). It should be noted
that if the safety system is redundant, the rate of undetected safe (i.e. spurious trip) failures may
be reduced significantly by the use of voting.
The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or a detected failure will result in a trip action (SU).
Hence, this implies that the coverage for both dangerous and safe failures has been assumed to be
zero. Note that this applies for single systems and as a consequence hardwired safety systems are
often voted 2oo2 in order to avoid spurious trip failures.
Valves
No automatic self-test for valves is assumed. For ESV/XV valves the coverage for dangerous
failures has been slightly increased to 30% due to information from OREDA phase V-VII where
it appears that a high fraction of dangerous failures (more than 50%) are detected in between tests
by operator observation or other methods. It should be noted that this is not automatic diagnostic
coverage (as e.g. defined in IEC-61508) but will however imply that dangerous faults are detected
in between testing. For valves that are operated infrequently, the coverage will be lower, and the
cD for blowdown valves has therefore been set to 20%. Based on information from OREDA the
coverage for safe failures has been set to 10% for ESV/XV and 0% for blowdown valves.
For control valves used also for shutdown purposes, a relatively high coverage of 50% has been
estimated based on the registered observation methods for the relevant failure modes in OREDA.
It is then implicitly assumed that the control valve is frequently operated resulting in a relatively
high coverage.
Occasionally, e.g. on some onshore plants, selected control valves may be used solely for
shutdown purposes (i.e. not normally operated). In this case the valves will be operated
infrequently, resulting in a significantly lower coverage factor. For control valves used only as
shutdown valves, the coverage is therefore suggested reduced to 20%.
For PSV valves and deluge valves no coverage has generally been assumed.
General
Based on input from discussions with experts as well as a study of available OREDA data,
estimates of r have been established. As discussed previously, r is the fraction of dangerous
undetected (DU) failures that can be “explained” by random hardware failures (hence 1-r is the
fraction of DU failures that can be explained by systematic failures). Below, a brief discussion of
the r values suggested in the detailed data sheets is given.
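To make the roles of these parameters concrete, the sketch below shows how the coverage cD (section 3.3.2) first splits a dangerous failure rate into detected and undetected parts, after which r splits the undetected part into random hardware and systematic contributions. All numeric values are illustrative only.

```python
# Illustrative sketch (values are not handbook data): the coverage factor
# c_D splits the dangerous failure rate into detected/undetected parts,
# and the r factor then splits the undetected part into random hardware
# and systematic contributions.

def split_dangerous(lambda_d: float, c_d: float, r: float):
    """Return (lambda_DD, lambda_DU_random_hw, lambda_DU_systematic)."""
    lambda_dd = c_d * lambda_d            # detected by diagnostics/observation
    lambda_du = (1.0 - c_d) * lambda_d    # dangerous undetected
    return lambda_dd, r * lambda_du, (1.0 - r) * lambda_du

# Example resembling an ESV/XV: lambda_D = 3.0e-6 per hour, c_D = 0.30,
# r = 0.50 (the r proposed for ESV/XV valves; lambda_D is illustrative)
ldd, ldu_hw, ldu_sys = split_dangerous(3.0e-6, 0.30, 0.50)
```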
Process Switch
For process switches the reported failure causes from OREDA are scarce and the r has been
estimated by expert judgement to be approximately 50%.
Process Transmitters
Data from OREDA on critical transmitter failures, results from operational reviews as well as
discussions with experts, all indicate that a significant proportion of the critical dangerous failures
for transmitters are caused by factors such as “excessive vibration”, “erroneous maintenance”
(e.g. ‘wrong calibration’, ‘erroneous specification of measurement area’ and ‘left in inhibit’) and
“incorrect installation”. As seen, all these are examples of systematic failures which according to
OREDA are detectable (either by casual observation or during functional testing/maintenance).
Based on the observed failure cause distribution, an r = 30% has therefore been proposed.
Detectors
When going through data from OREDA phase V and VI for fire and gas detectors, it is found that
for some 40% of the critical failures the failure cause is reported as being due to ‘expected wear
and tear’, whereas some 60% of the critical failures are due to ‘maintenance errors’. When going in
more detail into the failure mechanisms, it is seen that the failures are described by e.g. ‘out of
adjustment’ (30%), ‘general instrument failure’ (28%), ‘contamination’ (21%), ‘vibration’ (10%)
and ‘maintenance/external/others’ (11%). Even though ‘contamination’ (i.e. typically a dirty lens) and
instrument failure partly can be explained by expected wear and tear, it is seen that many of the
critical failures are systematic ones. Based on this an r = 40% has been proposed.
Control Logic
For control logic no updated OREDA data is available on failure causes and the proposed r values
are therefore entirely based on expert judgements. It has been assumed that for a standard
industrial PLC the major part of the failures can be explained by (systematic) software related
errors. Hence, a small r of 10% has been proposed. On the other hand, for a hardwired safety
system, it is assumed that a large part of the failure rate is due to random hardware failures, and a
large r of 80% has been suggested.
Valves
The reported failure causes in OREDA for critical failures are somewhat scarce and therefore
additional expert judgement has to be applied. When considering what types of valve failures
are typically revealed upon functional testing, these include stuck valve, insufficient actuator
force, valve not shutting off tight due to excessive erosion or corrosion (unclean medium),
incorrect installation, etc. Several of these failures represent (detectable) systematic failures;
hence it is evident that the r is significantly lower than 1 (which would correspond to only random hardware failures).
For ESV/XV and X-mas tree valves an r = 50% has been proposed, mainly based on expert
judgement and reported failure causes for other types of valves. For pilot valves, there are more
reported failure causes and these indicate a relatively high proportion of systematic failures. Here
an r equal to 40% has been suggested based on the reported OREDA data.
For control valves used for shutdown purposes, a somewhat higher proportion of ‘wear and tear’
failures are expected, and therefore an r equal to 60% has been proposed. Reported failures causes
for deluge valves also indicate a relatively high proportion of ‘wear and tear’ related failures and an
r equal to 60% has been proposed also for deluge valves.
For PSV valves, limited data on failure causes is available from OREDA, and an r = 50% has
been suggested.
3.4 Reliability Data Uncertainties – Upper 70% Values
The failure rates given in this handbook are best (mean) estimates based on the available data
sources listed in section 2.5. The data in these sources have mainly been collected on oil and gas
installations where environment, operating conditions and equipment types are comparable, but
not at all identical. The presented data are therefore associated with uncertainties due to factors
such as:
• The data collection itself; inadequate failure reporting, classification or data interpretation.
• Variations between installations; the failure rates are highly dependent upon the operating
conditions and also the equipment make will vary between installations.
• Relevance of data / equipment boundaries; what components are included / not included in
the reported data? Have equipment parts been repaired or simply replaced, etc.?
• Assumed statistical model; is the standard assumption of a constant failure rate always
relevant for the equipment type under consideration?
• Aggregated operational experience; what is the total amount of operational experience
underlying the given estimates?
The last bullet concerning amount of operational experience is related to the possibility of
establishing a confidence interval for the failure rate. Instead of only specifying a single mean
value, an interval likely to include the parameter is given. How likely the interval is to contain the
parameter is determined by the confidence level. E.g. a 90% confidence interval for λDU may be
given by: [0.1·10⁻⁶ per hour, 5·10⁻⁶ per hour]. This means that we are 90% confident that the
failure rate will lie within this interval. It is also possible to specify one-sided confidence intervals
where the lower bound of the interval is zero. E.g. a one-sided 70% interval for λDU may be given
by: [0, 4·10⁻⁶ per hour], implying that we can be 70% certain that the failure rate is lower than
4·10⁻⁶ per hour.
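The one-sided upper confidence limit described above can be computed by inverting the Poisson distribution for the observed number of failures. The stdlib-only sketch below uses bisection; this is one of several equivalent approaches (a chi-square quantile χ²(conf, 2n+2)/(2T) gives the same result), and the inputs are illustrative.

```python
# Stdlib-only sketch: upper one-sided confidence limit for a constant
# failure rate, given n failures observed over T aggregated hours. The
# upper limit at confidence level c solves P(N <= n | rate*T) = 1 - c for
# a Poisson-distributed failure count N; found here by bisection.

import math

def poisson_cdf(n: int, mu: float) -> float:
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_conf_rate(n_failures: int, t_hours: float, conf: float = 0.70) -> float:
    lo, hi = 0.0, (n_failures + 50.0) / t_hours   # hi safely brackets the root
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if poisson_cdf(n_failures, mid * t_hours) > 1.0 - conf:
            lo = mid      # rate too low: probability of seeing <= n still too high
        else:
            hi = mid
    return (lo + hi) / 2.0

# Example: with 0 failures over 1e6 hours, the upper 70% limit equals
# -ln(0.30)/1e6, i.e. about 1.2e-6 per hour.
lam_u70 = upper_conf_rate(0, 1.0e6)
```

As expected, observing more failures over the same aggregated time pushes the upper confidence limit upwards, while more aggregated hours pull it down.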
In particular, in IEC 61508-2, section 7.4.7.4, it is stated that any failure rate data based on
operational experience should have a confidence level of at least 70% (a similar requirement is
found in IEC 61511-1, section 11.9.2). Hence, IEC 61508 and IEC 61511 indicate that when using
historic data one should be conservative and the recommended approach is to choose the upper
70% confidence value for the failure rate as illustrated on Figure 2 below.
[Figure 2: Distribution of the failure rate estimate, indicating the mean λ and the conservative (upper 70%) λ]
Some data sources, such as OREDA, provide confidence intervals for the failure rate estimates,
whereas most sources, including this handbook, provide mean values only. However, in the next
section an attempt has been made to indicate failure rate values with a confidence level of at least
70% as required in the IEC 61508/61511 standards.
When looking in more detail at the data dossiers in chapter 5, it is seen that there is a varying
amount of operational experience underlying the failure rate estimates. Hence, there will also be a
varying degree of confidence associated with the given data. Based on the aggregated operational
time, number of dangerous failures and some additional expert judgement, an attempt has been
made, whenever possible, to establish a one-sided 70% confidence interval and thereby provide
some upper 70% values for the dangerous undetected failure rate. The result of this exercise is
summarised in the below table (done only for the topside data where the most detailed
information has been available).
Component group | Component | λDU (mean) ¹⁾ | λDU (70%) ¹⁾ | Comments
Final Elements (cont.) | Control valve (ex. pilot), frequently operated | 2.2·10⁻⁶ | 3.5·10⁻⁶ |
Final Elements (cont.) | Control valve (ex. pilot), shutdown service only | 3.5·10⁻⁶ | 5.5·10⁻⁶ |
Final Elements (cont.) | Pressure relief valve, PSV | 2.2·10⁻⁶ | 3.2·10⁻⁶ |
• Establishing confidence intervals based on data from different sources and different
installations is not a straightforward task. The suggested upper 70% values should
therefore be taken as rough estimates only.
• As discussed in section 2.4.1, the generic data presented in this handbook include failure
mechanisms that are frequently excluded from e.g. manufacturer failure reports and
certificates. As such, the mean failure rates given in Table 3-5 are considered
representative when predicting the expected risk reduction from the equipment. Using the
upper 70% confidence values presented above should therefore be considered as a way of
increasing the robustness of the results e.g. when performing sensitivity analyses.
• In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in
the operating phase”, [23], a procedure for updating failure rates in operation is described.
For this purpose a conservative estimate for the λDU is required. Unless other equipment
specific values are available, the above upper 70% values can then be applied.
In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in the
operating phase”, [23], it has been discussed how much operational experience is required before
a reasonable confidence in a new failure rate estimate can be established. For SIS component data
(for detectors, sensors and valves) from OREDA, it can be found that the upper 95% confidence
limit for the rate of DU-failures is typically some 2–3 times the mean value of the failure rate,
[23], [24]. A suggested “cut-off” criterion for claiming proven in use can then be that the gathered
operational experience shall be sufficient to establish a failure rate estimate with comparable
confidence, i.e. the upper 95% confidence for λDU shall be within 2–3 times the mean value.
Based on this criterion and further work from [23] and [24], some suggested rules for claiming
“proven in use” for a given piece of field equipment are:
• Minimum aggregated time in service should be 2.5 million (2.5·10⁶) operational hours, or
at least 2 dangerous undetected failures ⁴ should have been registered for the considered
observation period;
• Operational data should be available from at least 2 installations with comparable
operational environments;
• The data should be collected from the useful period of life of the equipment (typically this
implies that the first 6 months of operation should be considered excluded);
• A systematic data collection and reporting system should be implemented to ensure that all
failures have been formally recorded;
• It should be ensured that all equipment units included in the sample have been activated
(i.e. tested or demanded) at least once during the observation period (in order to ensure
that components that have never been activated are not counted in).
Additional requirements are given in the IEC standards. It should be noted that whereas IEC
61508 uses the term ‘proven in use’, IEC 61511 applies the term ‘prior use’. However, neither
IEC 61508 nor IEC 61511 quantify the required amount of operating experience, but states that
for field equipment there may be extensive operating experience that can be used as a basis for the
evidence [for prior use, ref. IEC 61511-1, section 11.5.3].
It may be argued that the above requirement concerning aggregated time in service is difficult to
fulfil for equipment other than e.g. fire and gas detectors. However, an important part of claiming
proven in use is to have a clear understanding of failure mechanisms, how the failure is detected
and repaired and what maintenance activities are required in order to keep the equipment in an “as
good as new condition”, [21]. For this purpose considerable operational experience is necessary
and focus should therefore be on improved data collection and failure registration. Furthermore, it
will require that the manufacturers obtain feedback on operational performance from the
operators, also beyond the warranty period of the equipment.
⁴ In general, an increasing number of failures will result in a narrower confidence interval, i.e. a higher
confidence in the estimated mean value. Hence, experienced DU failures may “compensate” for limited
operational experience (but will anyhow require significant operational time if a low failure rate is to be
claimed).
The method gives an integrated approach to random hardware and systematic failures. Thus,
the model accounts for relevant failure causes such as:
- normal ageing
- software failures
- stress induced failures
- design failures
- installation failures
- operational related failures
The model includes all relevant failure types that may occur, and explicitly accounts for
dependent (common cause) failures and the effect from different types of testing (auto-
matic/self-test as well as manual observation).
The model distinguishes between the ways a system can fail (failure mode), such as fail-to-
operate, spurious operation and non-critical failures.
A main benefit of the PDS taxonomy is the direct relationship between failure causes and the
measures used to improve safety system performance.
The method is simple and structured:
- highlighting the important factors contributing to loss of safety and spurious operation
- promoting transparency and communication
As stressed in IEC 61508, it is important to incorporate the complete safety function when
performing reliability analyses. This is a core issue in PDS; it is function-oriented, and the
whole path from the sensors, via the control logic to the actuators is taken into consideration
when modelling the system.
The PDS method has a somewhat different approach to systematic failures compared to IEC
61508. Whereas IEC 61508 only quantifies part of the total failure rate, represented by the
random hardware failures, PDS also attempts to quantify the contribution from systematic
failures (see Figure 3 below) and therefore gives a more complete picture of how the
equipment is likely to operate in the field.
[Figure 3: Failure classification; failures are split into random hardware failures and systematic failures]
Random hardware failures are failures resulting from the natural degradation mechanisms of the
component. For these failures it is assumed that the operating conditions are within the design
envelope of the system.
Systematic failures are failures that can be related to a particular cause other than natural
degradation and foreseen stressors. Systematic failures are due to errors made during
specification, design, operation and maintenance phases of the lifecycle. Such failures can
therefore normally be eliminated by a modification, either of the design or manufacturing process,
the testing and operating procedures, the training of personnel or changes to documentation.
There are several possible schemes for classifying systematic failures. Here, a further split into
five categories has been suggested:
• Software faults may be due to programming errors, compilation errors, inadequate testing,
unforeseen application conditions, change of system parameters, etc. Such faults are present
from the point where the incorrect code is developed until the fault is detected either through
testing or through improper operation of the safety function. Software faults can also be
introduced during modification to existing process facilities, e.g. inadequate update of the
application software to reflect the revised shutdown sequences or erroneous setting of a high
alarm outside its operational limits.
• Design related failures are failures (other than software faults) introduced during the design
phase of the equipment. It may be a failure arising from incorrect, incomplete or ambiguous
system or software specification, a failure in the manufacturing process and/or in the quality
assurance of the component. Examples are a valve failing to close due to insufficient actuator
force or a sensor failing to discriminate between true and false demands.
• Installation failures are failures introduced during the last phases prior to operation, i.e. during
installation or commissioning. If detected, such failures are typically removed during the first
months of operation and such failures are therefore often excluded from data bases. These
failures may however remain inherent in the system for a long period and can materialise
during an actual demand. Examples are erroneous location of e.g. fire/gas detectors, a valve
installed in the wrong direction or a sensor that has been erroneously calibrated during
commissioning.
• Excessive stress failures occur when stresses beyond the design specification are placed upon
the component. The excessive stresses may be caused either by external causes or by internal
influences from the medium. Examples may be damage to process sensors as a result of
excessive vibration or valve failure caused by unforeseen sand production.
• Operational failures are initiated by human errors during operation or maintenance/testing.
Examples are loops left in the override position after completion of maintenance or a process
sensor isolation valve left in closed position so that the instrument does not sense the medium.
• Dangerous (D). Safety system/module does not operate on demand (e.g. sensor stuck upon
demand)
• Safe (S). Safety system/module may operate without demand (e.g. sensor provides signal
without demand – potential spurious trip)
• Non-Critical (NONC). Main functions not affected (e.g. sensor imperfection, which has no
direct effect on control path)
The first two of these failure modes, dangerous (D) and safe (S) are considered "critical" in the
sense that they have a potential to affect the operation of the safety function. The safe failures
have a potential to cause a trip of the safety function, while the dangerous failures may cause the
safety function not to operate upon a demand. The failure modes above are further split into the
following categories:
Note that for high demand mode systems IEC 61508 uses PFH (Probability of Failure per Hour)
as the measure for loss of safety. PFH is not discussed here but is treated separately in the updated
method handbook, [10].
4.3.1 Contributions to Loss of Safety
The potential contributors to loss of safety (safety unavailability) can be split into the following
categories:
1) Unavailability due to dangerous undetected (DU) failures. For a single component, these
failures occur with rate λDU. The average period of unavailability due to such a failure is τ/2
(where τ = period of functional testing), since the failure can have occurred anywhere inside
the test interval.
2) Unavailability due to failures not revealed during functional testing. This unavailability is
caused by “unknown” ("dormant"), dangerous and undetected failures which can only be
detected during a true demand. These failures are denoted Test Independent Failures (TIF), as
they are not detected during functional testing.
Below, we discuss separately the loss of safety measures for the three failure categories, and
finally an overall measure for loss of safety is given.
The PFD quantifies the loss of safety due to dangerous undetected failures (with rate λDU), during
the period when it is unknown that the function is unavailable. The average duration of this period
is τ/2, where τ = test period. For a single (1oo1) component the PFD can be approximated by:

PFD ≈ λDU · τ/2
For a MooN voting logic (M<N), the main contribution to PFD (accounting for common cause
failures) is given by:

PFDMooN ≈ CMooN · β · λDU · τ/2
Here, CMooN is a modification factor depending on the voting configuration, ref. Table 8. Further,
for a NooN voting, we approximately have:

PFDNooN ≈ N · λDU · τ/2
In reliability analysis it is often assumed that functional testing is “perfect” and as such detects
100% of the failures. In true life this is not necessarily the case; the test conditions may differ
from the real demand conditions, and some dangerous failures can therefore remain in the SIS
after the functional test. In PDS this is catered for by adding the probability of so called test
independent failures (TIF) to the PFD.
PTIF = The Probability that the component/system will fail to carry out its intended
function due to a (latent) failure not detectable by functional testing (therefore the
name “test independent failure”)
It should be noted that if an imperfect testing principle is adopted for the functional testing, this
will lead to an increase of the TIF probability. For instance, if a gas detector is tested by
introducing a dedicated test gas to the housing via a special port, the test will not reveal a
blockage of the main ports. Another example is the use of partial stroke testing for valves. This
type of testing is likely to increase the PTIF for the valve, since the valve is not fully proof tested
during such a test.
Hence, for a single component, PTIF expresses the likelihood of a component having just been
functionally tested, to fail on demand (irrespective of the interval of manual testing). For
redundant components, the TIF contribution to loss of safety will for a MooN voting be given by
the general formula: CMooN · β · PTIF, where the numerical values of CMooN are assumed identical
to those used for calculating PFD, ref. Table 8.
This represents the downtime part of the safety unavailability as described in category 3 above.
The DTU (Downtime Unavailability) quantifies the loss of safety due to:
• repair of dangerous failures, resulting in a period when it is known that the function is
unavailable due to repair. We refer to this unavailability as DTUR;
• planned downtime (or inhibition time) resulting from activities such as testing, maintenance
and inspection. We refer to this unavailability as DTUT.
Depending on the specific application, operational philosophy and the configuration of the
process plant and the SIS, it must be considered whether it is relevant to include (part of) the DTU
in the overall measure for loss of safety. For further discussions on how to quantify the DTUR and
DTUT contributions, reference is made to [10].
The total loss of safety is quantified by the critical safety unavailability (CSU). The CSU is the
probability that the module/safety system (either due to a random hardware or a systematic
failure) will fail to automatically carry out a successful safety action on the occurrence of a
hazardous/accidental event. Thus, we have the relation:

CSU = PFD + PTIF
If we want to include also the “known” downtime unavailability, the formula becomes:

CSU = PFD + PTIF + DTU
The contributions from PTIF and λDU to the Critical Safety Unavailability (CSU) are illustrated in
Figure 4. Failures contributing to the PTIF are systematic test independent failures. These failures
will repeat themselves unless modification/redesign is initiated. The contribution to the CSU from
such systematic failures has been assumed constant, independent of the frequency of functional
testing. Dangerous undetected (DU) failures are assumed to be eliminated at each functional
test, and their contribution to the CSU thereafter increases throughout the test interval.
[Figure 4: Critical safety unavailability (CSU) versus time. The contribution from λDU increases as a sawtooth over successive functional test intervals (τ, 2τ, 3τ, …), on top of the constant PTIF contribution.]
As seen from the figure, the CSU varies over time. The CSU is at its maximum right
before a functional test and at its minimum right after a test. However, when we calculate the
CSU and the PFD we actually calculate the average value, as illustrated in the figure.
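The sawtooth behaviour can be sketched numerically; the minimal illustration below uses an assumed λDU, test interval τ and PTIF (not handbook data) together with the common single-component approximations CSU(t) ≈ λDU·t + PTIF and average CSU ≈ λDU·τ/2 + PTIF:

```python
# Sketch: CSU over a functional test interval for a single component.
# Single-component approximations consistent with the text:
#   PFD(t) ~= lambda_DU * t     (rises from zero after each test)
#   CSU(t)  = PFD(t) + P_TIF    (P_TIF is constant, test independent)
#   average CSU = lambda_DU * tau / 2 + P_TIF
# All numeric values below are illustrative assumptions, not handbook data.

lam_du = 2.0e-6   # dangerous undetected failure rate per hour (assumed)
tau    = 8760.0   # functional test interval in hours (one year, assumed)
p_tif  = 1.0e-4   # test independent failure probability (assumed)

def csu(t_since_test: float) -> float:
    """Instantaneous CSU at time t after the last functional test."""
    return lam_du * t_since_test + p_tif

csu_avg = lam_du * tau / 2 + p_tif   # average over the test interval

print(f"CSU just after a test:  {csu(0.0):.2e}")
print(f"CSU just before a test: {csu(tau):.2e}")
print(f"Average CSU:            {csu_avg:.2e}")
```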
Reliability Data for Safety Instrumented Systems
PDS Data Handbook, 2010 Edition
5 DATA DOSSIERS
The following pages present the data dossiers of the control and safety system components. The
dossiers are input to the tables in chapter 3 that summarise the generic input data to PDS analyses.
Note that the generic data by nature represent a wide variation of equipment populations, and
should therefore be assessed on individual grounds when the data are used for a specific application.
The data dossiers are based on the data dossiers in previous editions of the handbook, [12], [13],
[14], and have been updated according to the work done in the PDS-BIP and the new data
available.
Adopting the definitions used in OREDA, several severity class types are referred to in the data
dossiers. The definitions of the various types are as follows [3]:
• Critical failure: A failure which causes immediate and complete loss of a system's capability
of providing its output.
• Degraded failure: A failure which is not critical, but which prevents the system from providing its
output within specifications. Such a failure would usually, but not necessarily, be gradual or
partial, and may develop into a critical failure in time.
• Incipient failure: A failure which does not immediately cause loss of the system's capability of
providing its output, but if not attended to, could result in a critical or degraded failure in the
near future.
• Unknown: Failure severity was not recorded or could not be deduced.
Note that only the critical failures are included as a basis for the failure rate estimates (i.e. the
λcrit). From the description of the failure mode, the critical failures are further split into dangerous
and safe failures (i.e. λcrit = λD + λS). For example, for shutdown valves a “fail to close on demand”
failure will be classified as dangerous, whereas a “spurious operation” failure will be classified as a
safe (spurious trip) failure.
The following failure modes are referred to in the data dossier tables:
5.1 Input Devices
5.1.1 Pressure Switch
λD = 2.3 per 10⁶ hrs cD = 0.15 λDU = 2.0 per 10⁶ hrs
λS = 1.1 per 10⁶ hrs cS = 0.10 λSU = 1.0 per 10⁶ hrs
r = 0.5
Assessment
The given failure rate applies to pressure switches. The failure rate estimate is mainly based on
OREDA phase III data, older OREDA data and comparison with other generic data sources
(OREDA phase IV contains no data on process switches, whereas phase V contains only 6
switches). The estimated coverage is based on expert judgement; we assume 5 % coverage due
to line monitoring of the connections and an additional 10 % detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10 %, since
there is a small probability that such failures are detected before the shutdown actually occurs.
The PTIF and the r estimates are mainly based on expert judgements. A summary of some of the
main arguments is provided in section 3.3.
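The undetected rates in the dossier follow from the total rates and the coverage factors via λDU = λD·(1 − cD) and λSU = λS·(1 − cS); a minimal sketch reproducing the pressure switch values above:

```python
# Sketch: undetected failure rates from total rates and coverage factors,
# per lambda_DU = lambda_D * (1 - c_D) and lambda_SU = lambda_S * (1 - c_S),
# the relations used throughout the data dossiers.

def undetected_rate(lam_total: float, coverage: float) -> float:
    """Rate of failures not revealed by automatic self-test or monitoring."""
    return lam_total * (1.0 - coverage)

# Pressure switch dossier values (rates per 1e6 hours):
lam_d, c_d = 2.3, 0.15   # dangerous failure rate and coverage
lam_s, c_s = 1.1, 0.10   # safe failure rate and coverage

print(undetected_rate(lam_d, c_d))  # ~1.96, tabulated (rounded) as 2.0
print(undetected_rate(lam_s, c_s))  # ~0.99, tabulated (rounded) as 1.0
```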
Observed: cD = N/A, cST = N/A
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Switch AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 5
No. of inventories = 6
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 295 632
λcrit = 1.4 (D: 1.39, ST: 0.0) OREDA phase III database, [8]
Data relevant for conventional process switches.
Observed: cD = 100 % (based on only one failure)
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Switch AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 3
No. of inventories = 12
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance Time (hours) = 719 424
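The tabulated rates follow from the observed counts as simple point estimates, λ = n/T; a minimal sketch (the zero-failure fallback of assuming one failure is our assumption here, mirroring the "one failure occurring tomorrow" convention noted later in the flame detector dossier):

```python
# Sketch: point estimate of a failure rate from OREDA-style observations,
# lambda = n / T, expressed per 1e6 hours. When no failures are observed,
# a conservative convention is to assume one failure ("occurring tomorrow").

def rate_per_1e6_hrs(n_failures: int, surveillance_hours: float) -> float:
    """Point estimate of the failure rate per 1e6 hours of surveillance."""
    n = max(n_failures, 1)  # conservative floor when no failures are seen
    return n / surveillance_hours * 1.0e6

# OREDA phase III pressure switches: 1 critical D failure in 719 424 hrs
print(round(rate_per_1e6_hrs(1, 719_424), 2))   # ~1.39, as in the dossier

# OREDA phase V: 0 failures in 295 632 hrs -> conservative upper estimate
print(round(rate_per_1e6_hrs(0, 295_632), 2))
```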
λDU = 3.6 per 10⁶ hrs Exida [15]: Generic DP / pressure switch
λSU = 2.4 per 10⁶ hrs
SFF = 40%
T-Book [16]: Pressure sensor
Funct.: 0.44
ST: 1.02
Other crit: 0.37
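The SFF values quoted from Exida follow the IEC 61508 definition SFF = (λS + λDD)/(λS + λD); a minimal sketch (the λDD = 0 split below is an assumption for illustration, chosen so the numbers match the Exida entry above):

```python
# Sketch: Safe Failure Fraction (SFF) per IEC 61508,
#   SFF = (lambda_S + lambda_DD) / (lambda_S + lambda_DD + lambda_DU),
# i.e. the fraction of all failures that are safe or dangerous-detected.

def sff(lam_s: float, lam_dd: float, lam_du: float) -> float:
    """Safe failure fraction from safe, dangerous detected and dangerous
    undetected failure rates (all in the same units)."""
    return (lam_s + lam_dd) / (lam_s + lam_dd + lam_du)

# Illustrative split: lam_S = 2.4, lam_DD = 0.0, lam_DU = 3.6 (per 1e6 hrs)
print(f"SFF = {sff(2.4, 0.0, 3.6):.0%}")  # SFF = 40%
```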
5.1.2 Proximity Switch (Inductive)
λD = 3.5 per 10⁶ hrs cD = 0.15 λDU = 3.0 per 10⁶ hrs
λS = 2.2 per 10⁶ hrs cS = 0.10 λSU = 2.0 per 10⁶ hrs
r = 0.3
Assessment
The estimated coverage is based on expert judgement; we assume 5 % coverage due to line
monitoring of the connections and an additional 10 % detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10 %, since
there is a small probability that such failures are detected before a trip actually occurs. It should
be noted that (SIL rated) limit switches with significantly higher coverage factors are available.
In such cases the mechanical installation of the parts must ensure that alignment problems are
minimised.
The PTIF and the r estimates are mainly based on expert judgements. The PTIF is assumed to be
relatively high since mechanical alignment of the interacting parts is often a problem. Such
failures may not be revealed due to inadequate testing. Similarly, a relatively high proportion of
systematic failures is assumed, resulting in a low r factor.
λDU = 3.6 per 10⁶ hrs Exida [15]: Generic position limit switch
λSU = 2.4 per 10⁶ hrs
SFF = 40%
T-Book [16]: Electronic limit switch
Failure to change state: 1.9 per 10⁷ hrs
Spurious change of state: 5.2 per 10⁷ hrs
5.1.3 Pressure Transmitter
λD = 0.8 per 10⁶ hrs cD = 0.60 λDU = 0.3 per 10⁶ hrs
λS = 0.5 per 10⁶ hrs cS = 0.30 λSU = 0.4 per 10⁶ hrs
r = 0.3
Assessment
The failure rate estimate is mainly based on data from OREDA phase III. Insufficient data were
found in OREDA phase IV to update this estimate, and no data are available from phases V, VI
or VII. The rate of DU failures is estimated assuming a coverage of 60 % for dangerous failures.
This is based on the implemented self-test in the transmitter as well as casual observation by the
control room operator (the latter assumes that the signal can be observed on the VDU and
compared with other signals). If a higher coverage is claimed, e.g. due to automatic comparison
between transmitters, this should be specifically documented/verified. The coverage of safe
failures has been estimated by expert judgement to 30 % (as compared to 50 % in the previous
2006 edition), since safe failures will be difficult to detect before a trip has actually occurred.
The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
Failure Rate References
Overall failure rate (per 10⁶ hrs) – Failure mode distribution – Data source/comment
λcrit = 1.3 λD = 0.8 per 10⁶ hrs Recommended values for calculation in 2006-edition, [12]
λDU = 0.3 per 10⁶ hrs
λSTU = 0.3 per 10⁶ hrs
Assumed cD = 60%
PTIF = 5·10⁻⁴
λcrit = 1.3 λD = 0.8 per 10⁶ hrs Recommended values for calculation in 2004-edition, [13]
λDU = 0.3 per 10⁶ hrs
λSTU = 0.4 per 10⁶ hrs
Assumed cD = 60%
PTIF = 3·10⁻⁴ – 5·10⁻⁴ 1)
1) For smart/conventional respectively
Module: Input Devices
PDS Reliability Data Dossier
Component: Pressure Transmitter
λcrit = 1.3 λDU = 0.1 per 10⁶ hrs Recommended values for calculation in 2003-edition, [14]
λSTU = 0.4 per 10⁶ hrs
Assumed cD = 90%
PTIF = 3·10⁻⁴ – 5·10⁻⁴ 1)
1) For smart/conventional respectively
N/A (D: N/A, ST: N/A) OREDA phase IV database, [6]
Data relevant for conventional pressure transmitters.
Observed: cD = N/A, cST = N/A
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Transmitter AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4
No. of inventories = 21
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 332 784
λcrit = 1.3 (D: 0.64, ST: 0.64) OREDA phase III database, [8]
Data relevant for conventional pressure transmitters.
Observed: cD = 100 % (calculated for transmitters having some kind of self-test arrangement only)
Filter criteria: TAXCOD='PSPR' .AND. FUNCTN='OP' .OR. 'GP'
No. of inventories = 186
Total no. of critical failures = 6
Cal. time = 4 680 182 hrs
λDU = 0.6 per 10⁶ hrs Exida [15]: Generic smart DP / pressure transmitter
SFF = 60%
Fail. to obtain signal: 0.83 T-Book [16]: Pressure transmitter
Fail. to obtain signal: 0.91 T-Book [16]: Pressure difference transmitter / pressure difference cell
5.1.4 Level (Displacement) Transmitter
λD = 1.4 per 10⁶ hrs cD = 0.60 λDU = 0.6 per 10⁶ hrs
λS = 1.6 per 10⁶ hrs cS = 0.30 λSU = 1.1 per 10⁶ hrs
r = 0.3
Assessment
The failure rate estimate is mainly based on data from the OREDA phase III database with
additional data from OREDA phase IV and V. The rate of DU failures is estimated by assuming
coverage of 60% for dangerous failures. This is based on implemented self test in the transmitter
as well as casual observation by control room operator (the latter assumes that the signal can be
observed on the VDU and compared with other signals). If a higher coverage is claimed, special
documentation/verification should be required. The coverage of safe failures has been estimated
by expert judgement to 30 % (as compared to 50 % in the previous 2006 edition), since safe
failures will be difficult to detect before a trip has actually occurred.
The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
λcrit = 3.0 λDU = 0.1 per 10⁶ hrs Recommended values for calculation in 2003-edition, [14]
λSTU = 0.8 per 10⁶ hrs
Assumed cD = 90%
PTIF = 3·10⁻⁴ – 5·10⁻⁴ 1)
1) For smart/conventional respectively
Module: Input Devices
PDS Reliability Data Dossier
Component: Level (Displacement) Transmitter
No. of inventories = 1
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance time (hours) = 49 272
1.9 (D: 0.0, ST: 1.9) OREDA phase IV database, [6]
Data relevant for conventional displacement level transmitters.
Observed: cST = N/A (detection method uncertain)
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Level AND
Inv. Att. Type – process sensor = Transmitter AND
Inv. Att. Level sensing princ. = Displacement AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4
No. of inventories = 17
No. of critical D failures = 0
No. of critical ST failures = 1
Surveillance time (hours) = 530 616
6.17 (D: 4.94, ST: 1.23) OREDA phase III database, [8]
Data relevant for conventional displacement level transmitters.
Observed: cD = 100 % (calculated for transmitters having some kind of self-test arrangement only)
Filter criteria: TAXCOD='PSLE' .AND. FUNCTN='OP' .OR. 'GP'
No. of inventories = 65
Total no. of failures = 50
Cal. time = 1 620 177 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.
λDU = 1.25 per 10⁶ hrs Exida [15]: Generic level (displacement) transmitter
SFF = 58%
Fail. to obtain signal: 2.7 T-Book [16]: Level transmitter
5.1.5 Temperature Transmitter
λD = 0.7 per 10⁶ hrs cD = 0.60 λDU = 0.3 per 10⁶ hrs
λS = 1.3 per 10⁶ hrs cS = 0.30 λSU = 0.9 per 10⁶ hrs
r = 0.3
Assessment
The failure rate estimate is the same as in the 2006 handbook, ref. [12], which is based on
OREDA phase III data combined with OREDA phase IV data and some expert judgement due to
scarce data (there are no temperature transmitters in OREDA phase V, VI or VII). The distribution between
(undetected) dangerous and safe failures is based on the distribution for pressure and flow
transmitters. The given coverage values for dangerous and safe failures are estimated mainly
based on expert judgement and the same argumentation as for pressure and level transmitters.
The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA together with expert judgements. A summary of some of the main arguments
is provided in section 3.3.
Module: Input Devices
PDS Reliability Data Dossier
Component: Temperature Transmitter
0.0 (D: 0.0, ST: 0.0) OREDA phase IV database, [6]
Data relevant for conventional temperature transmitters.
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Temperature AND
Inv. Att. Type – process sensor = Transmitter AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4
No. of inventories = 21
No. of critical D failures = 0
No. of critical ST failures = 0
Cal. time = 735 848
5.1 (D: 5.1) OREDA phase III database, [8]
Data relevant for conventional temperature transmitters.
Observed: cD = 100 % (calculated for transmitters having some kind of self-test arrangement only)
Filter criteria: TAXCOD='PSTE' .AND. FUNCTN='OP' .OR. 'GP'
No. of inventories = 8
Total no. of failures = 7
Cal. time = 197 808 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.
SFF = 63%
Fail. to obtain signal: 1.27 per 10⁶ hrs T-Book [16]: Temperature transmitter
5.1.6 Flow Transmitter
λD = 1.5 per 10⁶ hrs cD = 0.60 λDU = 0.6 per 10⁶ hrs
λS = 2.2 per 10⁶ hrs cS = 0.30 λSU = 1.5 per 10⁶ hrs
r = 0.3
Failure Rate Assessment
The failure rate estimate is the same as in the 2006 handbook, ref. [12], supplemented with
OREDA phase IV and V data. The rate of DU failures is estimated assuming 60 % coverage for
dangerous failures, and the rate of safe undetected failures is estimated assuming 30 % coverage.
The safe failure rate includes ‘Erratic output’ failures. The argumentation for selecting the
coverage values is the same as above for the other transmitters.
The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
Module: Input Devices
PDS Reliability Data Dossier
Component: Flow Transmitter
No. of inventories = 4
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance time (hours) = 197 088
5.70 (D: 2.85, ST: 2.85) OREDA phase IV database, [6]
Data relevant for conventional flow transmitters.
Observed: cD = 0 %, cST = 100 %
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Flow AND
Inv. Att. Type – process sensor = Transmitter AND
(Inv. System = Gas processing OR
Oil processing OR
Condensate processing) AND
Inv. Phase = 4
No. of inventories = 11
No. of critical D failures = 1
No. of critical ST failures = 1
Surveillance time (hours) = 350 880
2.89 (D: 1.24, ST: 1.65) OREDA phase III, [8], Database PS31__.
Data relevant for conventional flow transmitters.
Observed: cD = 100 % (calculated including transmitters having some kind of self-test arrangement only)
Filter criteria: TAXCOD='PSFL' .AND. FUNCTN='OP'.OR.'GP'
No. of inventories = 72
No. of critical D failures = 3
No. of critical ST failures = 4
Cal. time = 2 422 200 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.
λDU = 0.9 per 10⁶ hrs 1) Exida [15]: Generic flow transmitter
λDU = 0.7 per 10⁶ hrs 2)
λDU = 0.5 per 10⁶ hrs 3)
SFF: 60 % – 65 %
1) Measurement type: Coriolis meter
2) Measurement type: Mag meter
3) Measurement type: Vortex shedding
Fail. to obtain signal: 2.6 per 10⁶ hrs T-Book [16]: Flow transmitter
5.1.7 Catalytic Gas Detector
λD = 3.5 per 10⁶ hrs cD = 0.50 λDU = 1.8 per 10⁶ hrs
λS = 1.5 per 10⁶ hrs cS = 0.30 λSU = 1.1 per 10⁶ hrs
r = 0.4
Failure Rate Assessment
The failure rate estimates are primarily based on OREDA phase III and phase IV data. No data on
catalytic gas detectors in OREDA phase V, VI and VII.
Data from RNNP for the period 2003-2008 for gas detection has also been reviewed. In total
184 374 detector tests were performed during this period, resulting in 1 631 failures. Based on
this, a λDU = 1.0·10⁻⁶ can be estimated. It should however be noted that RNNP makes no
distinction between IR and catalytic detectors.
The ratio between dangerous (D) and safe (S) failures is kept the same as in the 2003 handbook.
The rate of DU failures is estimated assuming a coverage of 50 % for dangerous failures
(somewhat lower than for process transmitters, since the chance of casual operator
detection/observation is smaller). The rate of safe failures is estimated assuming a coverage of
30 %. The D failure rate includes ‘No output’ and ‘Very low output’ failures.
The PTIF is based on expert judgements and on the assumption that the detectors are exposed.
The estimated r value is based on observed failure causes for critical detector
failures (40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of
the main arguments is provided in section 3.3.
Module: Input Devices
PDS Reliability Data Dossier
Component: Catalytic Gas Detector
λcrit = 5.0 λD = 3.5 per 10⁶ hrs Recommended values for calculation in 2004-edition, [13]
λDU = 1.8 per 10⁶ hrs
λSTU = 0.9 per 10⁶ hrs
Assumed cD = 50%
PTIF = 3·10⁻⁴ – 0.1 1)
1) For large to small gas leaks (large means > 1 kg/s)
λcrit = 2.3 λDU = 0.6 per 10⁶ hrs Recommended values for calculation in 2003-edition, [14]
λSTU = 0.4 per 10⁶ hrs
λD / λST = 2.3
PTIF = 3·10⁻⁴ – 0.1 1)
1) For large to small gas leaks (large means > 1 kg/s)
0.0 (D: 0.0, ST: 0.0) OREDA phase IV database, [6]
Data relevant for conventional catalytic gas detectors.
Filter:
Inv. Equipment class = Fire & Gas Detectors AND
Inv. Att. Sensing principle = Catalytic AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical
No. of inventories = 24
No. of critical D failures = 0
No. of critical ST failures = 0
Cal. time = 420 480
5.35 (NOO: 3.62, SHH: 0.79, sum D: 4.41; SLL: 0.02, VLO: 0.92, sum ST: 0.94) OREDA phase III database, [8]
Data relevant for conventional catalytic gas detectors. More than 97 % of the detectors have automatic loop test.
Filter criteria: TAXCOD='FGHC', SENSPRI='CATALYTIC'
No. of inventories = 2 046
Total no. of critical failures = 263
Cal. time = 49 185 572 hrs
Observed: cD = 64 % (calculated including detectors having some kind of self-test arrangement only)
λDU = 1.75 per 10⁶ hrs Exida [15]: Generic catalytic HC gas detector
SFF = 65%
5.1.8 IR Point Gas Detector
λD = 2.5 per 10⁶ hrs cD = 0.75 λDU = 0.6 per 10⁶ hrs
λS = 2.2 per 10⁶ hrs cS = 0.50 λSU = 1.1 per 10⁶ hrs
r = 0.4
Failure Rate Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook and is based
on previous estimates as well as additional data from OREDA phase VI.
The rate of DU failures has been estimated assuming coverage for dangerous failures of 75%
(based on observations in OREDA phase IV, V and VI). The D failure rate includes ‘Fail to
function on demand’ and ‘No output’ failures. The rate of safe failures has been significantly
increased based on experiences from OREDA phase V and VI. Furthermore, coverage of 50 %
has been assumed for safe failures. The coverage values are given assuming that the detectors
have built-in self-test and monitoring of the optical path. It is then implicitly assumed that the
connected system has the ability to discriminate detected failures without shutting down (e.g. a
3mA signal gives an alarm not a shutdown).
The PTIF is based on expert judgements and on the assumption that the detectors are exposed.
The estimated r value is based on observed failure causes for critical detector failures
(40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of the main
arguments is provided in section 3.3.
Module: Input Devices
PDS Reliability Data Dossier
Component: IR Point Gas Detector
λcrit = 4.0 λD = 3.3 per 10⁶ hrs Recommended values for calculation in 2004-edition, [13]
λDU = 0.7 per 10⁶ hrs
λSTU = 0.2 per 10⁶ hrs
Assumed cD = 80%
PTIF = 1·10⁻³ – 6·10⁻³ 1,2)
1) Range gives values for small to large gas leaks (large gas leaks are leaks > 1 kg/s)
2) Average over ventilation type and worst conditions
λcrit = 3.6 λDU = 0.7 per 10⁶ hrs Recommended values for calculation in 2003-edition, [14]
λD / λST = 11 λSTU = 0.1 per 10⁶ hrs
PTIF = 1·10⁻³ – 6·10⁻³ 1,2)
1) Range gives values for small to large gas leaks (large gas leaks are leaks > 1 kg/s)
2) Average over ventilation type and worst conditions
λcrit = 5.7 (λD = 1.8 per 10⁶ hrs, λS = 3.9 per 10⁶ hrs) OREDA phase V-VI database, [6], [8]
Data relevant for IR gas detectors.
Observed: cD = 70 %, cS = N/A
Filter:
Inv. Equipment class = Fire & Gas Detectors AND
(Inv. OREDA Phase = 5 OR Inv. Phase = 6) AND
Inv. Equipment type = Hydrocarbon gas AND
Inv. Att. Sensing principle = IR
Observed: Filter:
cD = 100 % Inv. Equipment class = Fire & Gas Detectors AND
cST = NA (Inv. Att. Sensing principle = IR OR
Inv. Att. Sensing principle = IR/UV) AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical
No. of inventories = 54
No. of critical D failures = 4
No. of critical ST failures = 0
Cal. time = 1 148 472
λDU = 0.4 per 10⁶ hrs Exida [15]: Generic IR gas detector
SFF = 78%
4.1 (Ddet: 2.9, Dundet: 1.2, STdet: 0, STundet: 0) Oseberg C, [18]
Data relevant for conventional IR gas detectors.
No. of inventories = 41
Total no. of failures = 26 (4 critical)
Time = 977 472 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.
5.1.9 IR Line Gas Detector
λD = 2.8 per 10⁶ hrs cD = 0.75 λDU = 0.7 per 10⁶ hrs
λS = 2.2 per 10⁶ hrs cS = 0.50 λSU = 1.1 per 10⁶ hrs
r = 0.4
Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook and is based
on previous estimates as well as additional information on IR detectors from OREDA phase VI
(only new data on IR point detectors). It should be noted that data on IR line detectors are scarce,
and therefore experience from IR point detectors has been applied.
As for IR point detectors the rate of DU failures has been estimated assuming coverage for
dangerous failures of 75%, whereas for safe failures coverage of 50 % has been assumed. The
coverage values are given assuming that the detectors have built-in self-test and monitoring of the
optical path. It is then implicitly assumed that the connected system has the ability to discriminate
detected failures without shutting down (e.g. a 3mA signal gives an alarm not a shutdown).
The PTIF is based on expert judgements and on the assumption that the detectors are exposed.
The estimated r value is based on observed failure causes for critical detector
(40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of the main
arguments is provided in section 3.3.
Module: Input Devices
PDS Reliability Data Dossier
Component: IR Line Gas Detector
λcrit = 5.3 λD = 3.3 per 10⁶ hrs Recommended values for calculation in 2004-edition [13]
λDU = 0.7 per 10⁶ hrs
λSTU = 0.6 per 10⁶ hrs
Assumed cD = 80%
PTIF = 1·10⁻² – 6·10⁻² 1,2)
1) Range gives values for small to large gas leaks (large gas leaks are leaks > 1 kg/s)
2) Average over ventilation type and worst conditions
λcrit = 3.6 λDU = 0.7 per 10⁶ hrs Previously recommended values for calculation in 2003-edition [14]
λD / λST = 11 λSTU = 0.1 per 10⁶ hrs
PTIF = 1·10⁻² – 6·10⁻² 1,2)
1) Range gives values for small to large gas leaks (large gas leaks are leaks > 1 kg/s)
2) Average over ventilation type and worst conditions
4.1 (D: 4.1, ST: 0.0) OREDA phase IV+V database, [6], [4]
Data relevant for conventional IR gas detectors.
Observed: Filter:
cD = 100 % Inv. Equipment class = Fire & Gas Detectors AND
cST = N/A Inv. Design Class = Hydrocarbon gas AND
Inv. Att. Sensing principle = PH-EL BEAM AND
Inv. OREDA Phase = 4 + 5 AND
Fail. Severity Class = Critical
No. of inventories = 55
No. of critical D failures = 5
No. of critical ST failures = 0
Cal. time = 1 202 472
5.1.10 Smoke Detector
λD = 1.2 per 10⁶ hrs cD = 0.40 λDU = 0.7 per 10⁶ hrs
λS = 2.0 per 10⁶ hrs cS = 0.30 λSU = 1.4 per 10⁶ hrs
r = 0.4
Assessment
The failure rate estimate is an update of the 2006 figure, which was primarily based on OREDA
phase III together with some phase V data. The rate of DU failures is still estimated assuming a
coverage of 40 % (the coverages observed in OREDA phase III, incomplete and complete, were
29 % and 50 %, respectively). The rates of dangerous and safe failures have been slightly
decreased based on observations from failure reviews and later OREDA phases. For safe failures
a coverage of 30 % has been assumed, mainly based on OREDA phase III observations as well
as expert judgement. It should be noted that for some types of smoke detectors with more
extensive self-test, the coverage may be significantly higher. This must be assessed for each
specific detector type.
The PTIF is based on expert judgements and on the assumption that the detectors are exposed.
The estimated r value is based on observed failure causes for critical detector failures
(40% “expected wear and tear” and 60% “maintenance errors”). A summary of some of the main
arguments is provided in section 3.3.
λcrit = 3.7 λD = 1.3 per 10⁶ hrs Recommended values for calculation in 2004- and 2003-edition, [13], [14]
λDU = 0.8 per 10⁶ hrs
λSTU = 1.2 per 10⁶ hrs
Assumed cD = 40%
PTIF = 10⁻³ – 0.05 1)
1) The range represents different types of fires (smoke/flame)
Module: Input Devices
PDS Reliability Data Dossier
Component: Smoke Detector
λDU = 0.3 per 10⁶ hrs Data from review of safety critical failures on Norwegian onshore plant. Data applicable for optical smoke detectors.
λDU = 0.6 per 10⁶ hrs Data from review of safety critical failures on Norwegian semi-submersible platform. Data applicable for optical smoke detectors.
λDU = 1.65 per 10⁶ hrs Exida [15]: Generic smoke (ionization) detector
λSU = 3.85 per 10⁶ hrs
SFF = 70 %
5.1.11 Heat Detector
λD = 1.0 per 10⁶ hrs cD = 0.40 λDU = 0.6 per 10⁶ hrs
λS = 1.5 per 10⁶ hrs cS = 0.40 λSU = 0.9 per 10⁶ hrs
r = 0.4
Assessment
The failure rate estimate is an update of the figures in the 2006 handbook. The rate of D failures
is estimated assuming coverage of 40% (observed in OREDA phase III incomplete and complete
to be 50% and 36%, respectively). The rate of safe failures is estimated assuming coverage of
40% (previously assumed to be 20%, observed in OREDA (complete) phase III to be
significantly higher).
The PTIF is based on expert judgements given the assumption that the detector is exposed. The
estimated r value is based on observed failure causes for critical detector failures (40%
“expected wear and tear” and 60% “maintenance errors”). A summary of some of the main
arguments is provided in section 3.3.
λcrit = 2.4 λD = 0.9 per 10⁶ hrs Recommended values for calculation in 2004-edition, [13]
λDU = 0.5 per 10⁶ hrs
λSTU = 0.8 per 10⁶ hrs
Assumed cD = 50%
PTIF = 0.05 – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)
Module: Input Devices
PDS Reliability Data Dossier
Component: Heat Detector
λcrit = 2.4 λDU = 0.5 per 10⁶ hrs Previously recommended values for calculation in 2003-edition [14]
λSTU = 0.75 per 10⁶ hrs
λD / λST = 0.6
PTIF = 0.05 – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)
No. of inventories = 23
No. of critical D failures = 0
No. of critical ST failures = 0
Surveillance Time (hours) = 723 120
2.21 (D: 0.82, SPO: 1.39) OREDA phase III database [8]
Data relevant for conventional heat detectors. Both rate-of-rise (23 %) and rate-compensated (77 %) detectors are included. Of the detectors, 89 % have automatic loop test; the residual (11 %) have no self-test feature.
Observed: cD = 50 % (calculated including detectors having some kind of self-test arrangement only)
No. of inventories = 865
Total no. of failures = 79
Cal. time = 24 470 588 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.
λDU = 1.9 per 10⁶ hrs Exida [15]: Generic heat detector
λSU = 3.6 per 10⁶ hrs
SFF = 65%
5.1.12 Flame Detector
λD = 2.7 per 10⁶ hrs cD = 0.70 λDU = 0.8 per 10⁶ hrs
λS = 3.8 per 10⁶ hrs cS = 0.50 λSU = 1.9 per 10⁶ hrs
r = 0.4
Failure Rate Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook [12] (primarily
based on OREDA phase III data). The rate of dangerous failures has been slightly reduced as
compared to the 2006 estimate due to input from operational reviews. Coverage for D failures has
been assumed to be 70 % based on expert judgement. The rate of safe failures is estimated
assuming coverage of 50 %. It should be noted that these coverage values assume that the detectors
have built-in self-test and monitoring of the optics.
The PTIF is based on expert judgements and has been updated based on the fact that the detectors
are now assumed exposed. The estimated r value is based on observed failure causes for critical
detector failures (50% “expected wear and tear” and 50% “maintenance errors”). A summary of
some of the main arguments is provided in section 3.3.
λcrit = 2.4 λD = 0.9 per 10⁶ hrs Recommended values for calculation in 2004-edition, [13]
λDU = 0.5 per 10⁶ hrs
λSTU = 0.8 per 10⁶ hrs
Assumed cD = 60%
PTIF = 3·10⁻⁴ – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)
Module: Input Devices
PDS Reliability Data Dossier
Component: Flame Detector
λcrit = 8.3 λDU = 2.1 per 10⁶ hrs Previously recommended values for calculation in 2003-edition [14]
λSTU = 2.1 per 10⁶ hrs
λD / λST = 1.0
PTIF = 3·10⁻⁴ – 0.5 1)
1) The range represents the occurrence of different types of fires (smoke/flame)
No. of inventories = 27
No. of critical D failures = 0
No. of critical ST failures = 1
Surveillance Time (hours) = 1 686 096
7.2 (D: 3.2, SPO: 4.0) OREDA phase III database [8]
Data relevant for conventional flame detectors. IR (52 %), UV (13 %) and combined IR/UV (35 %) detectors are included. Of the detectors, 75 % have automatic loop test, 3 % have built-in self-test, 15 % have a combination of automatic loop and built-in self-test; the residual (11 %) have no self-test feature.
Observed: cD = 48 % (calculated including detectors having some kind of self-test arrangement only)
No. of inventories = 1 010
No. of failures = 292
Cal. time = 23 136 820 hrs
Note! Only failures classified as "critical" are included in the failure rate estimates.
λDU = 1.4 per 10⁶ hrs Data from review of safety critical failures on Norwegian onshore plant. Data applicable for IR flame detectors.
λDU = 0.2 per 10⁶ hrs 1) Data from review of safety critical failures on Norwegian semi-submersible platform. Data applicable for IR flame detectors.
No. of inventories = 241 detectors
No. of critical DU failures = 0 2)
Cal. time = 4 222 321 hrs 3)
1) When assuming one failure (occurring tomorrow)
2) The failure review focused on DU failures, but classification of other failure modes was also performed; 3 DD and 3 safe failures were also registered.
3) Two years of operation
SFF = 69%
4.5 (D: 2.0, ST: 2.5) Oseberg C report [18]
Data relevant for IR flame detectors.
No. of inventories = 162
No. of failures = 30 (18 critical)
Time = 3 978 240 hrs
Note! It is assumed that only failures classified as "critical" are included in the failure rate estimates.
5.1.13 H2S Detector
λD = 1.0 per 10⁶ hrs   cD = 0.50   λDU = 0.5 per 10⁶ hrs
λS = 0.3 per 10⁶ hrs   cS = 0.30   λSU = 0.2 per 10⁶ hrs
r = 0.4
Failure Rate Assessment
The failure rate estimate is based on OREDA phase V data as well as expert judgement and other
data sources. The rate of DU failures is primarily based on reported "Fail to function on demand"
failures, although in OREDA phase V these failures have been reported as degraded rather than
critical failures. The coverage factors for dangerous and safe failures are assumed to be the same
as for catalytic gas detectors, as is the distribution between dangerous and safe failures.
The PTIF is based on expert judgements and on the assumption that the detectors are exposed. The
estimated r value is assumed the same as for catalytic gas detectors. A summary of some of the
main arguments is provided in section 3.3.
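The tabulated λDU and λSU values follow directly from the total rates and the assumed coverage factors; a minimal sketch of that relation (Python used for illustration only), applied to the H2S detector figures above:

```python
# lambda_U = lambda_total * (1 - c): the share of failures the diagnostics
# do not catch. Rates in failures per 10^6 hrs, H2S detector figures above.
def undetected_rate(total_rate, coverage):
    return total_rate * (1 - coverage)

lam_DU = undetected_rate(1.0, 0.50)  # dangerous undetected: 0.5
lam_SU = undetected_rate(0.3, 0.30)  # safe undetected: 0.21, tabulated as 0.2
```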
5.1.14 ESD Push Button
λD = 0.5 per 10⁶ hrs   cD = 0.20   λDU = 0.4 per 10⁶ hrs
λS = 0.3 per 10⁶ hrs   cS = 0.10   λSU = 0.3 per 10⁶ hrs
r = 0.8
Failure Rate Assessment
The failure rate is based on all listed data sources, also taking into account some expert
judgements. Compared to the 2006 estimate, additional experience from two operational
reviews has been included.
The PTIF as well as the r values are entirely based on expert judgements. A summary of some of
the main arguments is provided in section 3.3.
λDU = 1.2 per 10⁶ hrs   Data from review of safety critical failures on Norwegian onshore plant. Data applicable for manual initiators / pushbuttons.
No. of inventories = 93
No. of critical DU failures = 1 1)
Cal. time = 814 680 hrs 2)
1) The review focused on DU failures; no failures were classified as DD or safe.
2) One year of operation
λDU = 0.2 per 10⁶ hrs 1)   Data from review of safety critical failures on Norwegian semi-submersible platform. Data applicable for manual initiators / pushbuttons.
λS = 0.2 per 10⁶ hrs 1)
1) When adding the experience from onshore plant and offshore installation together
No. of inventories = 203
No. of critical DU failures = 0 2)
Cal. time = 3 556 560 hrs 3)
2) The failure review focused on DU failures; 1 additional failure was classified as safe.
3) Two years of operation
λDU = 0.8 per 10⁶ hrs   Exida [15]: Generic push button
λSU = 0.2 per 10⁶ hrs
SFF = 20%
5.2 Control Logic Units
Below, reliability figures for control logic units are given. Data are given for standard industrial
PLCs, programmable safety systems and hardwired safety systems respectively. The following
general assumptions and notes apply throughout sections 5.2.1–5.2.3:
• A single system with analogue input, CPU/logic and digital output configuration is
generally assumed;
• For the input and output part, figures are given for one channel plus the common part of
the input/output card (except for hardwired safety system where figures for one channel
only are given);
• Single CPU / logic part is assumed throughout;
• If the figures for input and output are to be used for redundant configurations, separate
input cards and output cards must be used since the given figures assume a common part
on each card;
• If separate Ex barriers or other interface devices are used, figures for these must be added
separately;
• The systems are generally assumed used in de-energised to trip functions, i.e. loss of
power or signal will result in a safe state.
λD = 1.8 per 10⁶ hrs   cD = 0.6   λDU = 0.7 per 10⁶ hrs
λS = 1.8 per 10⁶ hrs   cS = 0.2   λSU = 1.4 per 10⁶ hrs
r = 0.1
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.
In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different elements. Based on these estimates as well as
additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 15%, 70% and 15% respectively.
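The percentagewise split described above can be sketched as follows; the total rate used here is a hypothetical illustration, not a handbook figure:

```python
# Distribute an overall control-logic failure rate onto input, logic and
# output using the assumed 15% / 70% / 15% split. The total of 12.0 per
# 10^6 hrs below is hypothetical, chosen only to illustrate the mechanics.
def split_rate(total_rate, fractions):
    return {part: total_rate * f for part, f in fractions.items()}

parts = split_rate(12.0, {"input": 0.15, "logic": 0.70, "output": 0.15})
# the three parts sum back to the total
```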
The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.
*Note that for control logic units only a PTIF for the CPU is given.
Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: Analogue Input
λDU = 0.3 per 10⁶ hrs 1)   Exida [15]: Analogue in – general purpose PLC (1oo1)
λDD = 0.8 per 10⁶ hrs 1)
λSU = 0.2 per 10⁶ hrs 1)
λSD = 0.9 per 10⁶ hrs 1)
SFF = 84 % (analogue input module + 3 ch's.)
1) Includes one analogue in module and one channel
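The SFF figures quoted in these dossiers follow the IEC 61508 definition: all failures except dangerous undetected, divided by the total. A sketch applied to the single-channel rates above; note that the dossier's 84 % covers the module plus three channels, so the single-channel result differs slightly:

```python
# SFF = (lambda_DD + lambda_SU + lambda_SD) / lambda_total, i.e. everything
# except dangerous undetected failures. Single-channel Exida rates above.
def sff(lam_du, lam_dd, lam_su, lam_sd):
    total = lam_du + lam_dd + lam_su + lam_sd
    return (total - lam_du) / total

value = sff(0.3, 0.8, 0.2, 0.9)  # ~0.86 for one channel
```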
λD = 8.8 per 10⁶ hrs   cD = 0.6   λDU = 3.5 per 10⁶ hrs
λS = 8.8 per 10⁶ hrs   cS = 0.2   λSU = 7.0 per 10⁶ hrs
r = 0.1
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.
In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different elements. Based on these estimates as well as
additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 15%, 70% and 15% respectively.
The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.
Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: CPU
Filter:
Inv. Equipment class = Control Logic Units AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical
No. of inventories = 71
No. of critical D failures = 103
No. of critical ST failures = 27
Cal. time = 1 733 664 hrs
λDU = 1.5 per 10⁶ hrs 1)   Exida: Main processor – general purpose PLC (1oo1)
λDD = 3.7 per 10⁶ hrs 1)
λSU = 0.7 per 10⁶ hrs 1)
λSD = 9.1 per 10⁶ hrs 1)
1) Includes main processor and power supply
λD = 1.8 per 10⁶ hrs   cD = 0.6   λDU = 0.7 per 10⁶ hrs
λS = 1.8 per 10⁶ hrs   cS = 0.2   λSU = 1.4 per 10⁶ hrs
r = 0.1
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.
In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different elements. Based on these estimates as well as
additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 15%, 70% and 15% respectively.
The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.
*Note that for control logic units only a PTIF for the CPU is given.
Module: Control Logic Units – Standard Industrial PLC
PDS Reliability Data Dossier
Component: Digital Output
λDU = 0.2 per 10⁶ hrs 1)   Exida [15]: Digital out – general purpose PLC (1oo1)
λDD = 0.4 per 10⁶ hrs 1)
λSU = 0.1 per 10⁶ hrs 1)
λSD = 0.5 per 10⁶ hrs 1)
1) Includes one digital out low module and one channel
λD = 1.6 per 10⁶ hrs   cD = 0.9   λDU = 0.16 per 10⁶ hrs
λS = 1.6 per 10⁶ hrs   cS = 0.2   λSU = 1.3 per 10⁶ hrs
r = 0.4
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.
In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different functional parts. Based on these estimates as
well as additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 20%, 60% and 20% respectively.
The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.
*Note that for control logic units only a PTIF for the CPU is given.
Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: Analogue Input
λDU = 0.1 per 10⁶ hrs 1)   Exida [15]: Analogue in – generic SIL 2 certified PLC (1oo1D)
λDD = 0.9 per 10⁶ hrs 1)
λSU = 0.1 per 10⁶ hrs 1)
λSD = 1.0 per 10⁶ hrs 1)
SFF = 95 % (analogue input module + 1 channel)
1) Includes one analogue in module and one channel
λD = 4.8 per 10⁶ hrs   cD = 0.9   λDU = 0.48 per 10⁶ hrs
λS = 4.8 per 10⁶ hrs   cS = 0.2   λSU = 3.8 per 10⁶ hrs
r = 0.4
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.
In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different functional parts. Based on these estimates as
well as additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 20%, 60% and 20% respectively.
The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.
Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: CPU
Filter:
Inv. Equipment class = Control Logic Units AND
Inv. Phase = 4 AND
Fail. Severity Class = Critical
No. of inventories = 71
No. of critical D failures = 103
No. of critical ST failures = 27
Cal. time = 1 733 664 hrs
λDU = 0.2 per 10⁶ hrs 1)   Exida: Main processor – generic SIL 2 certified PLC (1oo1D)
λDD = 2.9 per 10⁶ hrs 1)
λSU = 0.1 per 10⁶ hrs 1)
λSD = 9.2 per 10⁶ hrs 1)
1) Includes main processor and power supply
SFF = 98.5 % (main processor)
    = 100 % (power supply)
λD = 1.6 per 10⁶ hrs   cD = 0.9   λDU = 0.16 per 10⁶ hrs
λS = 1.6 per 10⁶ hrs   cS = 0.2   λSU = 1.3 per 10⁶ hrs
r = 0.4
Assessment
The presented failure rates are updated values from the 2006 handbook, [12], where a common
failure rate was presented for input, logic and output. Since no new OREDA data for control
logic has been collected in recent years, the 2006 figures were based on manufacturer
data as well as judgements made by the project group.
In this new edition of the handbook safety system manufacturers (ABB, HIMA, Kongsberg and
Siemens) have again been asked to provide their “best estimate failure rates” including
percentagewise distribution between the different functional parts. Based on these estimates as
well as additional judgements, updated failure rates have been provided based on an assumed
distribution between input, logic and output of 20%, 60% and 20% respectively.
The estimated coverage factors, PTIF and r values are based on expert judgements. A summary of
some of the main arguments is provided in section 3.3.
*Note that for control logic units only a PTIF for the CPU is given.
Module: Control Logic Units – Programmable Safety System
PDS Reliability Data Dossier
Component: Digital Output
λDU = 0.01 per 10⁶ hrs 1)   Exida [15]: Digital out – generic SIL 2 certified PLC (1oo1D)
λDD = 0.25 per 10⁶ hrs 1)
λSU = 0.01 per 10⁶ hrs 1)
λSD = 0.93 per 10⁶ hrs 1)
1) Includes one digital out low module and one channel
r = 0.8
Assessment
Based on the estimate in the 2006-handbook and input from system vendor (Bjørge Safety
System), a distribution between analogue input, logic and digital output of 40%, 30% and 30%
respectively has been assumed.
The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or they will result in a trip action (SU). Hence, the
coverage for both dangerous and safe failures is assumed to be zero.
The PTIF and r values are based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.
*Note that for control logic units only a PTIF for the logic unit is given.
5.2.3.2 Logic
Module: Control Logic Units – Hardwired Safety System
PDS Reliability Data Dossier
Component: Logic
r = 0.8
Assessment
Based on the estimate in the 2006-handbook and input from system vendor (Bjørge Safety
System), a distribution between analogue input, logic and digital output of 40%, 30% and 30%
respectively has been assumed.
The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or they will result in a trip action (SU). Hence, the
coverage for both dangerous and safe failures is assumed to be zero.
The PTIF and r values are based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.
r = 0.8
Assessment
Based on the estimate in the 2006-handbook and input from system vendor (Bjørge Safety
System), a distribution between analogue input, logic and digital output of 40%, 30% and 30%
respectively has been assumed.
The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or they will result in a trip action (SU). Hence, the
coverage for both dangerous and safe failures is assumed to be zero.
The PTIF and r values are based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.
*Note that for control logic units only a PTIF for the logic unit is given.
5.3 Final Elements
5.3.1 ESV/XV
λD = 3.0 per 10⁶ hrs   cD = 0.30   λDU = 2.1 per 10⁶ hrs
λS = 2.3 per 10⁶ hrs   cS = 0.10   λSU = 2.1 per 10⁶ hrs
λcrit = 5.3 per 10⁶ hrs   PTIF = 1·10⁻⁴ (standard functional testing)
r = 0.5
Assessment
The failure rate estimate is an update of the previous estimate in the 2006 handbook [12]. Data
from OREDA phases V-VII indicate a higher rate of dangerous failures than the previous
estimate. The data (and other sources) also indicate a somewhat lower proportion of safe
failures than the 2006 estimate.
Data from RNNP for the period 2003-2008 for riser ESVs has also been reviewed. In total some
6239 valve tests have been performed during this period, resulting in 96 failures. Based on this, a
λDU = 1.8·10⁻⁶ (incl. pilot valve) can be estimated. It should be noted that this only includes
failures revealed through functional testing.
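The RNNP-based estimate above is consistent with dividing the failure count by the accumulated time at risk; a sketch, where the annual test interval (8760 hrs) is an assumption made here, not stated in the source:

```python
# lambda_DU ~ failures / (tests * test interval). The 8760-hr (annual)
# interval is an assumption for this sketch.
HOURS_PER_YEAR = 8760

def rate_from_tests(failures, tests, interval_hours):
    return failures / (tests * interval_hours)

lam_du = rate_from_tests(96, 6239, HOURS_PER_YEAR)  # ~1.8e-6 per hour
```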
There is thus a relatively large difference between the failure rate indicated by the RNNP data
and the rate obtained from the OREDA phase V-VII data. One explanation may be that the RNNP
data include only riser ESV data (tighter follow-up of riser valves). The main reason, however, is
assumed to be that the OREDA data include a large portion of dangerous failures revealed
in-between tests by other detection methods, whereas RNNP only reports test data.
The coverage for dangerous failures has been slightly increased to 30% due to information from
OREDA phase V-VII, where it appears that a high fraction of dangerous failures (more than 50%)
is detected by operator observation. It should be noted that this is not diagnostic coverage in its
true meaning (e.g. ref. the IEC definition), but it does imply that dangerous faults are detected in
between tests. For valves that are never operated except for testing, the coverage should
therefore be lower.
The size of the PTIF will vary depending on the completeness of the functional testing. Here, a
standard functional test where the valve is fully closed but not tested for internal leakage has been
assumed.
The estimated r is based on reported failure causes in OREDA as well as expert judgements. A
summary of some of the main arguments is provided in section 3.3.
Module: Final Elements
PDS Reliability Data Dossier
Component: ESV/XV (ex. pilot)
λcrit = 5.4   λD = 2.7 per 10⁶ hrs   Recommended values for calculation in 2004-edition [13]
λDU = 2.0 per 10⁶ hrs
λSTU = 2.7 per 10⁶ hrs
Assumed cD = 25%
PTIF = 10⁻⁶ – 10⁻⁵ 1)
1) For complete and incomplete functional testing respectively.
λcrit = 1.6   λDU = 1.3 per 10⁶ hrs   Previously recommended values for calculation in 2003-edition [14]
λSTU = 0.3 per 10⁶ hrs
λD / λST = 4.3
PTIF = 10⁻⁶ – 10⁻⁵ 1)
1) For complete and incomplete functional testing respectively.
No. of installations = 13
No. of inventories = 125
No. of critical D failures = 47
No. of critical ST failures = 21
Surveillance Time (hours) = 5 517 120
5.3.2 ESV, X-mas Tree
λD = 1.1 per 10⁶ hrs   cD = 0.30   λDU = 0.8 per 10⁶ hrs
λS = 0.9 per 10⁶ hrs   cS = 0.10   λSU = 0.8 per 10⁶ hrs
λcrit = 2.0 per 10⁶ hrs   PTIF = 1·10⁻⁴ (standard functional testing)
r = 0.5
Assessment
The failure rate estimate is an update of the previous 2006 estimate [12] (which was based
primarily on OREDA phase III, with some OREDA phase IV data). Additional data from phase
VII on X-mas tree valves indicate a somewhat lower dangerous failure rate than the OREDA
phase III data, but the aggregated exposure time is smaller in phase VII than in phase III.
Data from RNNP for the period 2003-2008 for X-mas tree wing and master valves has also been
reviewed. In total some 29032 valve tests have been performed during this period, resulting in
317 failures. Based on this, a λDU = 1.2·10⁻⁶ can be estimated. It should be noted that this only
includes failures revealed through functional testing. Also, note that the RNNP data include the
entire valve, i.e. also the pilot valve, and are therefore not directly comparable to the OREDA
data, where the pilot valve has been excluded.
Based on new data from OREDA and RNNP, it appears that the rate of dangerous failures may be
somewhat lower than previously assumed. The amount of new OREDA data is however
somewhat scarce and the RNNP data is not directly comparable. The rate of DU failures has
therefore been kept in line with the 2006 estimate.
For similar reasons as for the ESV/XV valves, the coverage for dangerous failures has been
slightly increased from 25% to 30%. As for ESV/XVs, the proportion of safe failures (as
compared to dangerous failures) has been reduced in line with data from OREDA and other sources.
The size of the PTIF will vary depending on the completeness of the functional testing. Here, a
standard functional test where the valve is fully closed but not tested for internal leakage has been
assumed.
The estimated r is based on reported failure causes in OREDA as well as expert judgements. A
summary of some of the main arguments is provided in section 3.3.
λcrit = 2.1   λD = 1.1 per 10⁶ hrs   Recommended values for calculation in 2004-edition [13]
λDU = 0.8 per 10⁶ hrs
λSTU = 1.0 per 10⁶ hrs
cD = 25%
PTIF = 10⁻⁶ – 10⁻⁵ 1)
1) For complete and incomplete functional testing respectively.
λcrit = 1.5   λDU = 0.8 per 10⁶ hrs   Previously recommended values for calculation in 2003-edition [14]
λSTU = 0.5 per 10⁶ hrs
λD / λST = 1.1
PTIF = 10⁻⁶ – 10⁻⁵ 1)
1) For complete and incomplete functional testing respectively.
λcrit = 0.8   λD = 0.8 per 10⁶ hrs   OREDA phase VII database [6]
Data relevant for x-mas tree production and injection valves.
Observed:
cD = N/A
cS = N/A
Filter:
Inv. Eq. Class = Valves AND
(Inv. Att Application = PROD MASTER OR
 Inv. Att Application = PROD WING OR
 Inv. Att Application = PROD SWAB OR
 Inv. Att Application = INJ MASTER) AND
Fail. Item Failed <> Pilot valve AND
Fail. Subunit Failed <> Control & Monitoring
Module: Final Elements
PDS Reliability Data Dossier
Component: ESV, X-mas Tree Valve (ex. pilot)
No. of inventories = 18
No. of critical D failures = 0
No. of critical ST failures = 1
Cal. time = 902 976 hrs
Crit: 7.36   DOP: 0.15   OREDA phase III database [8]
             EXL: 1.84   Data relevant for wellhead ESD/PSD valves, main valve or actuator.
             FTC: 0.77
             FTO: 0.46
             INL: 2.30
             LCP: 1.69
             PLU: 0.15
No. of inventories = 349
Number of critical failures = 48
Cal. time = 6 518 058 hrs
λD = 2.6 per 10⁶ hrs   cD = 0.20   λDU = 2.1 per 10⁶ hrs
r = 0.5
Assessment
The failure rate for blowdown valves is an update of the previous estimate in the 2006 handbook,
[12] based on new data from OREDA phase V and VI as well as data from RNNP.
Data from RNNP for the period 2004-2008 for blowdown valves has been reviewed. In total
some 15392 valve tests have been performed during this period, resulting in 397 failures. Based
on this, a λDU = 2.9·10⁻⁶ (incl. pilot valve) can be estimated. This is in line with the DU estimate
for blowdown valves given in the 2006 PDS handbook.
Data from OREDA phase V-VII, on the other hand, indicate a lower rate of dangerous (and safe)
failures than the 2006 estimate, which was primarily based on OREDA phase IV data. The
recorded data from phase V-VII are however significantly less extensive than for phase IV
(approximately half the surveillance time).
Based on the above, the rate of DU failures has been kept approximately the same as in the 2006
edition, whereas the rate of safe failures has been somewhat reduced. The coverage for dangerous
failures has been reduced to 20%, since blowdown valves will rarely be operated in-between
tests and therefore few dangerous failures will be detected by operator observation.
The PTIF and the r values are assumed the same as for ESV/XV valves (where the PTIF is given
assuming a normal/average functional testing standard).
Failure Rate References
Overall failure rate (per 10⁶ hrs) | Failure mode distribution | Data source/comment
Module: Final Elements
PDS Reliability Data Dossier
Component: Blowdown Valve (ex. pilot)
λcrit = 5.4   λD = 2.7 per 10⁶ hrs   Recommended values for calculation in 2006-edition [12]
λDU = 2.0 per 10⁶ hrs
λSTU = 2.7 per 10⁶ hrs
PTIF = 10⁻⁴
λcrit = 3.7   λD = 2.7 per 10⁶ hrs   Recommended values for calculation in 2004-edition [13]
λDU = 2.0 per 10⁶ hrs
λSTU = 1.0 per 10⁶ hrs
PTIF = 10⁻⁶ – 10⁻⁵ 1)
1) For complete and incomplete functional testing respectively
λcrit = 2.0   λD = 1.6 per 10⁶ hrs   OREDA phase V-VII database, [4] and [6]
λS = 0.4 per 10⁶ hrs
Data relevant for blowdown valves. Note: these data also include the pilot valve.
Observed: Filter:
cD = N/A Inv. Equipment class = VALVES AND
cST = N/A Inv. Att. Application = BLOWDOWN AND
Inv. OREDA Phase = 5 - 7 AND
Fail. Severity Class = Critical
No. of inventories = 50
No. of critical D failures = 4
No. of critical S failures = 1
Surveillance Time (hours) = 2 442 984
Crit: 6.40   D: 5.52    OREDA phase IV database [6]
             ST: 0.88   Data relevant for blowdown valves.
Note: these data also include the pilot valve.
Observed:
cD = N/A Filter:
cST = N/A Inv. Equipment class = VALVES AND
Inv. Att. Application = BLOWDOWN AND
Inv. OREDA Phase = 4
No. of inventories = 92
No. of critical D failures = 25
No. of critical S failures = 4
Surveillance Time (hours) = 4 532 640
λD = 1.1 per 10⁶ hrs   cD = 0.30   λDU = 0.8 per 10⁶ hrs
λS = 1.9 per 10⁶ hrs   cS = 0.10   λSU = 1.7 per 10⁶ hrs
r = 0.4
Assessment
The failure rate estimate is an update of the previous 2006 estimate based on new data from
OREDA phases VI and VII as well as other sources. Note that part of the failures reported under
"control and monitoring" (approx. 50%) are included as part of the valve itself. The distribution
between dangerous failures and safe failures has been kept the same as in the previous edition.
The coverage factor for D failures has been estimated at 30%, based on registered detection
methods in OREDA phase IV and V-VII. As for ESV/XV valves, this coverage includes some
manual observation by operators.
Based on the above and the new data on solenoids, the rate of DU failures has been slightly
reduced as compared to the 2006 estimate, whereas the rate of safe failures has been kept the
same.
*The PTIF for pilot valve is included as part of the PTIF for the valve itself.
The estimated r is based on reported failure causes in OREDA as well as expert judgements. A
summary of some of the main arguments is provided in section 3.3.
Module: Final Elements
PDS Reliability Data Dossier
Component: Pilot/Solenoid Valve
λcrit = 3.2   λD = 1.3 per 10⁶ hrs 1)   Recommended values for calculation in 2004-edition [13]
λDU = 0.9 per 10⁶ hrs
λSTU = 1.3 per 10⁶ hrs
1) PTIF for pilot valve included in PTIF for main valve.

λcrit = 4.2   λDU = 1.4 per 10⁶ hrs 1)   Recommended values for calculation in 2003-edition [14]
λSTU = 1.8 per 10⁶ hrs
λD / λST = 0.7
1) PTIF for pilot valve included in PTIF for main valve.
5.3.5 Process Control Valve
r = 0.6
Assessment
The figures for control valves have been updated as compared to the 2006 handbook, [12]. The
failure rate estimates are based on a “weighted” average of the OREDA phase III – V data.
Included in the λD failures are all ‘fail to close’ (FTC) failures, 50% of the ‘delayed operation’
(DOP) failures and 25% of the ‘fail to regulate’ (FTR) failures. Hence, only the failure modes
assumed relevant for shutdown purposes are included. Included in the safe failures (S) are
‘spurious operation’ and ‘fail to open’ failures as well as 25% of the ‘fail to regulate’ failures (i.e.
we assume that only 50% of the FTR failures are critical with respect to spurious operation or
valve closure). Note that no split has been made between small and large control valves (as was
done in [13] and [14]).
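The weighting of OREDA failure modes into λD described above can be sketched as follows, using the phase V mode rates from the dossier in this section (per 10⁶ hrs); note that the handbook's own estimate also weights in phases III and IV, so this is one input only:

```python
# lambda_D = FTC + 0.5*DOP + 0.25*FTR: all fail-to-close failures, half of
# the delayed-operation failures and a quarter of the fail-to-regulate
# failures, i.e. the shutdown-relevant share of the critical failures.
def dangerous_rate(ftc, dop, ftr):
    return ftc + 0.5 * dop + 0.25 * ftr

# OREDA phase V mode rates (FTC 0.41, FTR 1.23; no DOP recorded in that
# phase), giving the phase V contribution only.
lam_d_phase5 = dangerous_rate(0.41, 0.0, 1.23)  # ~0.72 per 10^6 hrs
```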
Based on the registered observation method for the relevant failure modes, as well as expert
judgement, a coverage of 50% for both dangerous and safe failures has been estimated. It is
implicitly assumed that the control valve is used in normal operation, resulting in a relatively
high coverage.
In some cases (e.g. on some onshore plants) selected control valves may be used solely for
shutdown purposes. In this case the valves will be operated infrequently, resulting in a
significantly lower coverage factor. For control valves used only as shutdown valves, it is
suggested to reduce the coverage to 20%, giving λDU and λSU estimates of 3.5 per 10⁶ hrs and
2.0 per 10⁶ hrs respectively.
The PTIF and r values are entirely based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.
*Data for control valves are mainly collected for valves in control service and not from
applications where control valves are used for on/off shutdown service. The solenoid valves will
normally not be part of the control function and therefore no solenoid valve failures are registered
under control valves in OREDA. When considering failure rates for control valves used for
shutdown purposes, the failure rate of a solenoid valve should therefore be added.
PTIF = 10⁻⁵
Crit: 2.9   FTC: 0.41   OREDA phase V database [6]
            FTO: 0.82   Data relevant for process control valves including pilot valve etc. Note! All sizes are included.
            FTR: 1.23
            LCP: 0.41
Filter:
Inv. Equipment class = VALVES AND
(Inv. System = Gas export OR
Inv. System = Gas processing OR
Inv. System = Oil export OR
Inv. System = Oil processing OR
Inv. System = Condensate processing OR
Inv. System = Gas (re)injection OR
Inv. System = Gas production OR
Inv. System = Gas treatment OR
Inv. System = Oil production) AND
Inv. OREDA Phase = 5 AND
Inv. Att. Application = Process Control
No. of inventories = 54
No. of critical failures = 7
Calendar time (hours) = 2 446 080
Module: Final Elements
PDS Reliability Data Dossier
Component: Process Control Valve
For OREDA data only failures classified as ‘fail to open’ are considered as D failures. For safe
failures, the critical failure modes ‘spurious operation’, ‘leakage in closed position’ and ‘fail to
close’ have been included. Note that for relief valves, operational time is used in the failure rate
estimates. Based on all OREDA data from phases IV-VII, a weighted dangerous failure rate of 1.9
per 10⁶ hours can be estimated. Similarly, a weighted average safe failure rate of 1.0 per 10⁶
hours can be found.
In the RNNP project, data on PSVs are available for the period 2004-2008. A total of 53347
valve tests have been performed, resulting in 2226 failures. Assuming annual testing, these data
give an estimated λDU of 4.8·10⁻⁶ per hour. On many installations the PSVs are only tested every
second year. Assuming a test interval of 2 years, a λDU of 2.4·10⁻⁶ per hour results.
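The sensitivity of this estimate to the assumed test interval can be sketched as:

```python
# Same RNNP counts, two candidate test intervals: annual (8760 hrs) and
# every second year (17520 hrs). Doubling the demand time halves the rate.
def rate_from_tests(failures, tests, interval_hours):
    return failures / (tests * interval_hours)

annual = rate_from_tests(2226, 53347, 8760)     # ~4.8e-6 per hour
biennial = rate_from_tests(2226, 53347, 17520)  # ~2.4e-6 per hour
```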
As seen, the data from RNNP give somewhat higher λDU values than the latest OREDA data.
Since the amount of RNNP data is very extensive, the rate of dangerous failures for PSVs has been
slightly increased as compared to the 2006 estimate. The rate of safe failures has been kept
approximately the same.
The given λDU applies for a fail-to-open failure within 20% of the set point pressure. If a critical
failure is defined as fail to open at a higher pressure, a reduced failure rate may be applied. For
the failure mode 'fail to open before test pressure', a λDU = 1.1·10⁻⁶ per hour is suggested (i.e. a
50% reduction as compared to the rate of failures to open within 20% of the set point, ref. [17]).
The PTIF and r values are entirely based on expert judgements. A summary of some of the main
arguments is provided in section 3.3.
Module: Final Elements
PDS Reliability Data Dossier
Component: Pressure Relief Valve
λcrit = 1.0   λD = 0.7 per 10⁶ hrs   OREDA phase V-VII database [4], [6]
λS = 0.3 per 10⁶ hrs
Data relevant for self-acting or self-acting/pilot actuated relief valves.
Observed:
cD = N/A Filter:
cST = N/A Inv. Equipment class = VALVES AND
Inv. Att. Application = Relief AND
Inv. OREDA Phase = 5 - 7
r = 0.6
Assessment
The failure rate applies to deluge valves and is based on data from RNNP as well as OREDA
(a limited population with only diaphragm type valves), taking into account some expert
judgements. The coverage for both D and S failures has been assumed to be zero.
In the RNNP project, test data for deluge valves for the period 2004-2008 are available. A total
of 17284 deluge valve tests have been performed, resulting in 163 failures. With 6 and 12
monthly testing, these data give an estimated λDU of 2.2·10^-6 and 1.1·10^-6 per hour, respectively.
The RNNP data are assumed to include both diaphragm and Inbal type deluge valves.
The PTIF and r values for deluge valves are entirely based on expert judgements.
Failure Rate References

Failure mode distribution:
  λcrit = 8.8 per 10^6 hrs
  λD = 8.8 per 10^6 hrs 1)
Observed coverage:
  cD = 0%
  cST = N/A
Data source: OREDA phase VI, [4]
Filter: Inv. Equipment class = Valves AND Inv. Att. Application = DELUGE AND
Inv. OREDA Phase = 6
  No. of inventories = 43
  No. of critical fail to open failures = 10
  No. of critical safe failures = 0
  Operational time (hours) = 1 130 040
1) The limited population only includes diaphragm type deluge valves from one installation.
7 of the dangerous failures were due to improper design.

λDU = 4.7·10^-6 per hour (OLF 070, based on PDS-BIP data, [19])
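The observed OREDA-based rate can be checked directly from the dossier figures, since for relief and deluge valves the failure rate is estimated as critical failures per operational hour (a sketch using the numbers quoted above):

```python
# Observed failure rate from OREDA-style surveillance data:
# critical failures divided by accumulated operational time.
failures = 10                      # critical fail to open failures (dossier above)
operational_hours = 1_130_040      # operational time from the dossier above

rate_per_1e6_hrs = failures / operational_hours * 1e6
print(f"lambda_crit = {rate_per_1e6_hrs:.1f} per 10^6 hours")  # ~8.8
```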
5.3.8 Fire Damper
r = 0.7
Assessment
The failure rate applies to fire dampers and is based on data from different installations, taking
into account some expert judgements. The coverage for both D and S failures has been assumed
to be zero.
The PTIF and r values for fire dampers are entirely based on expert judgements.
No. of inventories = 57
No. of critical DU failures = 3 1)
Cal. time = 998 640 hrs 2)
1) The failure review focused on DU failures; 1 additional failure was classified as safe.
2) Two years of operation.
λDU = 7.3·10^-6 per hour (OLF 070, based on PDS-BIP data, [19])
5.3.9 Circuit Breaker
r = 0.6
Assessment
The failure rate applies to large circuit breakers and is based on the listed data sources, taking
into account some expert judgements. The coverage for both D and S failures has been assumed
to be zero.
The PTIF and r values for circuit breakers are entirely based on expert judgements.
λDU = 0.6 per 10^6 hrs (Exida [15]: Generic motor starter)
λSU = 0.9 per 10^6 hrs
SFF = 60%
5.3.10 Relay
r = 0.6
Assessment
The failure rate applies to relays and smaller circuit breakers and is based on the listed data
sources, taking into account some expert judgements. The coverage for both D and S failures has
been assumed to be zero.
The PTIF and r values for relays are entirely based on expert judgements.
109
5.3.11 Downhole Safety Valve – DHSV
r = 0.5
Assessment
The updated failure rates for the DHSV are based on two main sources:
• internal SINTEF data, which give an estimated λD of 2.0 per 10^6 hrs, and
• updated test data from RNNP for the period 2003-2008. Here, 25926 valve tests have
been performed, resulting in 764 failures. Assuming an average test interval of 6 months,
this gives an estimated λDU of 6.7 per 10^6 hrs. If tested annually, the corresponding λDU
becomes 3.4 per 10^6 hrs.
Furthermore, the same distribution between dangerous and safe failures as for topside ESV/XV
valves is assumed. Zero coverage has been assumed for both S and D failures. PTIF and r are
based on expert judgements.
λDU = 3.4 – 6.7 per 10^6 hrs. Fail to close or too high internal leakage rate.
Data from RNNP, [9].

λDU = 3.6 per 10^6 hrs. Data from review of safety critical failures on a Norwegian
semi-submersible platform.
  No. of inventories = 16
  No. of critical DU failures = 1 1)
  Cal. time = 280 320 hrs 2)
1) Focus on DU failures. Reporting on other failure types questionable.
2) Two years of operation.
In the present version of the PDS data handbook, additional subsea data from the new OREDA
2009 Handbook have been utilised, thus providing a much better basis for the suggested values. It
should however be noted that for some equipment groups the population is still limited.
For the subsea equipment, the focus has been on dangerous failures, and only the coverage
factor for dangerous failures, cD, is specified. Hence, the rate of undetected spurious trip failures,
λSU, is not given. Values for the safe failure fraction (SFF) have been indicated. It should be noted
that the SFF figures are mainly based on the reported failure mode distributions in OREDA
subsea (as well as some additional expert judgements) and will therefore rely on the quality of the
failure reporting in OREDA. Higher (or lower) SFFs than given in the tables may therefore apply
for specific equipment types, and this should in such cases be documented separately.
Furthermore, specific β and PTIF values for subsea components are not given. As a starting point,
estimates for topside equipment can be used for these unspecified parameters. They should,
however, be assessed on a case by case basis depending on their specific (subsea) application.
Table 11 briefly summarizes the discussion underlying the proposed data for subsea equipment.
For more detailed descriptions about the equipment configurations, the equipment boundaries and
the data, reference is made to the new OREDA 2009 subsea handbook, [3]. For comments or
feedback concerning the OREDA subsea data, contact the OREDA project manager or one of the
participating companies, ref. http://www.oreda.com.
ESD/PSD logic including analogue input and digital output, located topside*:
  λcrit = 15.6 1), λD = 8.0 1), cD = 90%, λDU = 0.8 1), SFF = 95%
  Ref. section 5.2.2. Data for programmable topside safety system (single system) referred.
  * Topside located ESD/PSD node which may communicate with the subsea equipment via
  the master control station (MCS).

MCS – Master control station, located topside*:
  λcrit = 9.4 1), λD = 2.8 1), cD = 60%, λDU = 1.1 1), SFF = 88%
  OREDA Subsea Handbook 2009, [3], Tax. No. 5.1. Master control station (25 off, 11 crit.
  failures).
  Based on reported critical failures in OREDA, a distribution between safe and dangerous
  failures of 70% / 30% has been assumed. Further, a coverage of 60% for dangerous
  failures has been assumed.
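The tabulated MCS values hang together through the standard IEC 61508 relations. The following sketch (assumed relations, with the 70%/30% safe/dangerous split and 60% coverage stated above; rates per 10^6 hours) reproduces the table entries:

```python
# Assumed standard relations: lambda_D = dangerous fraction of lambda_crit,
# lambda_DD = c_D * lambda_D, lambda_DU = lambda_D - lambda_DD,
# SFF = (lambda_S + lambda_DD) / lambda_crit.
lam_crit = 9.4              # critical failure rate, per 10^6 hrs (table value)
dangerous_fraction = 0.30   # assumed 70% safe / 30% dangerous distribution
c_d = 0.60                  # assumed coverage for dangerous failures

lam_d = lam_crit * dangerous_fraction    # dangerous failure rate
lam_s = lam_crit - lam_d                 # safe failure rate
lam_dd = lam_d * c_d                     # dangerous detected
lam_du = lam_d - lam_dd                  # dangerous undetected
sff = (lam_s + lam_dd) / lam_crit        # safe failure fraction

print(f"lambda_D = {lam_d:.1f}, lambda_DU = {lam_du:.1f}, SFF = {sff:.0%}")
# -> lambda_D = 2.8, lambda_DU = 1.1, SFF = 88%
```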
SEM – subsea electronic module, located in subsea control module (SCM):
  λcrit = 9.9 1), λD = 4.0 1), cD = 70%, λDU = 1.2 1), SFF = 84%
  OREDA Subsea Handbook 2009, [3], Tax. No. 5.1. Subsea electronic module (461 off, 138
  crit. failures).
Umbilical power/signal line (per line):
  λcrit = 0.51 1), λD = 0.36 1), cD = 80%, λDU = 0.07 1), SFF = 86%
  OREDA Subsea Handbook 2009, [3], Tax. No. 5.1. Static umbilical, power/signal line (407
  off, 8 crit. failures).

Subsea isolation valve, SSIV (part of subsea isolation system, SSIS):
  λcrit = 0.52* 1), λD = 0.21 1), cD = 0%, λDU = 0.21 1), SFF = 60%
  OREDA Subsea Handbook 2009, [3], Tax. No. 5.4. Valve subsea isolation (149 off, 0 crit.
  failures).

Production master valve (PMV) / Production wing valve (PWV):
  λcrit = 0.26 1), λD = 0.18 1), cD = 0%, λDU = 0.18 1), SFF = 30%
  OREDA Subsea Handbook 2009, [3], Tax. No. 5.8. Subsea X-mas tree; Valve process
  isolation (2267 off, 19 crit. failures).
Chemical injection valve (CIV):
  λcrit = 0.37* 1), λD = 0.22 1), cD = 0%, λDU = 0.22 1), SFF = 40%
  OREDA Subsea Handbook 2009, [3], Tax. No. 5.8. Subsea X-mas tree; Valve utility isolation
  (928 off, 4 crit. failures).
6 REFERENCES
[19] OLF Guideline 070: "Application of IEC 61508 and IEC 61511 in the Norwegian Petroleum
Industry". The Norwegian Oil Industry Association, rev. 02, 2004.
[20] Angela Summers: "IEC Product Approval – Veering Off Course". Article posted
11.06.08 on www.controlglobal.com, 2008.
[21] Centre for Chemical Process Safety (CCPS): Guidelines for Safe and Reliable Instrumented
Protective Systems. Wiley, 2007.
[22] Béla G. Lipták (editor): Instrument Engineers' Handbook – Process Control and
Optimisation, fourth edition. Taylor & Francis, 2006.
[23] Guidelines for Follow-up of Safety Instrumented Systems (SIS) in the Operating Phase.
SINTEF report A8788, rev. 01, 01.12.2008 (Web:
http://www.sintef.no/project/PDS/Reports/PDS%20Report-SIS follow up guideline final v01.pdf).
[24] Hauge, S., Lundteigen, M.A. and Rausand, M.: "Updating failure rates and test intervals in
the operational phase: A practical implementation of IEC 61508 and IEC 61511". ESREL,
September 2009, Prague, Czech Republic.