
Copyright © IFAC Industrial Process Control Systems, Bruges, Belgium, 1988

AVAILABILITY, RELIABILITY OR REDUNDANCY ... RELIEVING THE CONFUSION

K. M. Renner
Industrial Systems Group Marketing, Measurex Corporation, One Results Way, Cupertino, CA 95014, USA

Abstract. Reliability and redundancy are two terms used frequently in discussions of the ability of an industrial process control system to withstand module or system failure. The loose nature with which these terms are used often clouds the real issue of availability and creates an environment where expensive decisions are made, in error, based on technical data of questionable validity. This article will differentiate availability from reliability and redundancy, emphasize the importance of the failure definition, and outline a methodology for industrial process control system failure evaluation.

Keywords. Reliability; Availability; Process Control Systems; Failure Analysis; Failure Definition.

INTRODUCTION

A process control system must perform its duties consistently, as designed, in order to maximize the benefit to be achieved from the investment in its implementation. The advent, and increasingly wide use, of microprocessor based control systems, which remove the plant operator further from his process, has led to the growing requirement for the control system to be "solid." That is, it has inherent characteristics which preserve total system availability and minimize the jeopardy in which a plant is placed should a control system failure occur. The process industry's increased reliance on automation has consequently increased its vulnerability to control system failure. It is probably safe to assume that the importance of availability developed as an outgrowth of the space, nuclear, and military needs for "zero" downtime in applications which were microprocessor controlled. While a process plant is not quite a space shuttle, the realization that extremely reliable systems can be developed has meant that the "solidity factor" has become a key criterion in the evaluation of a control system's relative merits. This factor typically manifests itself in evaluation of the "abilities" - availability, reliability, maintainability and useability. (ref.1)

However, to communicate this to prospective vendors, control system functional specifications will often contain statements relating only to system redundancy, e.g.

o "All major control system components shall be made redundant to avoid any single failure affecting plant operation or control."
o "The control system modules shall be appropriately backed up to ensure a verifiable up-time of 99.5% minimum."

While indirectly addressing the system failure issue, such statements are a means by which plant personnel feel vendors can be made to satisfy operational requirements - even though the associated expense may not be justified, nor redundant modules the best solution.

The real issue is availability rather than reliability or redundancy. In fact, selective redundancy is just a tool used, in certain applications, to enhance the reliability of the total module or system. However, enhanced reliability itself does not necessarily mean enhanced system availability. The impact of failure diagnosis and repair must also be considered. It can be shown that adding inappropriate redundancy will not only have little positive effect on system availability, but can actually decrease up-time - often at significant capital expenditure.

The key to adequately addressing the control system availability issue is correct up-front definition and planning (ref.2,3) which will allow the vendor and client to jointly evaluate redundancy, and other tools, as a means of economically meeting the required control system security and integrity.

No analysis of control system availability can be performed until modes of failure are defined and simplifying assumptions are documented. It is critical to understand, from a process control perspective, the acceptable failure conditions so that the control system can be designed to economically meet the resulting criteria. As an example, a failure definition may take the following form:

o No single point of control system failure should inhibit the ability for the plant to be shut down safely.


o Unscheduled plant shutdowns due to a control system failure should be limited to no more than two per year.
o A plant shutdown is defined as loss of feed to the plant and product to storage, with associated trips of equipment requiring greater than two hours to return to steady state operation.
o No single control system failure should prevent the operating personnel from accessing and evaluating plant operational status.
o Any control system failure affecting plant operation should be able to be analyzed and remedied within two hours.
o Current and historical plant data should not be lost upon any control system failure.
o The concept of double jeopardy should be considered in the failure analysis where personnel or environmental safety is at risk.

Such a failure definition set will allow determination of the failure modes to be used in a control system availability analysis. Note that it is important to document the qualifying assumptions that are made in deriving the definition set. By looking at a generic control system functional flow architecture (Fig. 1) it is seen that the serial method by which module functions are organized places a reliance on minimizing single points of failure. The loss of any link or fundamental element in the control hierarchy will adversely affect system up-time. Any single element that can fail and precipitate a plant failure, as defined, is called a "single point of failure." The reliability and predictability of such elements will have a marked impact on achieving high availability.

Few situations in the manufacturing industry require absolutely no single point of failure. An extreme case would lead to dual final elements, transducers, terminations, etc. While this is possible, single failure points may still exist (e.g. switches from on-line to back-up), and for most applications this is not justified.

The key issue here is: how do I minimize failure risk by increasing the reliability of the weakest links, thereby increasing overall system availability?

To identify which links affect the plant operation, the prime points to consider include:

o What is defined as a failure?
o Can the plant be partitioned control-wise to minimize failure impact?
o How quickly and conveniently can failures be diagnosed and remedied?
o What optional redundancy or backup control system facilities are available should they be deemed necessary?
o Is modular on-line replacement possible?
o Are there critical processes or periods which require enhanced control system reliability?
o What level of self diagnostics is available on the control system?
o Can the process be controlled manually in the event of a control system failure?

By addressing such questions from hardware, software and operational perspectives, a realistic failure definition is completed, which will provide the basis for the availability analysis.


HARDWARE ANALYSIS

The true control system hardware reliability analysis is based upon the statistical fundamentals whereby the probability of a control system operating without failure, for a specified period of time, is defined as its "reliability." Or, more succinctly, "reliability is the probability of survival."

Three concepts help to relate reliability and failure:

o The probability of a given system failure occurring is a combination of the individual module or component failure probabilities.
o The failure rate (number of failures per unit time) typically follows the "bathtub curve," i.e. three distinct failure zones exist during a product's life which are distinguished by failure frequency - the infant mortality zone, the constant zone, and the wear-out zone.
o A microprocessor based module, or system, spends the majority of its life in the constant zone.

Therefore, the probability of survival (reliability), for a specific period of time, is related to the failure rate by an exponential expression: (ref.4)

R = e^(-λt)                                    (1)

where: R = reliability
       λ = failure rate (failures/time)
       t = specific time period

and, as the failure rate is the inverse of the mean time between failures (MTBF):

MTBF = 1/λ                                     (2)
and R = e^(-t/MTBF)                            (3)

This expression exemplifies the limited pragmatic use of the term "reliability".

I.e. the one year reliability of a system with a MTBF of 5 years is 82%. Increasing the MTBF to 10 years gives a reliability of 90%. In fact, to increase the reliability to 99%, a MTBF of 100 years is required!

Significantly more practical sense is obtained by analyzing the system availability, i.e. the total time a system is in use, and idle but capable of being used. Availability and % uptime are synonymous. Therefore:

Availability = System uptime / (System uptime + System downtime)

or: A = MTBF / (MTBF + MTTR)                   (4)

where: A = availability
       MTBF = the statistical mean time between failures (time)
       MTTR = Mean Time to Repair - the time required to return the system to operation (time)

and, as MTBF = 1/λ, this implies:

A = 1/(1 + λ x MTTR)                           (5)
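As a quick check on equations (1) through (5), the short Python sketch below reproduces the reliability figures quoted above and shows how availability depends on repair time as well as MTBF. The function names, and the conversion of a 5-year MTBF into hours, are illustrative assumptions rather than part of the original analysis.

```python
import math

def reliability(mtbf_years: float, t_years: float = 1.0) -> float:
    """Probability of surviving t_years without failure, eq. (3): R = exp(-t/MTBF)."""
    return math.exp(-t_years / mtbf_years)

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability, eq. (4): A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# One-year reliability for the MTBF values quoted in the text.
for mtbf in (5, 10, 100):
    print(f"MTBF = {mtbf:>3} years -> 1-year reliability = {reliability(mtbf):.0%}")
# MTBF =   5 years -> 1-year reliability = 82%
# MTBF =  10 years -> 1-year reliability = 90%
# MTBF = 100 years -> 1-year reliability = 99%

# Availability is governed by repair time as well as by failure rate, eq. (5).
mtbf_h = 5 * 8760          # a 5-year MTBF expressed in hours (assumed 8760 h/year)
for mttr_h in (1, 4, 8):
    print(f"MTTR = {mttr_h} h -> availability = {availability(mtbf_h, mttr_h):.5%}")
```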

This indicates that, with realistic values of system failure rate and mean time to repair, an approximate system availability can be calculated.

The failure rate of the system or module is calculated simply by adding together individual component failure rates. This approach is based on the conservative assumption that all components are equally critical to the function of the module, and produces a pessimistic module failure rate. The effect of module failure on the success of the system is then analyzed and a success logic diagram is produced for the complete system.

Algebraically:

λm = Σ (λc x λc')                              (6)

where: λm = module failure rate (10^-6 failures/hour)
       λc = component failure rate
       λc' = redundant component failure rate

This procedure is best illustrated using an example. Figures 2 and 3 represent two simple control system configurations (System A and System B). The difference between A and B is the addition of a redundant central processor for control module #1 and a redundant communications network.

The following assumptions are made:

o The failure definition yields the following failure requirements:
  - Loss of the communications network
  - Loss of control module #1
  - Loss of operator consoles #1 and #2
o I/O card/termination loss:
  - 2 analog output cards (16 points)
  - 1 digital output card (16 points)
  - 1 digital input card (16 points)
  - 1 analog input card (64 points)

Hence the system failure rates for A and B can be calculated:

λA = λcommlink + λcontrol#1 + (λopcon#1 x λopcon#2)

λB = (λcommlink x λcommlink') + (λcontrol#1 x λcontrol#1') + (λopcon#1 x λopcon#2)

Calculation of individual component failure rates requires failure rate data for board level modules. Data is available either from the manufacturer or from fundamental models provided by references such as the U.S. Military Handbook MIL-HDBK-217D (ref.5). Such models result in failure rate predictions for all electronic components while allowing for differences in environment, temperature, quality factor, stress and complexity. Utilizing this data, failure rates are calculated first for board level components (e.g. CPU board, modem board, serial I/O board), then for unit level components (e.g. uninterruptable power supply, microcomputer unit, I/O multiplexer unit, operator console unit). Once this data is available, it is a relatively simple procedure to combine failure rates in a manner reflecting the failure definition and system architecture. Table 1 summarizes the calculation for Systems A and B using representative failure rate data.

Note:
o I/O card failure rates are included in the failure rate of the control module.
o Redundant units include the failure rate of the switchover mechanism (assumes hot backups).

Once the system failure rate is calculated, all that is needed is the Mean Time to Repair (MTTR) in order to estimate system hardware availability.

MTTR is a measure of the time taken to recognize a failure, diagnose, analyze, and replace/repair to bring the system (and hence the plant) back to the status it was in prior to failure. Here, the importance of the failure definition is again apparent. The MTTR of the control system alone may be just the time to diagnose and replace a single board or component. Parts may be on site; at a local service center; on guaranteed 12-hour delivery; or even installed as warm spares. System maintainability will enhance diagnosis and repair, while parts delivery is dependent upon location. Naturally, the most critical parts, with higher failure rates, would be expected to be easily and quickly accessible. Of more importance, however, is the impact the control system failure has on the operating plant. If the failure, as defined, leads to a loss of production for 8 hours while the plant is returned to steady state, then the true MTTR as a result of the control system failure must include this. It becomes apparent that a trade-off develops between failure definition, failure rate, MTTR and maintainability. The more easily and rapidly a control system can be repaired, and the more quickly the impact on the plant is resolved, the less emphasis needs to be placed on reduction of the failure rate to achieve comparable availability.

In our example, let us assume a MTTR of 4 hours for the failures identified.

Then: AA = 1/(1 + λA x MTTR)
and:  AB = 1/(1 + λB x MTTR)

The addition of the redundant communication links and primary processor has increased the system availability to essentially 100%.

If we assume, however, that the control and I/O for the plant could be distributed between control module #1 and control module #2 in System A, such that failure of both control modules would be defined as a failure, then the availability of System A could also be increased to essentially 100%, at minimal, if any, additional expense. The redundant control module #1 processor would not be necessary.

Further, if the MTTR were reduced to 1 hour from 4 hours, a similar increase in System A availability results.

While the hardware analysis can be relatively objective within the limits of the failure definition, the failure rate accuracy and the assumptions made, it will be seen that qualitative and subjective components must be factored into the total availability "equation". What this analysis shows is that the relationships between availability, reliability, MTBF and MTTR should be fully understood before any control system "solidity" specification is attempted.
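To make the example concrete, the following Python sketch evaluates equation (5) for Systems A and B using the representative total failure rates from Table 1 and the repair times discussed above. The printed figures depend entirely on those assumed rates; the variable names are illustrative only.

```python
def availability(failure_rate_per_hour: float, mttr_hours: float) -> float:
    """Eq. (5): A = 1 / (1 + lambda * MTTR)."""
    return 1.0 / (1.0 + failure_rate_per_hour * mttr_hours)

# Representative total system failure rates from Table 1 (failures/hour).
lambda_a = 116.45e-6   # System A: single comm link and control module #1
lambda_b = 0.0157e-6   # System B: redundant comm link and control module #1

for mttr in (4.0, 1.0):
    a_a = availability(lambda_a, mttr)
    a_b = availability(lambda_b, mttr)
    print(f"MTTR = {mttr:.0f} h:  A(System A) = {a_a:.4%}   A(System B) = {a_b:.6%}")

# With a 4 h MTTR, System A is about 99.95% available and System B essentially 100%;
# cutting the MTTR to 1 h brings System A to roughly 99.99%, illustrating that faster
# repair can buy much of what redundancy buys.
```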

THE SOFTWARE ANALYSIS

All major microprocessor based process control systems have varying levels of software that is required for successful operation. The reliability of such software is an important issue in determining control system availability; however, its true impact is difficult to quantify and often underestimated.

Many articles have defined methods for enhancing software reliability, with much emphasis placed on adherence to rigorous structured programming standards, correct planning, detailed testing and substantial documentation. However, much of the performance data has been derived from computer applications in data processing, office automation and transactional processing, which are not totally indicative of process control system operation. Quality measurement has been based on an objective to produce "software without defects that would cause the system to stop completely or to produce unacceptable results" (ref.6). While these fundamentals are still applicable to process control systems, importance should be placed on the interaction of software modules, at various levels, within the real time plant operational environment and their impact on plant operation.

Generally, process control system software can be divided into four hierarchical levels, as shown in Figure 4.

o Microcode is the basic microprocessor chip level machine instruction set (e.g. Intel 8086).
o Operating System is the CPU system software (e.g. DEC VMS).
o Real Time Control Code is the real time system software which facilitates on-line process control and sharing of system resources.
o Applications Code is user (or vendor) written software specific to the process control application.

As one progresses from level I through level IV the following applies:

o Software operational history decreases
o Number of applications decreases
o Testing time decreases
o Frequency of changes and modifications increases
o Number of unfound "bugs" increases

This implies that overall software reliability decreases during the progression, to where the applications code is the most vulnerable. This is important in the plant control environment because it is the real time control and application software which is most accessible to the user and most critical to the plant operation. "Software which works is like a car which works ... it is best left alone." However, differing applications and typical plant changes require that modifications be made, most frequently at levels III and IV. Subjectively, changes may hardly ever be made to proven microcode, while changes may be made yearly to the operating system, monthly to the control system source code and even daily to the applications code. The ease and simplicity with which such changes can be made by a user (especially at the applications level) will play a large role in control system availability. The ability to test fully is often not appropriate in an on-line control system, hence quality must be designed-in and built-in to the software.

Of equal importance is the possibility of changes made in one level of software affecting the reliability of another level. An extreme case would be software "fixes" made in one level, as a result of a "bug" in another level, which is later updated with a software "patch" that contaminates the original "fix". These issues are not trivial and require implementation of strict software development, operation and change standards. In fact, problems such as these are much of the driving force behind FDA computer control system validation requirements in the pharmaceutical industry. Not only must the software impact be evaluated, but also the impact on plant operation and product quality. Requirements like these separate the on-line control system from the mainframe data processing computer.

Generally, in redundant processors, copies of the same software will be running; hence, if a software fault occurs in the on-line processor, there is a good chance the same fault will occur upon switchover to the backup. This places emphasis on the requirement, for both software and hardware, that redundant systems be adequately exercised to ensure their integrity.

The complexity of the software reliability issue, and the factors influencing it, result in a detailed, objective software availability analysis being practically impossible to perform. Any results would be arbitrary and assume wide limits of error. In the process control system application, software reliability is typically addressed indirectly by applying rigid operation, documentation and security standards. In applications where substantial software development is a necessity, a dedicated semi-offline development and testing system is often used, thereby reducing a major source of on-line control system failure potential.

The crucial analysis, however, is the relationship between software failure and the overall control system failure definition. It is often difficult, early in a control system project, to predict the software that will be running and what it will be doing - especially at the applications code level. The software that is predictable and can be analyzed (i.e. levels I, II, and III) should be evaluated for:

o Operational history
o Documentation
o Quality testing procedures
o Failure modes
o Redundancy opportunities

As application specific software is developed it should be rigorously tested and documented, if possible, prior to on-line operation. Features such as a "Dry Run" mode enhance the probability of removing "bugs" without jeopardizing plant operation. Every effort should be made to produce software modules which, once proven, can be used in repeatable applications.

Reliability and redundancy, as applied to software, are therefore terms which require fundamental analysis; however, the nature of software and the tools available determine that any analysis is largely subjective. Practical improvements to software reliability are limited to the economic and organizational effort required to create and enforce detailed development, testing and operation standards. The susceptibility of the control system to software failure is best evaluated on this basis.

CONCLUSIONS

Development of a process control system specification by the industrial user is often anticipated with fear. By understanding the key operational factors associated with a control system for his plant, much of the apprehension can be eliminated. In this respect, control system availability requirements must be defined with an appreciation of the relative impacts of reliability (both hardware and software) and maintainability.

Prior to any analysis of availability, a failure definition must be resolved amongst plant engineering and operating personnel. This definition forms the basis for evaluating the control system architecture for maximum process uptime (Figure 5).

The addition of redundant components is a common approach taken to enhancing control system availability. While generally successful, careful evaluation of the economic and operational justification should precede such a decision. Limitations which can often detract from backup systems (both hot and warm) should be recognized. Any redundant component should be periodically monitored for health while in backup mode. In practice, most redundant systems are subject to common mode failures which limit the potential availability. Such failures are not necessarily hardware faults, but can be related to the operating environment, software problems or maintenance procedures.

By relieving the confusion associated with the terms availability, reliability and redundancy, and stressing the need for a thorough failure definition, the foundation exists for joint user/vendor development of process control system architectures to meet specific plant requirements for security, integrity and safety.


REFERENCES

1. Quality Control Handbook. (1951). The Abilities. McGraw-Hill.
2. Renner, Kevyn M. (1985). Development of an Automation Plan for a Pharmaceutical Manufacturing Plant. Pharmaceutical Technology.
3. Renner, Kevyn M. (1987). Management Strategy for Planning Plant Automation. Food Engineering.
4. Hamilton, Paul R. (1987). Reliability, Availability, and Fault Tolerance.
5. Department of Defense Military Handbook, MIL-HDBK-217D. (1982). Reliability Prediction of Electronic Equipment.
6. Jones, T. Capers. (1986). Programming Productivity. McGraw-Hill.

Table 1. System Failure Rate Example

Module        λ        System A       System B
OpCon #1      24.0
OpCon #2      24.0     576 x 10^-6    576 x 10^-6
CommLink      0.25     0.25           6.25 x 10^-8
CommLink'     0.25
Control #1    116.2    116.2          1.51 x 10^-2
Control #1'   130.0

Total system hardware
failure rates:         116.45         0.0157

λ = failure rate (10^-6 failures/hour).
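The table totals can be reproduced from the individual module rates using the failure definition and the simplified series/parallel combination of equation (6): series contributions add, while a redundant pair contributes the product of its two rates. A minimal Python sketch of that bookkeeping follows, with illustrative variable names and the representative rates taken from the table.

```python
# Representative module failure rates from Table 1, in failures/hour.
opcon_1    = 24.0e-6    # operator console #1
opcon_2    = 24.0e-6    # operator console #2
commlink   = 0.25e-6    # communications network
commlink_r = 0.25e-6    # redundant communications network (System B only)
control_1  = 116.2e-6   # control module #1
control_1r = 130.0e-6   # redundant control module #1 processor (System B only)

# System A: the single comm link and control processor are series elements and add;
# the two operator consoles back each other up, so the pair contributes a product
# (the simplified treatment of eq. (6) used in the paper).
lambda_a = commlink + control_1 + (opcon_1 * opcon_2)

# System B: the comm link and control module #1 are also redundant pairs.
lambda_b = (commlink * commlink_r) + (control_1 * control_1r) + (opcon_1 * opcon_2)

print(f"System A: {lambda_a / 1e-6:.2f} x 10^-6 failures/hour")   # ~116.45
print(f"System B: {lambda_b / 1e-6:.4f} x 10^-6 failures/hour")   # ~0.0157
```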



[Figure 1 diagram: Operator Interface, Communications Network, Supervisory Processor, Regulatory Processor, I/O Terminations, I/P Converters/Transmitters, Final Control Elements.]

Fig. 1. CONTROL SYSTEM FUNCTIONAL HIERARCHY



[Figure 2 diagram: Operator Console #1, Operator Console #2, Control Module #1, Control Module #2, Field I/O.]

Fig. 2. System A Architecture

[Figure 3 diagram: as System A, with a redundant communications network and a redundant Control Module #1 processor.]

Fig. 3. System B Architecture



Fig. 5. Evaluating Control System Availability
