
Worst Case Scenarios never happen, do they?

The challenge of accounting for ‘atypical’ scenarios – the work of the EPSC ‘Scenarios Group’.

Richard Gowland – Technical Director EPSC.


• When we identify a hazardous scenario we:
– Assess the severity scale
– Find the initiators
– Identify the barriers and mitigation
– Manage with a degree of success

• When we fail to identify a scenario
– Bad things can happen!


Rare major accidents which change the world of Process Safety
• Events which we did not predict
– The event itself
– The phenomenon or ultimate consequence which we ‘got wrong’

• Because:
– It had never happened before
– It happened before but we did not know
– It happened before and we decided that we were well protected

• Sometimes called ‘black swan’ events


Atypical Scenarios and NATECH Hazards
• Concept of ‘atypical scenarios’ – an event which cannot always be captured by standard risk analysis processes and common Hazard Identification (HAZID) techniques
• Includes worst case scenarios and scenarios initiated or affected by natural phenomena (NATECH events)

Stating the problem
• SRA report from N. Paltrinieri – “Conclusions: … the hazard factors which led to the major accidents of Toulouse (2001) and Buncefield (2005) could have been identified by experts and inspectors if early warnings had been heard and considered in the HAZID process.”
• This is not just about typical incidents caused by process control or human error – there is no reason to forget natural events as initiators
• Concept of true ‘atypical scenarios’ – an event which cannot always be captured by standard risk analysis processes and common Hazard Identification (HAZID) techniques

Atypical Scenarios
• Events we did not imagine
• Events where we underestimated the consequence
– Leading to using inadequate or too few independent layers of protection
• Events where we considered the initiators to be much lower in frequency than the ‘real world’
• Events where the initiating event disabled all the protection – the common cause failure was the initiating event itself
• Events where all protection failed
– Managing the protection

Maintaining a ‘sense of vulnerability’ – ‘Uncertainty’ – Consequence Analysis / Credible Scenarios
• Hazard and Risk Management can be:
– Rule based
– Risk based
– A mixture

We have different methods of identifying hazards – ‘HAZID’
• HAZOP
• What If?
• FMEA
• Checklists
– Proactive (what might go wrong?)
– Reactive (applying learning from incidents)
We choose one or more to help us find hazards.

Taking one of the favourites as an example – HAZOP of a ‘node’…
• Starts with describing the parameters for normal operation (flow, temperature, pressure, phase, level etc.)
• Lists possible deviations from normal parameters
• Lists possible causes
• Lists consequences
• Lists ‘safeguards’ (may be prevention or mitigation)
• Requires a decision – tolerate or make a change?
Questions:
• Are the hazards identified dominated by ‘credible’ events?
• Do multiple failures or NATECH issues get covered?

Consequences
• Major accident history seems to tell us that:
– We may be able to predict the ‘deviations’ but we missed or underestimated the consequence
– We missed some crucial initiators
– Or other ‘assumptions’ or ‘guarantees’ were not correct
• Phenomenon chosen was wrong? (Buncefield)
• Event not seen as credible? (Texas City)
• Protection systems not available (Buncefield, Bhopal)
• Protection systems not adequate for the event (Fukushima)

The Bow Tie is becoming an area for study.

Can we take advice?

                        Knowledge
                    Known            Unknown
Awareness  Known    Known Known      Known Unknown
           Unknown  Unknown Known    Unknown Unknown

“Known/unknown” table from the statement of Donald Rumsfeld relating to the absence of evidence linking the government of Iraq with the supply of weapons of mass destruction to terrorist groups.

Fitting our tools to this matrix
• Known – Known – Design standards, checklists etc.
• Known – Unknown – HAZOP and other techniques
• Unknown – Known – Ensuring known events are not forgotten or consigned to the ‘they should have known better / it cannot happen here’ box
• Unknown – Unknown – What if? What else? If found, consigned to the ‘incredible / too difficult’ box

Known Knowns
• We know what can go wrong and we plan to prevent it
• Relies on:
– ‘Corporate memory’
– Company or industry standards
– Deterministic legislation

Known Unknowns
• Something which our procedures tell us to consider
• We attempt to use HAZOP or other structured brainstorming techniques to predict

Unknown Knowns
• Things which some of us know but not everyone
• Things we have forgotten
• Failures in corporate memory – e.g. barriers which were placed in software, but 10 years later a programmer did not understand their purpose and ‘cleaned up’ the operating program, removing the protection

‘Unknown Unknowns’
• The things we don’t know that we don’t know
• Candidates to examine:
– Buncefield
– Texas City
– Fukushima
• From the viewpoints:
– The event phenomenon (fire/explosion) which was discounted
– The initiators we think cannot be predicted
• Unusual weather
• Tsunami
• But – are they really ‘unknown unknowns’? Or is this just an excuse?
• After all, the native Australians were well aware of the existence of black swans…

What can we do?
• Enforce the protection for ‘known knowns’
• Find a way to move ‘known unknowns’ (atypical scenarios) into the ‘known known’ field
– At least we need to add ‘atypical’ scenarios to our risk studies
• Deal with the ‘learnings’ and corporate memory to convert ‘unknown knowns’ into ‘known knowns’
• We are left with ‘unknown unknowns’ – where we may be vulnerable and perhaps ill-equipped

Unknown Unknowns?
• Is it too easy to say that an event really was an ‘unknown unknown’?
– And as a result we can be excused for omitting it from our study or failing to protect adequately?
• Take Buncefield: large leaks of gasoline leading to a Vapour Cloud Explosion had occurred at least 7 times during the last 50 years:
– Houston, TX 1962
– Baytown, TX 1977
– Newark, NJ 1983
– Napoli (It) 1985
– St Herblain (Fr)
– Jacksonville, FL 1993
– Laem Chabang (Thai) 1999
• Note that the US EPA RMP REQUIRES that a VCE is modelled for ALL flammables (F.P. determines)
• Conclusion: this was not an ‘unknown unknown’ – even if detonation had not occurred there would have been a deflagration, which is bad enough…
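The incident history on this slide can be turned into a crude base rate. A minimal sketch: the 7 events and 50-year window come from the slide, but the number of comparable storage sites worldwide is a hypothetical assumption added here purely for illustration.

```python
# Rough use of the slide's incident history: 7 large gasoline-storage VCEs
# worldwide in ~50 years gives a base rate. The per-site figure assumes a
# HYPOTHETICAL count of comparable storage sites, purely for illustration.

incidents = 7
years = 50
global_rate = incidents / years              # events per year, worldwide
print(f"Global rate: {global_rate:.2f} per year")

assumed_sites = 5_000                        # hypothetical number of similar depots
per_site_rate = global_rate / assumed_sites
print(f"Per-site rate: ~{per_site_rate:.0e} per site-year")
# Rare for any one depot, but far from 'incredible' across an industry's
# operating lifetime - which is why the event was not an 'unknown unknown'.
```

The point of the arithmetic is qualitative: a non-zero worldwide rate means the scenario was knowable, whatever the per-site figure turns out to be.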


Unknown Unknowns?
• Is it too easy to say that an event really was an ‘unknown unknown’?
– And as a result we can be excused for omitting it from our study or failing to protect adequately?
• Take Texas City:
– 80 significant hydrocarbon releases in 2 years
– AMOCO safety regulations stated that ‘new blowdown stacks which vent hydrocarbons to the atmosphere are not permitted’
– Formally stated plans to replace the atmospheric blowdown stack had been in place for 10 years but were continually deferred
• Conclusion:
– This was not an ‘unknown unknown’
– A quick and dirty LOPA study would reveal major gaps in protection and common cause failure problems
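A ‘quick and dirty’ LOPA of the kind the slide mentions can be sketched in a few lines. The initiating-event rate is derived from the slide’s 80 releases in 2 years; the protection-layer PFDs and the tolerable target are hypothetical placeholders, not the actual Texas City figures.

```python
# Quick-and-dirty LOPA sketch: multiply the initiating event frequency by the
# probability of failure on demand (PFD) of each layer and compare with a
# tolerable target. All PFDs and the target are ASSUMED for illustration.

initiating_event_per_year = 40.0   # ~80 significant releases in 2 years (slide)
layers = {
    "Operator response to alarm": 0.1,                  # assumed PFD
    "Atmospheric blowdown stack disperses cloud": 1.0,  # no credit: vents to air
}
tolerable_frequency = 1e-4         # assumed corporate target, per year

mitigated = initiating_event_per_year
for name, pfd in layers.items():
    mitigated *= pfd

gap = mitigated / tolerable_frequency
print(f"Mitigated frequency: {mitigated:.1f}/yr, "
      f"{gap:,.0f}x above the tolerable target")
```

Even with generous assumptions, a layer that provides no real risk reduction (PFD of 1.0, like venting to atmosphere) leaves the gap glaring, which is the slide’s point.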

‘Unknown Unknowns’? Fukushima – was it an ‘unknown unknown’?
• The Indian Ocean Tsunami (12/2004) produced waves up to 15 m high
• There are early warning systems
• Japan’s methodology for assessing tsunami risks was behind international standards
• Studies in the early 2000s had indicated that the hazards of tsunamis had been underestimated, and recommendations were not followed up by TEPCO
• Fukushima protection was designed for a 1960s tsunami close to the Chilean coast (3.1 m); later TEPCO increased the protection for an event of 5.7 m

Fukushima – was it an ‘unknown unknown’?
• IAEA recommended best practice is to take account of and allow for the scale of a possible tsunami event occurring once in 10,000 years
• Historical documents indicated that since 1498 there have been 12 tsunamis with an amplitude of more than 10 metres, and 6 of more than 20 metres – this means there could have been 6 events in 500 years which would have overwhelmed all the control and protection systems at Fukushima
• A simplified cost/benefit analysis for reducing the frequency of catastrophe from 1 per 1,000 years to 1 per 100,000 years would indicate that added protection would be very cost effective
• In the event of a deficiency, there is no enforced requirement to retrofit protection
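The historical record and the cost/benefit claim above can be checked with simple arithmetic. The event counts and the two catastrophe frequencies come from the slide; the loss per catastrophe and the plant lifetime are hypothetical figures introduced only to make the expected-value comparison concrete.

```python
# Back-of-envelope check of the slide's tsunami figures. Event counts and the
# two catastrophe frequencies are from the slide; money and lifetime figures
# are HYPOTHETICAL, added only to illustrate the expected-value comparison.

events_over_20m = 6          # tsunamis > 20 m since 1498 (slide)
observation_years = 500      # slide's round figure for the record length

historical_frequency = events_over_20m / observation_years  # events/year
return_period = 1 / historical_frequency                    # years per event
print(f"Historical frequency: {historical_frequency:.3f}/yr "
      f"(one event every ~{return_period:.0f} years)")

f_before = 1 / 1_000         # catastrophe frequency without added protection
f_after = 1 / 100_000        # frequency with added protection
cost_of_catastrophe = 100e9  # assumed loss per catastrophe, $ (hypothetical)
plant_lifetime_years = 40    # assumed remaining plant life (hypothetical)

avoided = (f_before - f_after) * cost_of_catastrophe * plant_lifetime_years
print(f"Expected loss avoided over plant life: ${avoided/1e9:.1f} bn")
# Any protection measure costing less than this is cost effective on
# expected value - the slide's 'very cost effective' conclusion.
```

Note that the historical return period (roughly one overwhelming event per 80-odd years) is far more frequent than even the unprotected 1-per-1,000-year assumption, which strengthens the slide’s argument.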


Worst Cases
• Are we imaginative enough?
• Do our Hazard Identification tools cover everything we need? What about worst cases?
• Are we sure that the WORST CASE has never happened? (Buncefield)
• Are we sure that the worst initiating event has never happened? (Fukushima)
• Is our thinking dominated by ‘predictable’ initiators?
• When we protect for ‘predictable’ or ‘credible’ scenarios – what is the added cost for protection against the worst cases?

• When we identify a hazardous scenario we:
– Assess the severity scale
– Find the initiators
– Identify the barriers and mitigation
– Manage with a degree of success
• When we fail to identify a scenario
– Bad things can happen!

A personal view: I express the possibility that if we go through a HAZOP, its primary output is about credible events, since it really focuses on a process deviation, one cause, safeguards and outcomes in each line of study.
• Concentrating on this approach might be limiting our imagination about the worst cases.
• Are worst case scenarios dominated by catastrophic failures of containment?
• If a worst case scenario is about catastrophic failure, do I decide that I cannot do anything except minimise the frequency (e.g. by testing and inspecting) and add in mitigation?
• Do I routinely consider worst cases which can occur because of multiple failures of protection systems?
• If a worst case (or close to worst case) can be prevented by safeguards and protective layers, do I actually study this properly?
• Is it possible to imagine the worst case and then work backwards to see what has to happen (alignment of the holes in the ‘Swiss cheese’)?
REVERSE THE HAZOP – start with Worst Case Consequences and work backwards to find what has to be true for the event to occur.
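The ‘reverse HAZOP’ idea above – start from the worst-case consequence and enumerate what must all fail – can be sketched as code. The barrier names and probabilities below are hypothetical illustrations loosely in the spirit of a tank-overfill chain, not data from any of the incidents discussed.

```python
# Minimal sketch of the 'reverse HAZOP' / Swiss-cheese idea: the worst case
# occurs only if the initiator happens AND every barrier fails. Barrier names
# and probabilities are HYPOTHETICAL illustrations.

from dataclasses import dataclass

@dataclass
class Barrier:
    name: str
    pfd: float  # probability of failure on demand (assumed value)

def worst_case_frequency(initiator_per_year: float,
                         barriers: list[Barrier]) -> float:
    """Multiply the initiator frequency by every barrier's PFD.
    Assumes independent barriers - a common-cause failure (e.g. site-wide
    power loss) invalidates this and raises the real frequency."""
    f = initiator_per_year
    for b in barriers:
        f *= b.pfd
    return f

# Hypothetical tank-overfill chain:
barriers = [
    Barrier("Level gauge + operator response", 0.1),
    Barrier("Independent high-level trip", 0.01),
    Barrier("Bund / secondary containment", 0.1),
]
f = worst_case_frequency(initiator_per_year=1.0, barriers=barriers)
print(f"Worst-case frequency: {f:.0e} per year")
```

Working backwards through this list is exactly the reverse-HAZOP question: which of these barriers share a common cause, and what single event could set every PFD to 1?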

Conclusion
• We can do better in predicting and managing atypical and worst case scenarios:
• There are actually very few ‘unknown unknowns’
– Don’t rely too much on HAZOP
– Structure your PHA to list worst cases
– Use the Bow Tie
• Worst Case should not be confined to the emergency plan
– It can be imagined
– What can initiate it?
– What has to fail for it to happen, and are there common causes – a fast track to disaster?
– When we assess the safeguards: how rigorous is the evaluation of effectiveness? (e.g. management failure, human factors)
– Do we address the conditions or events which would disable several safeguards simultaneously? (e.g. flood, power failure, relief and containment systems, multiple alarms leading to a halt in the process control system)
• Protecting against the worst case is not always ‘too difficult’ or too expensive

And
• A cost benefit analysis using historical data:
– Performance
– Failure or initiator frequencies
– Known events and their consequences
for an adequate degree of protection or means of avoidance.
For the three examples (Texas City, Buncefield, Fukushima) this indicates where we need to spend our time, technology and resources.

Thank you