Crisis Management Analysis of The BP Oil Spill

Research · June 2019

Dhaimaan Mahmud
University of Birmingham


Contents

Introduction
Preconditions
Trigger
Crisis
Post-Crisis
Summary
References

Introduction

On 20th April 2010, crew members on the Deepwater Horizon rig (situated in the Gulf of Mexico) began preparing to temporarily abandon a well that had been drilled for extraction. The events that unfolded led to 11 fatalities and had long-lasting implications for the environment. Regarded as the largest oil spill in US history, the BP Oil Spill has been a major point of interest for academics, environmentalists and prominent stakeholders in the oil and gas industry (Mejri and De Wolf, 2013). This paper provides a detailed analysis of how and why the crisis surfaced. Insights are also offered on the remediation strategies employed by BP, and an opinion is given on whether the company has learned from this disaster.

To aid analysis, Figure 1 outlines the relevant theories and models utilised in this paper:

Figure 1: Relevant Theories

The Reason Model
Relevant to several sections within this paper. Reason (1990) developed a logical model that examines 'human error' predicated on two factors: person error and system error. Person error reflects the combination of unsafe actions, errors and regulatory violations undertaken by frontline staff (the crew members of the oil rig, in this example). System error is attributed to the mistakes of high-level management, system designers and organisations as a whole. These errors are categorised in four ways: (1) organisational influences; (2) unsafe supervision; (3) preconditions for unsafe acts; and (4) active failures.

The Normal Accident Theory (NAT)
This theory can be applied to the failure of complex systems within the BP oil rig, which are analysed as crisis preconditions. Perrow (1984) developed the theory through an empirical study of crises such as Three Mile Island and multiple aircraft crashes. Perrow (1984) unequivocally states that accidents in complex systems are "normal". Within his study, he illustrates that accidents in such systems are not only unexpected but also "incomprehensible".

Incubation Period
It is widely understood by academics of crisis management that disasters happen because of incremental events over a period of time. Turner (1978) defines this concept as the 'Incubation Period': a period consisting of risks which are known but neglected, thus leading to a disaster. The relationship between preconditions and the incubation period is discussed within this paper.

Critical Period
This concept is discussed within the 'Trigger' section of this paper. Stein (2004) holds that the critical period immediately follows the incubation period (Turner, 1978). Specifically, the critical period begins at the point of trigger and ends when the crisis occurs. This paper provides important insights into this particular moment of the BP crisis.

Theory E and Theory O
This model is utilised in the Post-Crisis section. Beer and Nohria (2000) developed two organisational change approaches: Theory E (the hard approach) and Theory O (the soft approach). Theory E predominantly focuses on the heavy use of economic incentives, restructuring, and the need to maximise shareholder value. Theory O focuses more on the organisation's capabilities, and how best to develop them. This paper examines to what extent these approaches were adopted in the aftermath of the crisis.

Preconditions

By examining organisational influences (a core component of the Reason Model), one can see that the Macondo Well project became overly complex in terms of structure. This undoubtedly contributed to BP being ill-equipped to deal with a potential crisis. The project was stratified into different groups with several reporting lines. The extra layers of bureaucracy meant that communications became convoluted, and this is believed to have contributed to incidents of misinformation. For instance, BP's engineering team failed to share all risk information with their colleagues, employees and contractors (National Commission, 2010). This supports Turner's (1978) incubation period concept, as BP management knew of the risks but failed to communicate them to the relevant parties. In addition, team leaders were frequently in conflict, acting in their own self-interest and increasing the risk profile of the project as information was potentially withheld.

BP also made poor recruitment decisions, appointing inexperienced managers who were incapable of making crucial decisions, which further increased the risk profile of the project (National Commission, 2010).

When examining the Reason Model's facet of 'unsafe supervision', we see several situations where the correct protocol was not followed by BP across all levels of the hierarchy. The accumulation of unsafe supervisory action resulted in risk levels increasing substantially. Not only were risks increasing, but they were also incrementally becoming more aggressive in nature. For instance, one of the first acts of unsafe supervision occurred when BP neglected its responsibility to ensure safety protocols were carried out after the completion of the Macondo Well (National Commission, 2011). This was a major mistake on BP's part, as those protocols may have identified the issues present with the cementing of the well. Had these issues been identified sooner, the likelihood of the crisis happening would potentially have been slim. In addition, there was very little supervision during and after works were carried out. This can be attributed to the aforementioned organisational restructuring, which created much confusion regarding who was accountable for the assurance of safety.

Moving on to the 'preconditions for unsafe acts' component, we see a number of latent errors which widened the scope for unsafe acts to be made. One of these was the frequent 'last minute' decisions made by BP, such as changing the system design to use lower-quality mechanical components, changing procedures multiple times, and waiting too long to carry out crucial tests (National Commission, 2010). This illustrates a lack of proper training, specifically in safety and compliance, raising the question of what BP's motives were for these decisions. What BP failed to understand is that one crucial 'last minute' change would most likely result in further 'last minute' changes, causing delays throughout the project. Essentially, because of BP's decisions to cut costs at the expense of staff training and to use lower-quality components, crucial signs were missed as safety checks were either overlooked or misread.

This leads us on to arguably one of the more crucial preconditions, which set up the unsafe actions that follow in the Trigger section: the misinterpretation of pressure tests. The National Commission (2011) report suggests that there was considerable confusion about the pressure tests and what they meant. As the crew members misinterpreted the readings, they were under the false notion that everything was going well. This illustrates the lack of knowledge management within BP, whereby it failed to properly instil a diffusion of knowledge within the organisation. It is very possible that, had BP trained staff and facilitated an open knowledge network within the organisation, crew members would have correctly interpreted the readings. They could then have taken the necessary precautionary measures whilst having sufficient time to exercise safety protocols.

Utilising Perrow's (1984) NAT, specifically the idea that two or more failures can interact to produce unexpected outcomes, we see that even if BP had provided crew members with the correct training and knowledge, a crisis was still possible owing to the many errors present in BP's infrastructure. Specifically, the BP (2010) investigation report outlines several issues within the safety measures, chief among them the failure of a valve in the blowout preventer (BOP). This valve was the most crucial safety mechanism, yet it failed owing to a flat battery and a faulty switch. Even had crew members responded adequately, the crisis would unfortunately still have occurred. The US Chemical Safety Board (2014) also states that BP took many shortcuts which caused this disaster. BP (2010) further provides evidence that the BOP's maintenance was not upheld, citing a "lack of robust Transocean maintenance management system". We can deduce from this that BP had a serious systemic safety issue, whereby safeguarding maintenance was not upheld and lacked basic due diligence.

Trigger

Whilst the incubation period is a key moment in the timeline of this crisis, what is important to understand is the concept of the critical period as defined by Stein (2004). He defines this as the brief period of time in which a crisis unfolds after the triggering event. The distinction between the incubation period and the critical period must be addressed, as there is a significant difference between them. The Bhopal disaster provides a suitable example of this distinction: a precondition in its incubation period was poor safety management (Tachakra, 1986), whereas its critical period began when the gas leak occurred. Stein (2004) stipulates that ignoring information during the incubation period may not necessarily lead to crisis, whereas ignoring information during the critical period always does.

Utilising this method of analysis, we see that the critical period of this crisis began when crew members operating a mechanical separator made the poor decision to divert on-coming mud into the separator instead of away from the rig (National Commission, 2010). Shortly before this decision was made, large quantities of mud and gas were already seeping onto the rig floor, to the detriment of the separator. Ignoring this crucial information, the crew diverted into the separator in order to preserve gas. This led to the separator becoming overwhelmed, and the rig quickly became engulfed in flammable gas. Had crew members acknowledged the volume of the mud-gas mixture, it can be deduced through common sense that the better option would have been to divert it away.

The Reason Model is again applicable, as the above situation can be categorised as an 'unsafe act'. In support of this, BP's (2010) internal investigation team also reported that had the crew members diverted the mud away from the rig in the first instance, the majority of the gas would have been safely vented without placing stress on the separator. One has to question, again, the motivations of BP and its staff as to why such poor decisions were made.

Crisis

Based on the precondition, incubation and critical period events, one can understand how and why this crisis happened, particularly when factoring in the length of time the preconditions were left unchecked. This was a period of approximately six months (National Commission, 2011), from November 2009 (when Hurricane Ida caused significant damage to the rig) until April 2010 (when the crisis happened).

As Reason (1990) suggests, the accumulation of all categories of error eventually leads to an accident or crisis. Thus, six months of unchecked errors was sufficient time for the preconditions to incubate and then manifest into a critical period. At this point, the crisis was inevitable because the standard safety protocols had been breached: the separator had failed and flammable gas had reached uncontrollable levels.

The crisis escalated even further because of poor planning and mishandling of the subsequent steps that unfolded. For instance, BP's use of chemical dispersants to break down the oil into droplets created much controversy: the National Academy of Sciences criticised the use of these dispersants, citing a proliferation of damage to the environment (Offshore Technology, 2017).

In addition, the decision to rely on the Emergency Disconnect System (EDS) was a sizeable misstep, as the system was inoperable. This contributed to explosions on the rig, which eventually capsized, spilling an estimated 3.2 million barrels of oil into the sea. The environmental, social and economic impacts of this disaster are well documented, and the implications long-lasting: a $2.5bn cost to the US fishing industry, a $23bn cost to tourism, and nearly 5,000 wildlife fatalities (Jarvis, 2010).

We can deduce, again, that BP continued to show evidence of serious systemic issues pertaining to recruitment, training and safety maintenance. Had BP maintained the safety of its mechanisms at the very least, the rig would have been disconnected, significantly reducing the volume of oil spilled into the Gulf.

Post-Crisis

BP's initial response to the crisis failed catastrophically. The company committed a number of communication mistakes which further extended the damage caused. BP were slow to acknowledge the problem of the leaking well, and slow to suitably acknowledge the immediate stakeholders affected (i.e. the families of the victims). Further to this, BP made a number of insensitive statements downplaying the extent of the damage, describing it as "modest" (Mejri and De Wolf, 2013). To make matters worse, BP executives abdicated responsibility by blaming the other parties involved, making the company appear callous and damaging its reputation further. BP also made many promises of full environmental restoration, which in fact went against its internal spill plan (WPSU, 2017). This indicates that BP may not have focused strongly on its corporate social responsibility; additionally, there may have been competing priorities within the business which caused this to be overlooked.

In addition to this, the poor levels of coordination and cooperation prevalent in the preconditions were still present in the post-crisis stage. There was confusion between Transocean, the Coast Guard, the rig salvage company and other organisations involved as to who was responsible for conducting the firefighting operations (National Commission, 2010). It quickly became apparent that BP did not have adequate contingency plans in place; indeed, former CEO Tony Hayward stated that BP "were not prepared" (WPSU, 2017).

Despite the variety of mechanical and technical issues, a number of underlying management failures have surfaced from which we can learn in order to prevent future crises. Firstly, BP displayed recurring evidence of poor staff management, which resulted in a number of errors and delays. Examples include the appointment of incompetent employees to management roles and the assignment of tasks requiring specialism to unspecialised employees (Chief Counsel's Report, 2011). Coupled with lax supervision, operations carried out within the project progressively declined in quality and safety. A key learning point is that a more robust recruitment and staff deployment process is required, with regular reviews of performance and competency.

Secondly, we learn that inadequate training can nurture and proliferate a crisis. If employees are not trained to identify worrying signs or situations, and how best to respond in such cases, the damage cannot be reduced, let alone eliminated. Ultimately, BP placed undue reliance on employees despite failing to provide the training needed for them to be effective (Chief Counsel's Report, 2011).

Furthermore, poor communication resulted in a knowledge gap within the organisation. This was partly due to inadequate training, but also down to the culture of the organisation. We learn that nurturing a culture of leadership may eradicate self-interest-driven performance and instead drive knowledge diffusion across the organisation.

Finally, we learn the importance of having an adequate contingency plan (Mejri and De Wolf, 2013). Although some could argue that the Macondo Well was a 'Black Swan', given how many elements went wrong simultaneously, the crisis could still have been contained. As crises are often unpredictable, it is essential to establish a strong contingency system with the aim of reducing impact as efficiently as possible.

By applying the Theory E and Theory O model (Beer and Nohria, 2000) to the changes observed post-crisis, one can ascertain whether the lessons from this disaster have been learnt.

BP claims that the organisation is working on enhancing training and development, specifically with regard to safety. This relates to Theory O's focus on building the culture, attitudes and behaviour of employees, which in turn would improve safety and reduce the risk of a similar crisis. Further to this, BP held workshops to identify what stakeholders expect from BP, encouraging participation from the bottom up (Offshore Technology, 2017).

We also see the use of Theory E's processes, whereby BP set up the "Gulf of Mexico Research Initiative" (GMRI), through which it provides funding to scientists at gulf-state universities (Offshore Technology, 2017). The aim is to develop improved crisis responsiveness and better traceability of environmental spills. This move aligns with Theory E's premise that challenges cannot be overcome without a cohesive plan which inspires confidence amongst stakeholders.

Summary

The aim of this paper was to provide a detailed analysis of the 2010 BP Oil Spill, detailing the key factors within the precondition, trigger, crisis and post-crisis events that propagated the disaster. Following this analysis, the findings can be summarised into four broad lessons: (1) the importance of communication; (2) the importance of training and development; (3) the importance of supervision; and (4) the importance of contingency plans.

The theoretical frameworks used in this paper provide crucial retrospective analysis, identifying key issues, risks and learning points. For instance, employing the Reason Model identified inadequate training as one cause of the crisis, while the employment of Theory O prompts a move to improve the organisation's "software", i.e. its employees. This is illustrated by BP's decision to enhance training and development (e.g. through the GMRI).

However, these models are limited by their retrospective design, providing insights only after events have happened. From an academic perspective, they are fit for purpose; from a commercial perspective, however, more concurrent risk appraisal methods are required to maintain a stable risk profile. General management best practices, such as contingency planning, built-in project checks and backstops, should enhance the identification and mitigation of risks and errors. BP appears to have adopted some of these measures.

In the author's opinion, BP has learned some harsh lessons and has paid for these mistakes (e.g. through fines, the selling of assets and internal restructuring). In doing so, the business has become leaner. There are also signs of traction in restoring the reputational damage: for instance, restoration projects such as 'Lightning Point' in Alabama, and the GMRI (Offshore Technology, 2017), which is now a major research hub nurturing extraordinarily talented researchers and scientists and providing industry-changing insights and suggestions. The author believes the restoration of BP's reputation will be a longer process owing to the litigation the company faces. However, if it follows its restoration plan, the signs of improvement are present.

References

• Beer, M. and Nohria, N. (2000). Breaking the Code of Change. Administrative Science Quarterly, 46(4).

• BP (2010). Deepwater Horizon Accident Investigation Report. [online] BP. Available at: https://www.bp.com/content/dam/bp/pdf/sustainability/issue-reports/Deepwater_Horizon_Accident_Investigation_Report.pdf

• Chief Counsel's Report (2011). Macondo: The Gulf Oil Disaster. National Commission.

• Dekker, S. and Pruchnicki, S. (2013). Drifting into failure: theorising the dynamics of disaster incubation. Theoretical Issues in Ergonomics Science, 15(6), pp.534-544.

• Griggs, J. (2011). BP Gulf of Mexico Oil Spill. [ebook] Available at: https://www.eba-net.org/assets/1/6/14_57_bp_gulf_of_mexico.pdf

• Jarvis, A. (2010). BP oil spill: Disaster by numbers. [online] The Independent. Available at: https://www.independent.co.uk/environment/bp-oil-spill-disaster-by-numbers-2078396.html

• Mejri, M. and De Wolf, D. (2013). Crisis communication failures: The BP Case Study. International Journal of Advances in Management and Economics, [online] 2(2), pp.48-56.

• Mejri, M. and De Wolf, D. (2013). Crisis Management: Lessons Learnt from the BP Deepwater Horizon Spill Oil. Business Management and Strategy, 4(2), p.67.

• National Commission (2010). National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling | The Gulf Spill. [online] Available at: http://www.iadc.org/archived-2014-osc-report/response/response-actions-dispersants.html

• National Commission (2011). The Gulf Oil Disaster and the Future of Offshore Drilling.

• Offshore Technology (2017). Cleaning up: is enough being done to progress oil spill technologies? [online] Available at: https://www.offshore-technology.com/features/featurecleaning-up-is-enough-being-done-to-progress-oil-spill-technologies-5708501/

• Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. New Jersey: Princeton University Press.

• Reason, J. (1990). Human Error. Cambridge: Cambridge University Press.

• Stein, M. (2004). The critical period of disasters: Insights from sense-making and psychoanalytic theory. Human Relations, 57(10), pp.1243-1261.

• Tachakra, S. (1986). The Bhopal Disaster. Prehospital and Disaster Medicine, 2(1-4), pp.217-220.

• Turner, B. (1978). Man-Made Disasters. 1st ed. Wykeham Publications.

• US Chemical Safety Board (2014). Explosion & Fire at the Macondo Well. 1st ed.

• WPSU (2017). Case Study: BP Oil Spill. [online] Pagecentertraining.psu.edu. Available at: https://pagecentertraining.psu.edu/public-relations-ethics/ethics-in-crisis-management/lesson-1-prominent-ethical-issues-in-crisis-situations/case-study-tbd/
