
INSIGHT

REPORT

THE
DEEPWATER
HORIZON:
LEARNINGS FROM A
LARGE-SCALE DISASTER

© INTELEX TECHNOLOGIES INC. | 1 877 932 3747 | INTELEX.COM


TABLE OF CONTENTS
THE DEEPWATER HORIZON:
LEARNINGS FROM A LARGE-SCALE DISASTER

Introduction.........................................................................................1

The Deepwater Horizon Story and Causes of the Disaster...................2

Impacts of the Disaster........................................................................4

Organizational and Leadership Failures................................................5

Organizational and Leadership Solutions.............................................9

Summary of Important Lessons: What Can You Do?........................ 12

Sources............................................................................................ 13

About the Authors / Disclaimer / About Intelex.............................. 14

The Deepwater Horizon: Learnings from a Large-Scale Disaster

© Copyright 2019 Intelex Technologies Inc.

intelex.com
1 877 932 3747
intelex@intelex.com

@intelex /intelextechnologies /intelex-technologies /intelexsoftware

INSIGHT REPORT | The Deepwater Horizon: Learnings from a Large-Scale Disaster


“The well blew out because a number of separate risk factors, oversights, and outright mistakes combined to overwhelm the safeguards meant to prevent just such an event from happening. But most of the mistakes and oversights at Macondo can be traced back to a single overarching failure: a failure of management. Better management by BP, Halliburton, and Transocean would almost certainly have prevented the blowout by improving the ability of individuals involved to identify the risks they faced, and to properly evaluate, communicate, and address them. A blowout in Deepwater was not a statistical inevitability.”

National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling

INTRODUCTION
The Deepwater Horizon disaster unfolded on the evening of April 20, 2010. The Deep-
water Horizon, a drilling rig owned by Transocean and operated by British Petroleum
(BP) in collaboration with Halliburton, was located approximately 40 miles off the coast
of Louisiana at the Macondo Prospect, a drilling site that was being prepared for hy-
drocarbon extraction. A series of technical and human errors resulted in the release of
hydrocarbons onto the rig floor at 9:45 p.m. At 9:49 p.m., the hydrocarbons ignited and turned the massive Deepwater Horizon into an inferno that only subsided when the rig was completely destroyed and sank in nearly 5,000 feet of water on April 22. Eleven workers were killed in the explosion and 16 were severely injured. The well site, which
was damaged by the initial failure of the BOP (blowout preventer) to contain the flow
of hydrocarbons up the wellbore and by the subsequent explosion, continued to vent
into the Gulf of Mexico until it was temporarily capped on July 15. The well was then
permanently closed off by a relief well drilled to a depth of 18,000 feet on September
19. During that period, approximately five million barrels of oil flowed into the Gulf of
Mexico and caused significant damage to the fragile ocean ecosystem of the region.
This article is about what organizations can learn from this disaster. It is not meant only for those in the petroleum industry or in large organizations. The Deepwater Horizon incident is far more than the sum of a series of mechanical failures; it is a tragedy that resulted from a combination of technological, organizational, and cultural failures that aggregated over time, unnoticed and ignored, until they culminated in the catastrophic events that claimed 11 lives, contaminated the waters of the Gulf of Mexico, and cost both BP and the coastal communities billions of dollars. This article is therefore about what every organization can learn from the cultural failures that the Deepwater Horizon demonstrates so dramatically.

THE DEEPWATER HORIZON STORY
AND CAUSES OF THE DISASTER
The mechanical causes of the Deepwater Horizon disaster are well-documented and
do not require extensive examination here. Those who want to dig more deeply into the
details should consult the report from the National Commission (2011). Nevertheless,
it is worth providing a brief summary of the events leading up to the explosion, as
detailed in the commission report, to help contextualize the discussion that follows.
Underwater drilling, particularly in deep water, is a difficult and often treacherous task.
The Macondo Prospect lies at approximately 18,000 feet below sea level, which requires
a remarkable amount of engineering precision and caution both to ensure the safety of
the crew and to preserve the integrity of the valuable hydrocarbon commodity. When
drilling into the seabed to reach the deposit, the crew circulates drilling mud into and
out of the well to cool the drill. The drilling mud also serves the crucial purpose of
containing the naturally occurring pressure that threatens to push hydrocarbons up the
well and past the drill. Hydrocarbons entering the well before they can be contained
and controlled is known as a “kick”, and the mud must therefore have enough integ-
rity to serve as both a lubricant for the drill and a barrier to keep the hydrocarbons
down until the pressure can be contained, without being so dense and heavy that it
fractures the rock and damages the integrity of the deposit. Drilling mud is made to
detailed specifications to prevent the buildup of nitrogen bubbles that could damage its integrity and lead to uncontrolled kicks. Crews therefore constantly monitor the circulation of mud to ensure that its pressure and density are consistent with the many roles it must perform.

As the crew digs the well, they encase it in a series of steel tubes and secure the tubes in cement to control the pressure and prepare the site for long-term extraction. Together, the steel casing and the cement allow the crews to control the pressure from the deposit and prevent it from pushing hydrocarbons up the well and onto the rig. If the cement and casing fail, the final line of defense is the BOP that sits on the ocean floor, which can use a mechanism called the “blind shear ram” to sever and seal the well. Figure 1 shows a typical BOP.

FIGURE 1

At 9:01 p.m. on the day of the disaster, drill-pipe pressure began increasing at a rate so subtle that it went unnoticed by the crew, despite the fact that increasing pressure was evidence that a kick was in progress. This continued until 9:39 p.m., at which point drill-pipe pressure suddenly decreased. What ought to have been a positive sign was, in fact, an indication that the lighter hydrocarbons were pushing the drilling mud up the well as they made their way to the surface. At 9:40 p.m., drilling mud, the prelude to the hydrocarbons’ appearance, began spewing onto the rig floor. At 9:45 p.m., the expanding hydrocarbons hit the rig with what one witness described as the force of “a 550-ton freight train hitting the rig floor.” (National Commission 2011) The first explosion occurred at 9:49 p.m. The BOP, which was both damaged by the first explosion and unreliable due to neglected maintenance, failed to activate the blind shear ram and sever the well.
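The pressure anomalies described above suggest how a kick can hide in plain sight in routine telemetry. As a hedged illustration (the function, thresholds, and readings below are invented for this sketch, not taken from the rig's actual instrumentation), a simple trailing-window monitor shows the kind of automated check that can flag a slow, sustained pressure rise:

```python
# Illustrative sketch only: flag a sustained drill-pipe pressure rise that
# could indicate a kick. Thresholds and readings are hypothetical examples,
# not values from the Deepwater Horizon's telemetry.

def detect_kick(readings, window=5, rise_threshold=50.0):
    """Return the first index at which pressure (psi) has risen by more than
    `rise_threshold` over the trailing `window` samples, or None."""
    for i in range(window, len(readings)):
        if readings[i] - readings[i - window] > rise_threshold:
            return i
    return None

# One-minute samples: a slow, easy-to-miss climb of about 15 psi per minute.
pressures = [1200 + 15 * t for t in range(12)]
alarm_at = detect_kick(pressures)
print(alarm_at)  # → 5 (the trailing 5-sample rise first exceeds 50 psi here)
```

A human watching the same gauge can miss a drift of this size; a mechanical check over a trailing window cannot, which is the point of automated kick detection.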

CAUSES
In the most general terms, the most significant technical failure was the BOP, which failed to perform the task for which it was designed. However, a series of decisions that ran contrary to best practice contributed significantly: using one long tube in the well instead of the safer but more expensive practice of using multiple tubes; using six centralizers to cement the well instead of the recommended 21; substituting seawater for mud to contain well pressure; and using both a lower volume of mud and a cement slurry that was prone to becoming porous and allowing gas to escape. All of these provided the inputs into a kick that the BOP was eventually unable to contain. (Mills 2015) Reader and O’Connor (2014) have summarized the key stages of the disaster and their causes in Table 1.

TABLE 1

KEY STAGE: The cement barrier used to isolate the hydrocarbon zone at the bottom of the well from the annular space failed.
CAUSES:
• Errors in conducting and interpreting the negative-pressure test, creating the belief that the job had been successful.
• Errors in the design of the cementing process.
• The use of an inappropriate foam cement slurry designed to seal the well.
• Design of the temporary abandonment, which resulted in overly high levels of pressure being placed on the cement job.

KEY STAGE: Hydrocarbons entered the well and travelled up the riser.
CAUSES:
• Failure of the cement job integrity.
• Errors in monitoring and interpreting real-time data displays showing signs of a kick.

KEY STAGE: Hydrocarbons on the rig floor ignited.
CAUSES:
• Hydrocarbons were not contained, and diesel generators ingested and released them onto deck areas where ignition was possible.
• Deck areas lacked automatic fire and gas detection systems, resulting in equipment in potential ignition locations not being shut down.

KEY STAGE: The blowout preventer (BOP), used to seal the well and prevent the uncontrolled flow of hydrocarbons towards the rig, did not activate.
CAUSES:
• The cables linking the emergency disconnect system (EDS) and the BOP were damaged by the fire.
• Failures in the maintenance of the BOP (possibly of the batteries) prevented activation of the emergency automatic system for shearing the drill pipe and sealing the wellbore.

IMPACTS OF THE DISASTER
The environmental impact of the accident is difficult to measure. From a purely quan-
titative perspective, it seems straightforward: approximately 5 million barrels of oil
and 250,000 metric tons of natural gas escaped the Macondo well and spread across
approximately 930 km² of the Gulf of Mexico. (Fisher 2016) Initial containment efforts
were likely hampered by underestimating the hydrocarbon discharge, which was esti-
mated at 1,000 to 5,000 barrels a day immediately after the disaster but was actually
much higher, ranging from 50,000 to 70,000 barrels a day. (Joye 2015)
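The gap between the initial and revised flow estimates can be sanity-checked against the total discharge. The short calculation below uses the dates given in the introduction (April 20 blowout, July 15 temporary capping); the arithmetic is illustrative, not a substitute for the cited estimates, and it shows the implied average rate falls squarely within the revised 50,000-to-70,000-barrel range:

```python
# Consistency check: do the revised flow estimates (Joye 2015) match the
# approximately 5 million barrels of total discharge? The spill ran from the
# April 20 blowout to the temporary capping on July 15, 2010.
from datetime import date

spill_days = (date(2010, 7, 15) - date(2010, 4, 20)).days  # 86 days
implied_rate = 5_000_000 / spill_days                      # barrels per day

print(spill_days, round(implied_rate))  # → 86 58140
```

An average of roughly 58,000 barrels a day is more than ten times the upper bound of the initial 1,000-to-5,000-barrel estimate, which helps explain why early containment efforts were so badly undersized.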
Assessing the damage to the ecosystem itself is considerably more challenging. The
deepwater continental slope in the Gulf of Mexico where the blowout occurred is one
of the most complex and least-understood ecosystems in the world. (Fisher 2016)
The extreme depth means that exploring it and collecting samples is difficult, which
results in a lack of baseline data. The best estimate of the disaster suggests that at
least half of the discharged hydrocarbons have settled in the deepwater, which will
result in damage to the ecosystem and a decline in the resiliency of the species that
live in it. The recovery of the ecosystem will likely take several decades or longer, since
the low ambient temperatures lower the metabolic rates of its inhabitants, resulting
in longer lifespans and lower population turnover to replenish and renew species
contaminated by the spill. (Fisher 2016)
The Gulf Coast economy suffered considerable disruption as a result of the oil spill.
Damage to fish and wildlife had a significant impact on the seafood industry, resulting
in layoffs along the entire seafood supply chain. (Grattan 2011) The economic disrup-
tion also had a significant psychological impact, with clinical studies showing a rise
in clinical depression, psychological distress, and symptoms of post-traumatic stress
disorder (PTSD). (Grattan 2011)

ORGANIZATIONAL AND LEADERSHIP FAILURES
Disasters can provide powerful insights about how to organize and manage high-risk
ventures. Despite their universal capacity for creating tragedy, disasters have a diverse
variety of origins. Bozeman’s (2011) model for technology-embedded organizational
disaster identifies four disaster archetypes with varying ratios of human-technology
interaction as follows:
• High Human/High Technology (HH/HT)
• High Human/Low Technology (HH/LT)
• Low Human/High Technology (LH/HT)
• Low Human/Low Technology (LH/LT)
These categories allow us to discern the possible organizational lessons we can learn
from different disasters. At one end of the spectrum, LH/LT disasters are “Acts of God”
disasters that are essentially unpredictable and, in most cases, unpreventable. At the
other end of the spectrum, HH/HT disasters suggest large-scale failures to integrate
human agency and technology across an entire organization. Bozeman uses these
categories to provide examples of the origins of various disasters, as summarized in
Table 2:

TABLE 2

• HH/HT: Deepwater Horizon; Space Shuttle Challenger
• HH/LT: Corps of Engineers Everglades Dredging; Enron Financial Scandal
• LH/HT: 1937 Texas Gas Explosion; Concorde Disaster
• LH/LT: Hurricane Katrina Flooding; 2004 Indian Ocean Tsunami

Deepwater Horizon fits into the category of high human decision-making and high
human-technology interaction, which therefore makes organizational culture the pri-
mary catalyst for the disaster. Bozeman identifies the role of hierarchy in mediating
decision-making, redundant technology systems creating a false sense of security,
the suppression of dissent, and the diffusion and misunderstanding of risk as the
features that tie Deepwater Horizon to other HH/HT disasters like the Space Shuttle
Challenger disaster. (Bozeman 2011)
Reader and O’Connor (2014) have examined the disaster from a systems theory per-
spective that combines the elements of two distinct areas: 1) non-technical skills (NTS),
such as decision-making, situational awareness and risk perception, teamwork, and
leadership, and 2) safety culture, such as workplace pressure, organizational learning,
etc. Table 3 on page 6 summarizes Reader and O’Connor’s analysis.

TABLE 3

NTS PROBLEMS
• Cognitive/Individual Factors: Decision-making based on poor situational awareness of risk (for example, short-term profitability emphasized over long-term viability of the well); decision-making based on inaccurate situational awareness of technical processes (for example, too many demands made on workers’ attention resulted in missing signals of an imminent blowout).
• Social/Team Factors: Teamwork (for example, poor communication of decision-making and risk from management meant operational staff were unaware of important environmental risks); leadership (for example, executive leaders visiting the rigs for a safety inspection focused on slips and falls instead of critical safety infrastructure).

SAFETY CULTURE PROBLEMS
• Production vs. Safety Pressure: Workers feared punishment for reporting unsafe conditions that could slow progress on the project.
• Organizational Learning: Previous incidents were not used as learning opportunities. For example, regulators did not share findings about similar previous incidents on other rigs.
• Regulation: Regulators lacked expertise and avoided reaching conclusions that would increase costs or decrease production.
• Human Factors Engineering: Inadequate staff training, fatigue from long shifts, poor safety manuals, and poorly designed interfaces for human-computer interaction.

Reader and O’Connor use a systems-based approach to counter the prevailing root-
cause analysis model of the Deepwater Horizon. In doing so, they highlight that the
disaster has no single cause to which the entire event can be attributed. Instead,
systemic and organizational factors such as regulations, safety culture, communication, profit-based thinking, and third-party contractors all contributed to a culture
that misunderstood and intentionally ignored risk over time. (Reader & O’Connor 2014)
While addressing each of these factors individually might have done little to prevent
the disaster, in aggregate, they created a complex web of potential for disaster in
which large factors such as the failure of the BOP and smaller factors such as poorly
documented procedures were contributors. Barry Turner termed this the “incubation
period,” in which the steady accumulation of events occurs in tandem with a broad
organizational failure to recognize and identify the danger. (Turner 1997)
The evidence from the National Commission Report, as well as much of the research
summarized here, suggests that BP, and to a lesser degree Transocean and Hallibur-
ton, emphasized maximizing profit and saving time over managing risk and protecting
safety. An analysis of public speeches given by BP executives, including Chief Executive Tony Hayward, prior to the disaster shows that BP’s public language emphasized safety in its brand messaging, but rarely did so in the more substantive details of its operations and priorities. (Amernic 2017) In Hayward’s 18 public speeches prior to his
2010 AGM speech, which took place just days before the Deepwater Horizon disaster,

the word “safety” appears only 17 times, often only in secondary positions relative to
more vital themes such as efficiency and cost-cutting. Hayward’s predecessor, Lord
Browne of Madingley, mentioned “safety” only 56 times in his 125 public speeches
between 1997 and 2007. (Amernic 2017) This is in stark contrast to the Baker Report,
which was the final report of the commission investigating BP’s Texas City Refinery
Explosion on March 23, 2005, which resulted in 15 fatalities and 180 injuries. The Baker
Report, which found BP safety culture lacked reporting and organizational learning,
discouraged negative reporting and corrective action, lacked focus in controlling major
hazards, and lacked effective safety leadership, uses the term “safety culture” 387
times in its analysis of BP’s culpability in the disaster. (Amernic 2017) Table 4 shows
the National Commission’s summary of some of the decisions, and their perceived
justification, that contributed directly to the explosion:

TABLE 4
(Decision | Was there a less risky alternative available? | Less time than alternative? | Decision-maker)

• Not waiting for more centralizers of preferred design | Yes | Saved time | BP on shore
• Not waiting for foam stability test results and/or redesigning slurry | Yes | Saved time | Halliburton (and perhaps BP on shore)
• Not running cement evaluation log | Yes | Saved time | BP on shore
• Using spacer made from combined lost circulation materials to avoid disposal issues | Yes | Saved time | BP on shore
• Displacing mud from riser before setting surface cement plug | Yes | Unclear | BP on shore
• Setting surface cement plug 3,000 feet below mud line in seawater | Yes (Approved by MMS) | Unclear | BP on shore
• Not installing additional physical barriers during temporary abandonment procedure | Yes | Saved time | BP on shore
• Not performing further well-integrity diagnostics in light of troubling and unexplained negative pressure test results | Yes | Saved time | BP (and perhaps Transocean) on rig
• Bypassing pits and conducting other simultaneous operations during displacement | Yes | Saved time | Transocean (and perhaps BP on rig)

The emphasis on saving time and money over safety and managing risk manifested
along a wide spectrum of organizational failures. At one end are easily identifiable failures that result from simple negligence and disregard for the safety of
workers on the rig. For example, many of the third-party contractors on the rig worked
with severely compartmentalized information that was not routinely shared with other
teams or distributed from executives to workers further down the hierarchical chain.
(Mills 2015) Both BP and Transocean lacked standardized procedures that would
have allowed them to share best practices from their experiences on other rigs, and
Transocean crews complained that the safety manuals were poorly organized, difficult
to understand, and made no distinction between mandatory and optional procedures,
the result of which was that workers often had to make decisions for which they had
little context relating to risk and to the safety of the crew. (National Commission 2011)
According to the National Commission, 46 percent of the crew members surveyed
after the disaster feared leadership reprisal for reporting on these failures and the
unsafe conditions they created. (National Commission 2011)
The cultural neglect of risk at BP coincided with a broader neglect of risk in the U.S.
petroleum industry. While there is a general focus on controlling risk, there is no specific
regulatory mechanism that requires it beyond Environmental Impact Assessments.
This is in contrast to, for example, Norwegian and UK regulations that are primarily
risk-based. (Skogdalen 2011) Deepwater prospects like Macondo have a significant
number of risk influence factors (RIF) that present unusual difficulties, such as complex
casing, high pressures and temperatures, unusual and unknown geologic formations, and personnel who lack experience in these conditions. These prospects have
narrow drilling windows and require unusually complex and difficult drilling operations
in which minor mistakes and rework can lead to millions of dollars in cost overrun. In
these conditions, risk management becomes extremely important. (Skogdalen 2011)
Human and organizational factors (HOF) play an important role in determining those
elements that contribute to the ability of human cognition to manage major hazard risks
effectively. Skogdalen (2011) identifies six primary HOFs, as summarized in Table 5:

TABLE 5

• Work practice: The complexity of the given task, how easy it is to make mistakes, best practice/normal practice, checklists and procedures, silent deviations, and control activities.
• Competence: Training, education, both general and specific courses, system knowledge, etc.
• Communication: Communication between stakeholders in the process of plan, act, check, and do.
• Management: Labour management, supervision, dedication to safety, clear and precise delegation of responsibilities and roles, and change management.
• Documentation: Data-based support systems, accessibility and quality of technical information, work permit systems, safety job analysis, and procedures (quality and accessibility).
• Work schedule aspects: Time pressure, work load, stress, working environment, exhaustion (shift work), tools and spare parts, complexity of processes, man-machine interface, and ergonomics.

Post-accident analysis reveals that BP, Transocean, and Halliburton regularly underes-
timated risks in favor of operational decisions that maximized time and cost reduction
over safety. (Mills 2015) For example, BP failed to conduct an adequate risk analysis
and management of change (MOC) for late changes to the design of the well and
the drilling procedures. In another instance, BP’s decision to change its approach to
installing the lockdown sleeve, which secures the wellhead and production casing,
was the catalyst for a number of other decisions that resulted in compromised well
integrity. There is no evidence that BP engaged in any formal risk analysis for any
of these decisions. (Skogdalen 2011) Instead, decision makers avoided identifying and mitigating risks, with the result that late decisions were made in isolation and without adequate communication, equipment, or important procedural documentation for the various crews working on the rig.

ORGANIZATIONAL AND LEADERSHIP SOLUTIONS


Drilling rigs are high reliability organizations (HROs). HROs are organizations in which behavior at all levels, from leadership to workers, operates as a system that integrates human-technology interaction with a rigorous approach to identifying the potential for failure, mitigating failures that do occur, and learning from every incident to enhance the overall level of organizational knowledge. HROs are complex systems that typically operate 24 hours a day and tolerate no downtime, in which failures can aggregate over time in diverse parts of the system and require complex approaches to mitigation and prevention. The complex level of human-technology interaction in an HRO can put enormous burdens on the behavior and cognition of the crew, which means that processes and systems must always exist to prevent humans from becoming the weakest points in a system where failure can be cataclysmic. The organizational theory supporting the concept of HROs derives from the aviation industry, in which everything from maintenance to air traffic control operates continuously and tolerates no downtime or systems failure.

BP leadership neglected its fundamental duty by fostering, and even promoting, at-risk safety behavior on the Deepwater Horizon, which contributed directly to the disaster.
In response to the Deepwater Horizon disaster, the International Association of Oil & Gas Producers (IOGP) proposed the adoption of the Crew Resource Management (CRM) methodology. (IOGP 2014) CRM proposes rigidly standardized work procedures that are still flexible enough to react to and accommodate fluctuating situations and crises. Alavosius et al. (2017) summarize CRM as a chain of behaviors in which all levels of the organization, including leadership, utilize resources to perform the following consecutive tasks:
1. Plan a work process
2. Brief everyone on roles/functions
3. Monitor the process as it occurs
4. Detect and report deviations from the plan
5. Communicate corrections from the top down
6. Adjust actions as needed
7. Debrief at important moments (at significant change or conclusion of work), and
8. Learn to refine the human-machine interface.
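The defining property of this chain is that the steps are consecutive: a crew cannot legitimately debrief a process it never briefed. As a purely illustrative sketch (the step names are abbreviated from the list above, and the `next_step` helper is hypothetical, not part of the IOGP curriculum), the ordering constraint can be expressed as a simple checklist:

```python
# Minimal sketch of the CRM behavior chain as an ordered checklist.
# Step names abbreviate the chain above; the next_step helper is a
# hypothetical illustration, not part of the IOGP curriculum.

CRM_STEPS = [
    "plan",      # 1. Plan a work process
    "brief",     # 2. Brief everyone on roles/functions
    "monitor",   # 3. Monitor the process as it occurs
    "detect",    # 4. Detect and report deviations from the plan
    "correct",   # 5. Communicate corrections from the top down
    "adjust",    # 6. Adjust actions as needed
    "debrief",   # 7. Debrief at significant change or conclusion of work
    "learn",     # 8. Learn to refine the human-machine interface
]

def next_step(completed):
    """Return the next CRM step given the steps completed so far,
    enforcing that the chain is followed in order."""
    if completed != CRM_STEPS[:len(completed)]:
        raise ValueError("CRM steps must be performed consecutively")
    if len(completed) == len(CRM_STEPS):
        return None  # chain complete; the cycle restarts with planning
    return CRM_STEPS[len(completed)]

print(next_step(["plan", "brief"]))  # → monitor
```

Trivial as the sketch is, it captures why CRM is described as a chain of behaviors rather than a menu: skipping or reordering steps is itself a detectable deviation.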

CRM works to reduce human error and promote the sort of behaviors that maintain an
effective balance between efficiency of the operation and crew safety. CRM promotes
six core skill sets, as summarized in Table 6:

TABLE 6

• Communication: Effective team communication can be badly impeded on an oil rig. In addition, adherence to rigid management hierarchies can discourage open communication. CRM aims to facilitate communication horizontally among crew members and vertically between workers and management to create effective interlocking behavior. It acknowledges and uses the informal communication networks that characterize self-organizing social systems.
• Situational Awareness: Situational awareness (SA) is the ability to monitor elements in the environment, comprehend their meanings, and project their significance into the future. Shared SA among crew members relies upon both formal and informal communication.
• Decision Making: Decision making relies on the technical and interpersonal competencies of the leader. Leaders are frequently confronted with several options in any situation, and they must have the experience and flexibility to adapt to the information as they receive it from experts and act on it in a way that acknowledges the complexity of the situation while addressing risk.
• Teamwork: Effective teamwork must break down communication barriers to facilitate coordination. These barriers can result from management hierarchies and from trust issues. Interlocking behavioral contingencies (IBCs) occur when the behavior of one individual in a group becomes connected to and dependent on that of another, which then produces a group pattern of behavior that has a powerful impact on outcomes. These metacontingencies represent the coordinated behavioral system of the group, which is the foundation of effective teamwork. This coordinated behavior needs to be flexible and adaptive at both the individual and group level to respond to rapidly changing conditions.
• Management of limits of crew members’ capacities: Increasing automation can reduce human workloads in some areas. However, leadership must be sensitive to the possibility that automation will increase demands on humans in other areas. Automated workflows can impose unreasonable demands on human workers, who remain prone to fatigue and stress. (Alavosius 2016) Workers are also prone to habitual behavior that accepts high levels of risk because they have been able to operate under those conditions for extended periods of time without consequence, a phenomenon known as normalization of deviance. CRM ensures that a person’s ability to maintain vigilance for extended periods of time is relieved, not exacerbated, by automation.
• Leadership: Leadership sets the tone for organizational culture. When leadership values safety, the entire organization will follow. By establishing clear goals and communicating them to the organization, leadership can promote alignment. Leadership maintains awareness of the competencies of the group and demonstrates SA by responding to changes in the group situation and ensuring continued cohesion and direction.

CRM is therefore an approach to orchestrating group behavior in HROs through a
combination of standard operating procedures (SOP) and an awareness of human
behavior. According to the International Association of Oil & Gas Producers (IOGP), CRM is an
effective development and application of nontechnical (i.e. soft cultural) skills that
improve the safety and efficiency of well operation teams. (IOGP 2014) IOGP has de-
veloped a proposed CRM curriculum for well operations teams to guide the industry
in implementing this method.
Effective CRM provides crew members with the cognitive toolkit to react to crises
and maintain situational awareness, particularly in the face of technological failure.
Skogdalen (2011) has suggested that offshore crews are too confident in the ability of
their technological safeguards to prevent failure and are effectively incapacitated when
they encounter isolated or chains of technology failures in these systems. Confidence
in these barriers could also lead to the general neglect of maintaining the operation
of these mechanisms, which was a direct influence on the failure of the BOP on the
Deepwater Horizon.
Organizations must also recognize that learning and imagination are vital elements of
a healthy safety culture. They must incorporate experience and learnings from subunits within the organization and from other organizations, and disseminate them
to teams at every level, to expand their knowledge base and their ability to react to
unanticipated situations. (Flournoy 2011)

SUMMARY OF IMPORTANT LESSONS:
WHAT CAN YOU DO?
The Deepwater Horizon was an HRO measuring 396 by 256 feet, weighing 32,588 tons, and housing a crew of 146, with a cost of approximately $560 million. However, the lessons that the Deepwater Horizon disaster holds apply to any organization of any size.
Perhaps the most important lesson is that organizations, particularly those that consist of complex systems, cannot indefinitely prioritize profit, cost-cutting, and saving time at the expense of system integrity. Failing to manage safety procedures
and processes in an organization will inevitably lead to tragedy at some point in the
future. In complex systems, small failures will aggregate with connected failures in
other parts of the system and eventually grow into crises. Every organization must
therefore be sensitive to minor deviations and fluctuations that provide evidence of
cascading failure. Weick (2015) has coined the term mindful organizing to describe an
approach to safety in which strict standard operating procedures (SOP) complement
dynamic situational awareness and improvised organizing in a fluctuating environment.
Mindful organizing is therefore a fundamental safety priority for every organization.
An extension of mindful organizing is organizational learning, in which lessons and
data from previous incidents provide analytics that allow management to make
data-based decisions and to identify the actions and inactions that have contributed
to aggregating system failure. Health and safety systems that track incidents and
system errors are vital sources of the organizational knowledge that helps a crew
anticipate and prevent disaster. As was shown in Table 3, organizational learning was
a significant failure on the Deepwater Horizon.
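As a purely hypothetical illustration (the incident records, work-area names, and flagging rule below are invented, not drawn from this report or any real system), a minimal sketch of the kind of trend analysis an incident-tracking system might perform: surfacing a work area whose count of minor incidents rises month over month, before those small deviations aggregate into a crisis.

```python
from collections import defaultdict

# Hypothetical incident records: (month, area, severity).
incidents = [
    (1, "drill-floor", "minor"), (2, "drill-floor", "minor"),
    (2, "drill-floor", "minor"), (3, "drill-floor", "minor"),
    (3, "drill-floor", "minor"), (3, "drill-floor", "minor"),
    (1, "galley", "minor"), (3, "galley", "minor"),
]

def flag_rising_areas(records, months=(1, 2, 3)):
    """Flag areas whose minor-incident count rises every month --
    the small, aggregating deviations the text warns about."""
    counts = defaultdict(lambda: {m: 0 for m in months})
    for month, area, severity in records:
        if severity == "minor":
            counts[area][month] += 1
    return [
        area for area, by_month in counts.items()
        if all(by_month[a] < by_month[b]
               for a, b in zip(months, months[1:]))
    ]

print(flag_rising_areas(incidents))  # -> ['drill-floor']
```

The point of the sketch is not the specific rule (a real system would use richer statistics) but that the raw data already contains the early-warning signal; the analytics layer only makes it visible to management.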
Additionally, the same automation that provides vital data for better decision-making
also increases the cognitive burden on people. Data from connected sensors can
provide a powerful basis for data-based decision-making, but it can also deliver an
overwhelming amount of information that causes cognitive confusion and stress for
workers, which in turn leads to risky safety behavior. Workers require clearly defined
processes and procedures to understand the scope of their roles and the situational
context in which they perform them.
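To make the overload point concrete, here is a hypothetical sketch (sensor names, readings, thresholds, and priorities are all invented for illustration) of one way a monitoring layer can reduce a stream of raw readings to a short, prioritized alert list, so that a crew sees a handful of actionable items instead of every data point.

```python
# Hypothetical sensor readings from one polling cycle: (sensor, value).
readings = [
    ("annular-pressure", 1450), ("annular-pressure", 1455),
    ("pit-volume", 102), ("pit-volume", 118),
    ("pump-rate", 74), ("pump-rate", 75),
]

# Assumed alert thresholds: sensor -> (limit, priority), 1 = highest.
# Illustrative values only, not real drilling parameters.
thresholds = {
    "annular-pressure": (1500, 1),
    "pit-volume": (110, 2),
    "pump-rate": (90, 3),
}

def prioritized_alerts(readings, thresholds):
    """Collapse raw readings into at most one alert per sensor,
    kept only if the worst reading breaches its threshold, and
    sorted so the highest-priority breach is seen first."""
    worst = {}
    for sensor, value in readings:
        worst[sensor] = max(worst.get(sensor, value), value)
    alerts = [
        (prio, sensor, worst[sensor])
        for sensor, (limit, prio) in thresholds.items()
        if worst.get(sensor, 0) > limit
    ]
    return sorted(alerts)

for prio, sensor, value in prioritized_alerts(readings, thresholds):
    print(f"P{prio}: {sensor} at {value}")  # -> P2: pit-volume at 118
```

Filtering and ranking like this is a design choice, not a free lunch: the thresholds themselves encode the "clearly defined processes" the text calls for, and they must come from the organization's own risk analysis.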
Finally, safety culture is an organizational responsibility that begins with lead-
ership. Everyone in an organization is responsible for committing to a safe working
environment that balances efficiency and productivity with a sensitivity to the needs
and skills of human workers. Leadership sets the tone for organizational culture, both
in its public brand and its internal decision-making, and is therefore responsible for
ensuring that the culture fosters effective risk management and safety understanding.

SOURCES
Aiena, B. J., Buchanan, E. M., Smith, C. V., & Schulenberg, S. E. (2016). Meaning, resilience, and traumatic stress after the Deepwater
Horizon oil spill: A study of Mississippi coastal residents seeking mental health services. Journal of Clinical Psychology, 72(12),
1264-1278.
Alavosius, M. P., Houmanfar, R. A., Anbro, S. J., Burleigh, K., & Hebein, C. (2017). Leadership and Crew Resource Management in
High-Reliability Organizations: A Competency Framework for Measuring Behaviors. Journal of Organizational Behavior
Management, 37(2), 142-170.
Amernic, J., & Craig, R. (2017). CEO speeches and safety culture: British Petroleum before the Deepwater Horizon disaster. Critical
Perspectives on Accounting, 47, 61-80.
Antonsen, S., Nilsen, M., & Almklov, P. G. (2017). Regulating the intangible. Searching for safety culture in the Norwegian
petroleum industry. Safety Science, 92, 232-240.
Bawden, D., & Robinson, L. (2009). The dark side of information: Overload, anxiety and other paradoxes and pathologies. Journal
of Information Science, 35(2), 180-191.
Bozeman, B. (2011). The 2010 BP Gulf of Mexico oil spill: Implications for theory of organizational disaster. Technology in Society,
33(3-4), 244-252.
Bye, R. J., Rosnes, R., & Røyrvik, J. O. D. (2016). ‘Culture’ as a tool and stumbling block for learning: The function of ‘culture’ in
communications from regulatory authorities in the Norwegian petroleum sector. Safety Science, 81, 68-80.
Fisher, C. R., Montagna, P. A., & Sutton, T. T. (2016). How did the Deepwater Horizon oil spill impact deep-sea ecosystems?
Oceanography, 29(3), 182-195.
Flournoy, A. C. (2011). Three Meta-Lessons Government and Industry Should Learn from the BP Deepwater Horizon Disaster
and Why They Will Not. Boston College Environmental Affairs Law Review, 38, 281-303. http://scholarship.law.ufl.edu/facultypub/271.
Grattan, L. M., Roberts, S., Mahan Jr., W. T., McLaughlin, P. K., Otwell, W. S., & Morris Jr., J. G. (2011). The early psychological
impacts of the Deepwater Horizon oil spill on Florida and Alabama communities. Environmental Health Perspectives, 119(6), 838.
IOGP. (2014). Crew Resource Management for Well Operations Teams. Well Experts Committee, Training, Competence & Human
Factors Task Force. International Association of Oil & Gas Producers.
Joye, S. B. (2015). Deepwater Horizon, 5 years on. Science, 349(6248), 592-593.
Joye, S. B., Bracco, A., Özgökmen, T. M., Chanton, J. P., Grosell, M., MacDonald, I. R., & Passow, U. (2016). The Gulf of Mexico
ecosystem, six years after the Macondo oil well blowout. Deep Sea Research Part II: Topical Studies in Oceanography, 129, 4-19.
Kongsvik, T., Gjøsund, G., & Vikland, K. M. (2016). HSE culture in the petroleum industry: Lost in translation? Safety
Science, 81, 81-89.
Mills, R. W., & Koliba, C. J. (2015). The challenge of accountability in complex regulatory networks: The case of the Deepwater
Horizon oil spill. Regulation & Governance, 9(1), 77-91.
National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling. (2011). Deep Water: The Gulf Oil Disaster and
the Future of Offshore Drilling.
Reader, T. W., & O’Connor, P. (2014). The Deepwater Horizon explosion: Non-technical skills, safety culture, and system
complexity. Journal of Risk Research, 17(3), 405-424.
Skogdalen, J. E., & Vinnem, J. E. (2011). Quantitative risk analysis offshore—human and organizational factors. Reliability Engineering
& System Safety, 96(4), 468-479.
Turner, B. A., & Pidgeon, N. F. (1997). Man-Made Disasters. Butterworth-Heinemann.
Weick, K. E., & Sutcliffe, K. M. (2015). Managing the Unexpected: Sustained Performance in a Complex World. Hoboken:
John Wiley & Sons.

ABOUT THE AUTHORS
SCOTT GADDIS
Scott Gaddis is Vice President, Global Practice Leader, Safety and Health at Intelex Technologies.
He has over 25 years of EHS leadership experience in the heavy manufacturing, pharmaceutical and
packaging industries. Before joining Intelex, Scott served as Vice President of EHS for Coveris High
Performance Packaging, was Executive Director of EHS at Bristol-Myers Squibb, and was Global Leader
for Occupational Safety and Health at Kimberly-Clark Corp.

GRAHAM FREEMAN
Graham Freeman is a technical writer and researcher. He is a Content Specialist at Intelex.

Disclaimer
This material provided by the Intelex Community and EHSQ Alliance is for informational purposes only.
The material may include notification of regulatory activity, regulatory explanation and interpretation,
policies and procedures, and best practices and guidelines that are intended to educate and inform you
with regard to EHSQ topics of general interest. Opinions are those of the authors, and do not necessarily
reflect the opinion of Intelex. The material is intended solely as guidance and you are responsible for any
determination of whether the material meets your needs. Furthermore, you are responsible for complying
with all relevant and applicable regulations. We are not responsible for any damage or loss, direct or
indirect, arising out of or resulting from your selection or use of the materials. Academic institutions can
freely reproduce this content for educational purposes.

About Intelex
Intelex Technologies is a Toronto, Canada-based provider of Environmental, Health & Safety, and Quality
(EHSQ) Management and workflow software for organizations of all sizes. The company is a leader in
software-as-a-service solutions and serves customers from across a wide range of industries, located
around the world. The Intelex platform is a mobile solution and provides integrated tools for front-line
EHSQ professionals. We can be found at www.intelex.com.
