Machine Learning and Artificial Intelligence As A Complement To Condition Monitoring in A Predictive Maintenance Setting
This paper was prepared for presentation at the SPE Oil and Gas India Conference and Exhibition held in Mumbai, India, 9-11 April 2019.
This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents
of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect
any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written
consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may
not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.
Abstract
In recent years, the oil and gas industry has gained greater operational efficiencies and productivity by
deploying advanced technologies, such as smart sensors, data analytics, artificial intelligence and machine
learning — all linked via Internet of Things connectivity. This transformation is profound, but just starting.
Leading offshore E&P operators envision using such applications to help drive their production costs to as
low as $7 per barrel or less. A large North Sea operator among them successfully deployed a low-manned
platform in the Ivar Aasen field in December 2016, operating it via redundant control rooms — one on the
platform, the other onshore 1,000 kilometers away in Trondheim, Norway. In January 2019, the offshore
control room operators handed over the platform's control to the onshore operators, and it is now managed
exclusively from the onshore one. One particular application — remote condition monitoring of equipment
— supports a proactive, more predictive condition-based maintenance program, which is helping to ensure
equipment availability, maximize utilization, and find ways to improve performance. This paper will explain
the use case in greater detail, including insights into how artificial intelligence and machine learning are
incorporated into this operational model. Also described will be the application of a closed-loop lifecycle
platform management model, using the concepts of digital twins from pre-FEED and FEED phases through
construction, commissioning, and an expected lifecycle spanning 20 years of operations. This paper updates one presented at the 2018 SPE Offshore Technology Conference (OTC), which introduced the use case in its 2017-18 operating model, before the platform's operations were handed over exclusively to its onshore control room.
Technology's increasing role in North Sea E&P holds lessons for the world
Despite deployments of sophisticated technologies, such as smart sensors, data analytics, artificial
intelligence and machine learning, all linked via Internet of Things (IoT) connectivity, the oil and gas
industry is just starting to realize the gains in efficiencies and productivity that these technologies can
provide. Other benefits can include significantly lower total cost of ownership, both capital and operational
expenses, and reduced health, safety, and environmental (HSE) concerns.
Data is the key to these benefits, of course. But, even today, data remains a largely untapped asset by E&P
operators around the world. One application with huge potential upside is the remote condition monitoring
2 SPE-194590-MS
(CM) of E&P production machinery and the use of CM to support a more proactive, even predictive,
condition-based maintenance (CBM) model. This paper provides insights to a remote CM-enabled CBM
model being used on a North Sea platform operating in the Ivar Aasen field off Norway's coast.
What especially distinguishes this CBM's CM approach is the creation of high-quality data through time-stamping of data events. This is an "on-change" data-sampling technique, event-driven by anomalies in equipment performance monitoring data. The data is sourced via the CM system's linkages with the electrical, instrumentation, control and telecom (EICT) systems built into the platform's topsides structure. With
time-stamping of events in the CM data streams, the platform's control room staff can identify the sequences
of events more precisely, helping them to diagnose issues faster and with greater certainty.
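The on-change technique described above can be sketched as a simple deadband filter: a sample is emitted, with its timestamp, only when the value moves outside a deadband around the last reported value. This is an illustrative sketch, not the platform's actual PLC logic; the class name, deadband width, and readings are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class OnChangeSampler:
    """Emit a time-stamped sample only when the value leaves the deadband
    around the last reported value -- event-driven, not cyclically polled."""
    deadband: float                      # minimum change worth reporting
    last: Optional[float] = None         # last reported value
    events: List[Tuple[float, float]] = field(default_factory=list)

    def sample(self, timestamp: float, value: float) -> bool:
        if self.last is None or abs(value - self.last) >= self.deadband:
            self.last = value
            self.events.append((timestamp, value))  # time-stamped at source
            return True                  # significant change: push it through
        return False                     # suppressed: still inside deadband

sampler = OnChangeSampler(deadband=0.5)
for t, v in [(0.0, 10.0), (1.0, 10.1), (2.0, 10.2), (3.0, 11.0), (4.0, 11.1)]:
    sampler.sample(t, v)
# Only the initial value and the jump to 11.0 exceed the deadband.
print(sampler.events)  # [(0.0, 10.0), (3.0, 11.0)]
```

Because only significant changes are stored, the sequence of retained timestamps directly reflects the sequence of noteworthy events.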
Notably, the platform's owner-operator decided early in its pre-FEED stage to sole-source the EICT. Its
EICT partner provided the company the means to employ a digital twin concept, which is essentially a
virtual proxy of the platform's physical assets. By employing digital twins across the remaining FEED,
construction, commissioning, and early operation stages that preceded first oil (produced in December
2016), the company's time savings helped to accelerate its time to first oil and, consequently, its cash flow
by $90 million.
By deploying greater digitalization capabilities and the use of asset virtual twins in its North Sea
platform's EICT, including the use of artificial intelligence (AI) for CM/CBM, the company has made
quantum progress in its strategy — as it has shared with the world's oil and gas investment community — to
reduce its per-barrel costs of offshore production to under $7. That way, shareholders can be assured of its
profitability in nearly every market pricing scenario. In addition, the Ivar Aasen platform's successful digitalization gives the company a baseline model for other planned E&P projects, helping to compress pre-FEED and
FEED stages, saving significant capital costs and time, while substantially reducing project risk.
The Ivar Aasen field in the North Sea holds an estimated 186 million barrels of oil equivalent. Conditions
for its offshore E&P operators are extreme: weather and sea conditions are among the world's harshest, with
storms and penetrating sea air and its corrosive salt content taking their toll on equipment and infrastructure.
Sitting on a sea floor about 110 meters below the ocean's surface, the company's platform draws from
seven wells. Total daily production is approximately 60,000 barrels, the facility's maximum output, which
is forecast to continue for a minimum of 20 years. The platform is sited 180 km off the Norwegian coast
and is 725 km south of the Arctic Circle.
When production began in December 2016, the company operated the platform using two redundant
control rooms. One was aboard; the other was onshore, 1,000 km distant in Trondheim, Norway. This
arrangement was the platform's standard operating procedure from startup through 2018. During
this time, the owner-operator and EICT supplier continuously improved the CM-based CBM concept. They
also thoroughly tested the remote control capabilities as well as the onshore control room's competency in
remotely monitoring and operating the platform both effectively and, most importantly, safely.
In January 2019, after the onshore control room had proven itself fully capable of meeting the company's
high standards of performance and safety, the onboard control room was deactivated, and exclusive
monitoring and operational control was given to the Trondheim facility. As a result, the company will start
to realize significant savings in costs and risks that come with staffing an offshore platform, especially so
far from land and in the North Sea. In case the platform-based control room is needed for future use, it
remains ready for activation.
Sole-sourcing the EICT package promised lower cost than bidding out the EICT functionality to different vendors. It would also eliminate the numerous and diverse EICT interfaces that engaging multiple suppliers would spawn, reducing overall project risk.
Not only did the operator sole-source its EICT package, but it also did so in the project's pre-FEED stage, so the supplier could participate in the earliest discussions and help drive digitalization into the design right from the start, which proved crucial to the result. The operator chose an EICT supplier with relevant expertise, plus a broad range of integrated solutions in the areas of electrification, automation, and digitalization. The supplier also brought decades of experience in oil and gas deployments around the world, particularly with the complex and rigorous demands of offshore operations.
The contractor's lead team of offshore oil and gas experts worked out of Oslo, Norway, but construction
took place in three key locations: the chassis, in Sardinia, Italy; the 15,000-ton topside deck, in Singapore;
and living quarters, in Stord, Norway. More than 5,000 people were involved across these locations. With
so many subcontractors, multiple project schedules, and interdependencies, project execution required
extremely precise coordination by the prime EICT contractor. This task also included the synchronization
and integration of software and hardware packages from 20 third-party suppliers.
EICT engineering work was also done and coordinated worldwide: basic design in Oslo; HMI (human-machine interface) systems in India; field instrumentation in Germany. The prime contractor
was also responsible for the fabrication and delivery of the platform's low-voltage motors and switchgear.
This equipment was built in a Norwegian factory and shipped to Singapore for topsides installation.
• UPS: Performance/batteries
Valves
• PSV monitoring
• Valve leakages
• Partial stroke
• Shutdown analysis
Rotating equipment
• Transmitter monitoring
• Computer monitoring
• Computers
Process
• Heat exchanger
While the digital twin approach saved time and enhanced project quality, it also provided considerable cost savings through better coordination of system interfaces. In addition, the digital twin model, combined with greater standardization, facilitated integration and auto-generation across the project's automation, digital, and tools domains.
The platform's fully-integrated EICT package was built using a proven process control system and
programming code libraries refined and drawn from years of use in oil and gas applications. Included were
electrical distribution systems, field instrumentation, and telecommunications. The EICT software stack
was programmed using the OSIsoft PI Asset Framework. This application gathers real-time operational
data from hundreds of sensors on the platform's rotating equipment, electrical equipment, instrumentation,
automation controls, valves, and process monitors. (Refer to the sidebar for a listing of monitored functions.)
Platform data is transmitted by means of redundant, highly secure links via a fiber-optic network on the
North Sea's floor that most North Sea E&P platform operators use. Cybersecurity was and remains a top
concern, of course. That's why a defense-in-depth model was employed, based on ISO 27001 and 27002 and the ISA/IEC 62443 data security standards, among the world's most rigorous.
The digital twin concept enabled the operator to attain operational stability of the platform and first
oil substantially sooner than a traditional, multivendor approach would allow. Weeks, if not months, were
saved. Exhaustive onshore factory-acceptance testing of all third-party systems before final assembly also
helped to save time.
The sole-source EICT approach made training much simpler, too. Operations and maintenance personnel
didn't need to learn the different interfaces from many vendors, just one. In addition, virtual commissioning
used a process simulator that helped to train personnel much earlier versus waiting for the physical
infrastructure to be commissioned. Also, loop-tuning functionality in the digital twin simulations helped to
determine high quality control-loop parameters in a test environment before the platform's start-up.
Given this platform's complexities — globally sourced design, engineering, construction, deployment
and commissioning — the actual savings in time are hard to measure. Nonetheless, it is conservative to
estimate that at least 30 days were saved compared to a traditional project approach that would not have
sole-sourced the EICT nor used the digital twin concept. As such, the company fast-tracked its time-to-cash
and to breakeven on its massive investment. So, by starting production a month faster than a conventional
project approach, the company was able to pull in $90 million in accelerated cash flow, based on a 60,000-
bpd production capacity at $50 per barrel.
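The accelerated cash-flow figure follows directly from the production and price assumptions stated above; a quick arithmetic check:

```python
bpd = 60_000       # barrels per day, the platform's production capacity
price = 50         # USD per barrel, the price assumed above
days_saved = 30    # conservative estimate of schedule acceleration

print(bpd * price * days_saved)  # 90000000 -> $90 million
```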
Going forward, the company anticipates being able to further reduce the platform's total cost of ownership
(TCO) significantly by using the digital twins of its various physical assets to continually improve their
performance and capabilities. A large part of this is the result of following a first principle in the EICT
design: standardize all platform components, modules, and systems to the greatest extent possible. This
way, spare parts requirements and maintenance expenses are expected to be dramatically lower.
Appendix I illustrates some of the many dashboard displays accessible by the Trondheim control room
staff. The KPI data that are shown come from hundreds of CM telemetric signals streaming or batched
from the platform's EICT systems. These are analyzed using analytic software provided by the EICT prime
contractor. The KPI data — from both specific signals and their combinations — are matched to baseline
equipment operating signatures.
The analytics software, powered by AI/ML pattern-recognition capabilities, is used to uncover
deviations. These could represent anomalous asset behaviors or be the result of changing environmental
conditions. They might also be a sign that the parameter set points need adjusting. Whatever the case, this
is where operator and OEM human experience, expertise, and judgement are needed.
For example, Figure 1 shows a data graph with the functionality of a deep LSTM (Long Short-Term
Memory) autoencoder. This network has been trained to detect anomalies in a machine monitored by 29 analogue sensors. The autoencoder can differentiate between anomalies and healthy signals because it is trained on healthy data only. The resulting model is unable to reproduce the anomalous part of the signal, so a healthy KPI can be calculated by comparing the model's input with its output.
Figure 1—Data graph from an AI-enabled LSTM auto-encoder algorithm used for CM/CBM.
One of the main benefits of using an LSTM auto-encoder over other anomaly-detection algorithms is that it not only detects anomalies but also specifies exactly where in the data set each anomaly is and what its healthy state should be. Here is an explanation of the graphic subplots:
Subplot 4: Difference/delta
The difference between the input and output of this signal (and the other 28 signals) can be used to calculate
a healthy KPI profile for a machine's parameters. An increase in this value indicates a faulty state of the given signal and can be used to trigger an alert to human operators for further investigation.
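The Subplot 4 delta calculation can be illustrated on its own: given the autoencoder's input and its reconstruction, the per-sample difference acts as the health indicator, and exceeding a threshold triggers an operator alert. The reconstruction values below are stubbed in for illustration; in the actual system they would come from the trained LSTM autoencoder.

```python
def delta_kpi(inputs, reconstruction, threshold):
    """Per-sample |input - reconstruction| delta, as in Subplot 4.
    Samples whose delta exceeds the threshold are flagged as anomalous,
    because a model trained only on healthy data cannot reproduce faults."""
    deltas = [abs(x - r) for x, r in zip(inputs, reconstruction)]
    alerts = [i for i, d in enumerate(deltas) if d > threshold]
    return deltas, alerts

# Stub data for one sensor channel: the autoencoder reproduces healthy
# samples closely but fails to reproduce the anomalous spike at index 3.
signal         = [1.00, 1.02, 0.98, 2.50, 1.01]
reconstruction = [1.01, 1.00, 0.99, 1.05, 1.00]

deltas, alerts = delta_kpi(signal, reconstruction, threshold=0.2)
print(alerts)  # [3] -> index of the sample flagged for investigation
```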
The data graphs in Appendix II provide another illustration of how KPI data can be used to reset a KPI's operational parameters should changing environmental conditions require doing so. One data graph shows
the analytical output of a valve's discharge pressure compared to the parameters of its expected flow. The
deviation triggered an operator alert, which spurred root-cause research by the Trondheim control room
operators. Their investigation combined the analytical findings based on the AI/ML algorithms with human
intelligence, seasoned with veteran operator experience.
Ultimately, the investigation revealed that the valve's normal operating parameter set points had to be reset, and a control room technician was able to change them remotely. This parameter modification will make the CM/CBM model's algorithms "smarter," so human operators can recognize abnormal valve behaviors
in the future with much greater precision. The event and associated activities were then labeled and noted
in maintenance logs for reference, if later needed.
Part of the EICT prime contractor's added value was and is its vast library of baseline operating signatures
for all its rotating equipment models. That's in addition to its large global installed base of the same equipment, deployed in different geographies and industries around the world. This information is invaluable in helping
to develop and teach AI/ML algorithms their initial "lessons." It also provides a knowledge base to be tapped
for diagnostics and effective mitigation and remediation of operating issues.
A programmable logic controller (PLC) pushes the data out via a gateway to data storage — on change only. Because the PLC features a deadband, it can assess whether any data changes are substantial enough to be of note and, if so, time-stamp the data and send it through.
In addition, data values get time-stamped as near to their source as possible. Ideally, that occurs at the
equipment location on the platform or onshore well site. Otherwise, the first PLC receiving the signal does the time-stamping. In this way, time-stamping is as precise as the logic cycle time of the PLC, on the order of microseconds to milliseconds.
Time-stamping also allows for specific platform equipment data from the PLCs to be sent through the
gateway for processing and storage versus having data pre-selected, as traditional polling requires. That's
because it takes tremendous computing power to poll the 100,000+ objects on a typical platform in cycles of
1–5 seconds. It also reflects why polling-based systems require users to pre-select specific tags for polling.
Otherwise, they risk overloading their SCADA or other distributed control system networks.
Another on-change time-stamping feature is time synchronization across all sensing units. This can help
the platform control room staff to spot variations as well as associated events and be able to note when they
occurred (e.g., first, second, third and so on). They can also distinguish correlation and causation, so they
can find the root cause or causes of anomalous behaviors in the various operating equipment.
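With clocks synchronized across all sensing units, reconstructing the platform-wide sequence of events reduces to merging the per-PLC event streams by timestamp. A minimal sketch; the device names and events are invented for illustration:

```python
import heapq

# Each PLC reports time-stamped events; clocks are synchronized, so the
# merged order reflects the true sequence across the whole platform.
plc_a = [(12.003, "PLC-A", "pump vibration high")]
plc_b = [(12.001, "PLC-B", "valve position deviation")]
plc_c = [(12.007, "PLC-C", "discharge pressure drop")]

# Merge the (already time-ordered) streams into one global sequence.
sequence = list(heapq.merge(plc_a, plc_b, plc_c))
for rank, (t, device, event) in enumerate(sequence, start=1):
    print(rank, t, device, event)
# The valve deviation at t=12.001 comes first, pointing to it as the
# likely root cause rather than a downstream symptom.
```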
Time-stamping provides operators the means to diagnose problems much more quickly. Time-stamped
data lets control room operators apply analytics with greater precision to draw better insights into deviant
machine behaviors. Of course, faster problem resolutions can minimize production disruptions or even
prevent them, if the early-warning signs can be addressed sooner.
Another plus: Sometimes data can vary from expectations due to a shift in environmental conditions but
still within the functional parameters originally set according to the baseline signatures of the various KPIs.
With time-stamped data, better decisions can be made as to whether or not to mitigate or remediate an issue
— or keep running the subject equipment until the next planned platform outage. Sometimes, too, set points
of deployed equipment may need to be reset, and time-stamped data can help in making those decisions, as
well. (See Appendix II describing this latter situation.)
The greater precision of time-stamping compared to conventional polling techniques can help operators
evaluate specific problems with actuators or other sensed components to prevent catastrophic failures. Take,
for example, an open/close valve in one of an offshore platform's systems. Although technicians aboard the
platform can physically assess a problematic valve's performance to identify and prevent its failure, doing
so is more difficult with remote CM — especially if conventional data polling methods are used.
Why? Because the data samples would not be accurate enough to conduct a valve performance evaluation
remotely. Time-stamping addresses this issue. A related reason is that valves may only move every few
months, so the most revealing data to detect an actual or impending fault would only be generated when the
valve moves. Put another way, the data captured using conventional sampling would be similar to evaluating
the performance of a door while it is open or shut, but not while it's moving.
Finally, time-stamping's precision as an integral part of the operator's remote CM/CBM system helps the
company comply with strict Norwegian regulations governing the maintenance of the oil and gas metering
system aboard the platform. This system helps the company calculate the revenue split for project partners
and government taxes, which is why calibration is mandated each year.
With the system having 280 instruments, manual calibration used to take nearly a year of one engineer's
time. Today, with CM/CBM deployed, regulators have signed off on the extra precision of the metering
system's remotely monitored instrumentation, allowing the company to perform manual calibration every
three years, instead of annually. This frees an engineering resource for more valuable tasks. The savings may be small relative to overall operating expenses, but an offshore E&P operator looking to drive production costs under $7 a barrel looks for gains wherever it can find them.
How CM/CBM can provide remote diagnostic services for offshore rotary
equipment
An offshore (and onshore) CM/CBM model can incorporate remote diagnostic services (RDS) for rotating
equipment. These assets are among the most complex and expensive in the E&P toolkit and include
compressor trains, either single units or entire fleets in different geographic locales. Because they are such
complex machinery, their utilization rates and availability are critical to production performance. And,
though compressor trains are designed, engineered, and built to be reliable and fault-tolerant, downtime
can cause costly process disruptions and potentially undermine health, safety, and environmental (HSE)
compliance.
Implementing a CM/CBM model for a compressor starts with programming the KPI signatures of a
healthy profile into its digital twin, along with all the required algorithms to support that healthy status.
Then, the KPI operating data, as previously explained, can be monitored and compared with the digital
twin's baseline signature to identify deviations that could turn into problems. By preventing trips and forced
outages via early detection of potential faults and preventive remediation, compressor availability could be
boosted by as much as three percent annually — equivalent to approximately 11 days each year.
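The comparison against the digital twin's baseline signature can be sketched as a per-KPI tolerance check, and the availability figure follows from simple arithmetic (0.03 × 365 ≈ 11 days). The KPI names, values, and tolerances below are illustrative, not from the actual system:

```python
# Baseline KPI signature from the compressor's digital twin, with a
# tolerance band per KPI (all values illustrative).
baseline = {
    "discharge_pressure_bar": (92.0, 2.0),   # (expected, tolerance)
    "vibration_mm_s":         (2.1, 0.5),
    "bearing_temp_C":         (68.0, 4.0),
}

def deviations(observed):
    """Return the KPIs whose observed value falls outside the band."""
    return [k for k, (exp, tol) in baseline.items()
            if abs(observed[k] - exp) > tol]

observed = {"discharge_pressure_bar": 91.2,
            "vibration_mm_s": 3.0,           # outside the 2.1 +/- 0.5 band
            "bearing_temp_C": 69.5}
print(deviations(observed))  # ['vibration_mm_s']

# A 3% availability gain over a year, in days:
print(round(0.03 * 365))  # 11
```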
This extra uptime not only reduces forced outages and their subsequent costs, but over a compressor train's
expected 20-year lifecycle, it can lower the equipment's total cost of ownership (TCO). What's more, another
TCO advantage can arise from extended component lifecycles via more proactive, even predictive, maintenance approaches that employ AI-enabled ML technologies.
What follows explains the data paths and workflows for RDS conducted in a CM/CBM model and using
IEC 62443 and other rigorous cybersecurity standards. First, the compressor train data is sent fully encrypted
from the well site to a monitoring facility, typically owned and operated by the OEM providing the RDS.
It could also be tied into an owner-operator's remote-control room.
Either way, the data gets decrypted on arrival. It's then anonymized and placed in parallel redundant
databases. The latter sit behind next-generation firewalls, which deliver added security. Backup and disaster
recovery are also provided. Next, the data is run through an advanced analytical agent programmed
to evaluate compressor performance. This agent compares the compressor's digital twin performance
parameters with the incoming KPI operating data. It then uses data analytics and pattern recognition software
to sift through time-stamped events in the data, searching for anomalous behaviors.
To help monitoring engineers visualize the data, the software produces heat maps that show
the compressor performance curves in addition to other data-graph representations. These enhance
understanding of the KPIs and their interrelations as well as provide operating insights into specific
compressors. If deviations are detected, the OEM's compressor experts can be alerted, to look into root
causes, investigating not only the compressor but its entire train and its driver, whether a steam or gas turbine
or an electric motor.
Should a likely cause be found, the OEM alerts the compressor's operator with a risk assessment and
recommended actions. Some cases may require immediate remediation to avoid a forced outage. Other cases
may allow the issue to be resolved during a planned outage. To minimize downtime, if spares are needed,
replacement parts can be taken from inventory. And, if they are unavailable, they can be dispatched in time
to arrive for the remediation service activity.
With RDS incorporated into a CM/CBM model, the OEM's deep compressor train knowledge and remote support combine with the operator's extensive onsite operational experience in a true collaboration. For
example, in remediating a potential or actual problem, the OEM engineers in the RDS monitoring centers can
securely gain online access to the compressor train's HMI. They can then watch and advise the maintenance
staff on the proper procedures by way of a web conference or phone call — in effect, "over-the-shoulder"
engineering support. This support can reduce mean-time-to-repair (MTTR) cycles, minimizing outages and
their costs.
Acronyms
AI : Artificial Intelligence
CM : Condition Monitoring
CBM : Condition-Based Maintenance
E&P : Exploration & Production
EICT : Electrical, Instrumentation, Control and Telecom
ESP : Electrical Submersible Pump
FEED : Front-End Engineering Design
HSE : Health, Safety, and Environment
IoT : Internet of Things
ML : Machine Learning
SCADA : Supervisory Control and Data Acquisition
References
Gruss, Alec. Artificial Intelligence: Steps to Transforming Offshore E&P for Vastly Improved Business Outcomes. Presented at the Offshore Technology Conference (OTC), 30 April–3 May 2018.
Brandes, Jürgen. Harnessing Data Effectively to Develop a Low-Manned Platform in a Remote, North Sea Operating Environment. Presented at the Offshore Technology Conference (OTC), 30 April–3 May 2018.
Appendix I
Representative Dashboards for Offshore Remote Condition Monitoring
Alarm Reports
Alarm Management is an alarm report based on KPIs. The application shows pre-calculated 10-minute slices of alarm statistics against KPI limits. The KPIs include average alarm rate, peak alarm rate, average standing alarms, average shelved alarms, top alarms, alarm priority distribution, and operator actions (acknowledge/shelve).
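The 10-minute slicing can be sketched as bucketing alarm timestamps and computing per-slice statistics; the timestamps and resulting rates below are invented for illustration:

```python
from collections import Counter

SLICE_S = 600  # 10-minute slices, in seconds

def alarm_rate_slices(alarm_times):
    """Count alarms per 10-minute slice; return average and peak rate."""
    slices = Counter(int(t // SLICE_S) for t in alarm_times)
    rates = list(slices.values())
    return sum(rates) / len(rates), max(rates)

# Timestamps (seconds) of raised alarms over a half hour.
alarms = [30, 95, 610, 620, 640, 700, 1300]
avg, peak = alarm_rate_slices(alarms)
print(avg, peak)  # average 7/3 alarms per slice, peak of 4 in one slice
```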
Running Hours
Running hours is a dashboard that keeps track of the running hours and minutes of pumps, motors,
feeders, aggregators, fans, compressors, and other connected equipment. It can be set to issue maintenance notifications if equipment has been running for a specific set number of hours or has started and stopped a specific set number of times.
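A running-hours tracker of this kind can be sketched as follows; the thresholds and the pump's start/stop times are invented for illustration:

```python
class RunningHoursTracker:
    """Track running hours and start counts for one equipment item,
    issuing a maintenance notification at either threshold."""
    def __init__(self, hours_limit, starts_limit):
        self.hours_limit, self.starts_limit = hours_limit, starts_limit
        self.running_hours = 0.0
        self.starts = 0
        self._started_at = None

    def start(self, t_hours):
        self._started_at = t_hours
        self.starts += 1

    def stop(self, t_hours):
        self.running_hours += t_hours - self._started_at
        self._started_at = None

    def maintenance_due(self):
        return (self.running_hours >= self.hours_limit
                or self.starts >= self.starts_limit)

pump = RunningHoursTracker(hours_limit=8000, starts_limit=3)
for begin, end in [(0, 3000), (3100, 6200), (6300, 6400)]:
    pump.start(begin)
    pump.stop(end)
print(pump.running_hours, pump.starts, pump.maintenance_due())
# 6200.0 hours over 3 starts -> due (start-count threshold reached)
```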
Shutdown Verification
Shutdown Verification displays verification of ESD, PSD or F&G shutdowns as well as effects and
connected feedback objects. The Cause and Effect application is based on cause and effect sheets. Users
select a folder and all cause and effect sheets in this folder are loaded into the application. The cause-and-
effect sheets employ a predefined template. Users may subscribe to notifications from the applications, so if
a shutdown is detected, the user immediately gets an email with the Shutdown Verification report attached.
The Blocked, Override, Suppress (BOS) report displays all block, override, and suppress events in a given time frame. The overview section shows a summary of what is active now and how long equipment has been in the given state.
Valve Monitoring
The Valve Monitoring application dashboard shows all events for operating valves. The prerequisite is that
the valve has one or more limit switches. Valve events are calculated in the PLC where possible; otherwise PIMAQ will calculate activation time (command → valve leaves start position), travel time (valve leaves start position → valve reaches final position), and total time (activation time + travel time) based on state timestamps. Automatic configuration detects new valves in the historian and adds them to the application automatically. Limit values are fetched from the PLC, but may be
overwritten in the application. It is also possible to add limits not used in the PLC, such as the minimum
travel time (for larger valves that should not move too quickly).
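The three timing KPIs computed from state timestamps can be sketched directly; the timestamps below are invented for illustration:

```python
def valve_times(t_command, t_leaves_start, t_reaches_end):
    """Compute the three valve KPIs from time-stamped state changes:
    activation: command -> valve leaves start position
    travel:     leaves start -> valve reaches final position
    total:      activation + travel"""
    activation = t_leaves_start - t_command
    travel = t_reaches_end - t_leaves_start
    return activation, travel, activation + travel

# Time-stamps (seconds) from the limit switches for one open command.
activation, travel, total = valve_times(100.00, 100.35, 103.85)
print(activation, travel, total)
```

A travel time creeping above its limit between these infrequent movements is exactly the kind of deviation the dashboard is designed to surface.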
Appendix II
Analytics Data Graph
The data graph shows the analytical output of a valve's discharge pressure versus its expected flow parameters. This prompted an investigation that combined the analytical findings, based on AI/ML algorithms, with human intelligence, including veteran operator experience. The investigation determined that the set points defining the valve's normal operating parameters needed adjusting, so they were changed remotely. This parameter adjustment will help make the CBM model "smarter" and enable operators to identify abnormal valve behaviors more accurately in the future. The event was then labeled and noted in maintenance logs for future reference. The adjusted parameter is depicted on the next page.
The data graph shows the unit's parameters after the remote adjustment, with the flow now within the new parameters' bounds.