
SPE-194590-MS

Machine Learning and Artificial Intelligence as a Complement to Condition


Monitoring in a Predictive Maintenance Setting

Stig Settemsdal, Siemens

Copyright 2019, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Oil and Gas India Conference and Exhibition held in Mumbai, India, 9-11 April 2019.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents
of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect
any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written
consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may
not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
In recent years, the oil and gas industry has gained greater operational efficiencies and productivity by
deploying advanced technologies, such as smart sensors, data analytics, artificial intelligence and machine
learning — all linked via Internet of Things connectivity. This transformation is profound, but just starting.
Leading offshore E&P operators envision using such applications to help drive their production costs to as
low as $7 per barrel or less. A large North Sea operator among them successfully deployed a low-manned
platform in the Ivar Aasen field in December 2016, operating it via redundant control rooms — one on the
platform, the other onshore 1,000 kilometers away in Trondheim, Norway. In January 2019, the offshore
control room operators handed over the platform's control to the onshore operators, and it is now managed
exclusively from the onshore one. One particular application — remote condition monitoring of equipment
— supports a proactive, more predictive condition-based maintenance program, which is helping to ensure
equipment availability, maximize utilization, and find ways to improve performance. This paper will explain
the use case in greater detail, including insights into how artificial intelligence and machine learning are
incorporated into this operational model. Also described will be the application of a closed-loop lifecycle
platform management model, using the concepts of digital twins from pre-FEED and FEED phases through
construction, commissioning, and an expected lifecycle spanning 20 years of operations. It is derived from
an update to a paper presented at the 2018 SPE Offshore Technology Conference (OTC) that introduced the
use case in its 2017-18 operating model, but that was before the debut of the platform's exclusive monitoring
of its operations by its onshore control room.

Technology's increasing role in North Sea E&P holds lessons for the world
Despite deployments of sophisticated technologies, such as smart sensors, data analytics, artificial
intelligence and machine learning, all linked via Internet of Things (IoT) connectivity, the oil and gas
industry is just starting to realize the gains in efficiencies and productivity that these technologies can
provide. Other benefits can include significantly lower total cost of ownership, both capital and operational
expenses, and reduced health, safety, and environmental (HSE) concerns.
Data is the key to these benefits, of course. But, even today, data remains largely untapped by E&P
operators around the world. One application with huge potential upside is the remote condition monitoring
(CM) of E&P production machinery and the use of CM to support a more proactive, even predictive,
condition-based maintenance (CBM) model. This paper provides insights to a remote CM-enabled CBM
model being used on a North Sea platform operating in the Ivar Aasen field off Norway's coast.
What is especially distinctive about this CBM's CM approach is the creation of high-quality data through
the time-stamping of data events. This is an "on-change" data-sampling technique that is event-driven by
anomalies in equipment performance monitoring data. This data is sourced via the CM system's linkages
with electrical, instrumentation, control and telecom (EICT) built into the platform's topsides structure. With
time-stamping of events in the CM data streams, the platform's control room staff can identify the sequences
of events more precisely, helping them to diagnose issues faster and with greater certainty.
Notably, the platform's owner-operator decided early in its pre-FEED stage to sole-source the EICT. Its
EICT partner provided the company the means to employ a digital twin concept, which is essentially a
virtual proxy of the platform's physical assets. By employing digital twins across the remaining FEED,
construction, commissioning, and early operation stages that preceded first oil (produced in December
2016), the company's time savings helped to accelerate its time to first oil and, consequently, its cash flow
by $90 million.
By deploying greater digitalization capabilities and the use of asset virtual twins in its North Sea
platform's EICT, including the use of artificial intelligence (AI) for CM/CBM, the company has made
quantum progress in its strategy — as it has shared with the world's oil and gas investment community — to
reduce its per-barrel costs of offshore production to under $7. That way, shareholders can be assured of its
profitability in nearly every market pricing scenario. In addition, the Ivar Aasen platform's successful digitalization
gives the company a baseline model for other planned E&P projects, helping to compress pre-FEED and
FEED stages, saving significant capital costs and time, while substantially reducing project risk.
The Ivar Aasen field in the North Sea holds an estimated 186 million barrels of oil equivalent. Conditions
for its offshore E&P operators are extreme: weather and sea conditions are among the world's harshest, with
storms and penetrating sea air and its corrosive salt content taking their toll on equipment and infrastructure.
Sitting on a sea floor about 110 meters below the ocean's surface, the company's platform draws from
seven wells. Total daily production is approximately 60,000 barrels, the facility's maximum output, which
is forecast to continue for a minimum of 20 years. The platform is sited 180 km off the Norwegian coast
and is 725 km south of the Arctic Circle.
When production began in December 2016, the company operated the platform using two redundant
control rooms. One was aboard; the other was onshore, 1,000 km distant in Trondheim, Norway. This
arrangement was the platform's standard operating procedure from startup through 2018. During
this time, the owner-operator and EICT supplier continuously improved the CM-based CBM concept. They
also thoroughly tested the remote control capabilities as well as the onshore control room's competency in
remotely monitoring and operating the platform both effectively and, most importantly, safely.
In January 2019, after the onshore control room had proven itself fully capable of meeting the company's
high standards of performance and safety, the onboard control room was deactivated, and exclusive
monitoring and operational control was given to the Trondheim facility. As a result, the company will start
to realize significant savings in costs and risks that come with staffing an offshore platform, especially so
far from land and in the North Sea. In case the platform-based control room is needed for future use, it
remains ready for activation.

How greater digitalization in EICT supports CM


To enable CM first, then CBM using CM, the platform's owner-operator made the strategic decision to
digitalize its operations as fully as today's most advanced technologies could make possible.
Of course, a platform's EICT capabilities provide the foundation for digitalization, and the operator
may want to avoid the inherent procurement, integration, and support challenges that would come with
bidding out the EICT functionality to different vendors. Sole-sourcing also would eliminate the numerous
and diverse EICT interfaces that engaging multiple suppliers would spawn. Overall, project risk would also
be reduced.
Not only did the operator sole-source its EICT package, but it also did so in the project's pre-FEED stage,
so the supplier could participate in the earliest discussions and help drive digitalization into the design
right from the start, which proved crucial to the result. It chose an EICT supplier with relevant expertise,
plus a broad range of integrated solutions in the areas of electrification, automation, and digitalization. The
supplier brought decades of experience in oil and gas deployments around the world, particularly with the
complex and rigorous demands of offshore operations.
The contractor's lead team of offshore oil and gas experts worked out of Oslo, Norway, but construction
took place in three key locations: the chassis, in Sardinia, Italy; the 15,000-ton topside deck, in Singapore;
and living quarters, in Stord, Norway. More than 5,000 people were involved across these locations. With
so many subcontractors, multiple project schedules, and interdependencies, project execution required
extremely precise coordination by the prime EICT contractor. This task also included the synchronization
and integration of software and hardware packages from 20 third-party suppliers.
EICT engineering work was also done and coordinated worldwide: EICT basic design, in Oslo;
HMI (human-machine interface) systems, in India; field instrumentation, in Germany. The prime contractor
was also responsible for the fabrication and delivery of the platform's low-voltage motors and switchgear.
This equipment was built in a Norwegian factory and shipped to Singapore for topsides installation.

Using the "digital twin" concept to reach first oil sooner


By combining advanced software tools with traditional CAD and CAE software applications, the EICT
supplier's engineering groups could link the project's FEED and commissioning stages far more closely. In
fact, they could perform some of the latter virtually using the digital twin of a physical asset, such as rotating
equipment. This way, they could focus on addressing key issues much sooner, even months sooner, when
problems are easier and less costly to solve. This also reduced the potential for expensive change orders
later that could disrupt the project's many interdependent, complex schedules.

Remotely monitored parameters at the Ivar Aasen North Sea platform


Electrical equipment

• UPS: Performance/batteries

• Transformers: Winding temp/hot spot/load

• VSD: Temperatures/fans/max-min current

• HV Switchgear: Gas condition/unbalance/temperatures/currents

• LV Switchgear: Earth fault/phase loss/current unbalance/wear

• HV/LV Breaker: Unbalance/breaking current/number of operations

Valves

• Valve monitoring (movement/operations)

• PSV monitoring

• Valve leakages

• Partial stroke

• Shutdown analysis

• Control valve monitoring

Rotating equipment

• Vibration / temperature monitoring

• Performance monitoring (pumps/compressors)

Instrument and automation

• Instrument diagnosis (Hart)

• Transmitter monitoring

• ICSS diagnosis: loops, modules, power supplies, servers, etc.

• Computer monitoring

• Network monitoring: switches/cables

• Cabinet monitoring: temperatures/MCBs/power supplies/fans

• Telecom systems monitoring

• Computers

Process

• Control loop monitoring

• Heat exchanger

While the digital twin approach saved time and enhanced project quality, it also provided considerable
cost savings through enhanced coordination of system interfaces. In addition, the digital twin
model facilitated tighter integration and auto-generation across the project's automation, digital, and
tools domains through greater standardization efforts.
The platform's fully-integrated EICT package was built using a proven process control system and
programming code libraries refined and drawn from years of use in oil and gas applications. Included were
electrical distribution systems, field instrumentation, and telecommunications. The EICT software stack
was programmed using the OSIsoft PI Asset Framework. This application gathers real-time operational
data from hundreds of sensors on the platform's rotating equipment, electrical equipment, instrumentation,
automation controls, valves, and process monitors. (Refer to the sidebar for a listing of monitored functions.)
Platform data is transmitted by means of redundant, highly secure links via a fiber-optic network on the
North Sea's floor that most North Sea E&P platform operators use. Cybersecurity was and remains a top
concern, of course. That's why a defense-in-depth model, based on the ISO 27001 and 27002 and ISA/IEC 62443
data security standards, among the world's most rigorous, was employed.
The digital twin concept enabled the operator to attain operational stability of the platform and first
oil substantially sooner than a traditional, multivendor approach would allow. Weeks, if not months, were
saved. Exhaustive onshore factory-acceptance testing of all third-party systems before final assembly also
helped to save time.

The sole-source EICT approach made training much simpler, too. Operations and maintenance personnel
didn't need to learn the different interfaces from many vendors, just one. In addition, virtual commissioning
used a process simulator that helped to train personnel much earlier versus waiting for the physical
infrastructure to be commissioned. Also, loop-tuning functionality in the digital twin simulations helped to
determine high quality control-loop parameters in a test environment before the platform's start-up.
Given this platform's complexities — globally sourced design, engineering, construction, deployment
and commissioning — the actual savings in time are hard to measure. Nonetheless, it is conservative to
estimate that at least 30 days were saved compared to a traditional project approach that would not have
sole-sourced the EICT nor used the digital twin concept. As such, the company fast-tracked its time-to-cash
and to breakeven on its massive investment. So, by starting production a month faster than a conventional
project approach, the company was able to pull in $90 million in accelerated cash flow, based on a 60,000-
bpd production capacity at $50 per barrel.
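The cash-flow arithmetic above can be checked directly. The sketch below uses only the figures stated in the text (60,000 bpd, $50 per barrel, roughly 30 days saved); the function name is an illustrative assumption.

```python
# Quick check of the accelerated cash flow figure, using only the
# numbers stated in the text: 60,000 bpd, $50 per barrel, ~30 days saved.
def accelerated_cash_flow(bpd: float, price_per_bbl: float, days_saved: float) -> float:
    """Gross revenue pulled forward by reaching first oil sooner."""
    return bpd * price_per_bbl * days_saved

print(f"${accelerated_cash_flow(60_000, 50.0, 30) / 1e6:.0f} million")  # prints $90 million
```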
Going forward, the company anticipates being able to further reduce the platform's total cost of ownership
(TCO) significantly by using the digital twins of its various physical assets to continually improve their
performance and capabilities. A large part of this is the result of following a first principle in the EICT
design: standardize all platform components, modules, and systems to the greatest extent possible. This
way, spare parts requirements and maintenance expenses are expected to be dramatically lower.

Condition-based maintenance: More proactive and more cost-saving


In choosing a CM-enabled CBM approach right at the start of the project, early in its FEED phase,
engineering teams could collaborate in bringing together many diverse and advanced technologies. Among
them were: smart sensors; real-time data collection and time-stamping; long-distance, IoT connectivity
using fiber optics; artificial intelligence and machine learning via advanced analytics; and many others.
By combining these technologies in new ways, they were able to go far beyond just doing maintenance
cheaper and faster. Instead, they applied them to enable both CM and CBM, so the operator could perform
maintenance better and smarter. In other words, they envisioned and brought to the platform a more
proactive and even predictive maintenance model, enhanced by AI and ML. It's one that can reduce costs and
production disruptions by preventing equipment failures and, moreover, by predicting them and addressing
them through mitigation and remediation.
For offshore platforms, CM-enabled CBM is a practical and much needed maintenance model. That's
particularly true in the North Sea. The marine environment constantly exposes topsides infrastructure
and equipment to highly penetrating and extremely corrosive salt air. And that doesn't count the heavy
mechanical stresses that all platform machinery, especially rotating equipment, must also endure in E&P
operations. It's what makes maintenance even more critical to platform productivity, asset availability, and
ultimately, profitability.
CBM can be performed according to monitored equipment's operational condition, not according to set
schedules as maintenance has conventionally been done. If an asset is operating inside its normal parameters
and shows no signs of behavioral anomalies, then maintenance intervals can be extended until abnormal
behavior patterns indicate maintenance is needed. This will substantially reduce the costs of labor and parts.
On the other hand, CM could suggest possible trouble ahead of a planned shutdown, so a control room's
technicians and engineers can start investigating root causes. They can then dispatch preventive maintenance
work orders for the particular equipment. This can help avoid the costs and consequences of unplanned
downtime or production slowdowns. Either way, CBM is helping the Ivar Aasen platform operator to
perform maintenance in a timely, planned manner, reducing expensive helicopter transports, equipment
downtime, and production disruptions.
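The condition-based decision described above can be sketched in a few lines. All names, scores, and the threshold below are illustrative assumptions, not values from the Ivar Aasen deployment.

```python
# Sketch of the condition-based decision: extend the maintenance interval
# while an asset stays inside its normal envelope, and raise a preventive
# work order when an anomaly score crosses a threshold.
from dataclasses import dataclass

@dataclass
class AssetReading:
    tag: str
    anomaly_score: float  # 0.0 = matches the healthy baseline signature

def maintenance_action(reading: AssetReading, alert_threshold: float = 0.8) -> str:
    """Map a monitored asset's anomaly score to a maintenance decision."""
    if reading.anomaly_score >= alert_threshold:
        return "dispatch preventive work order"
    return "extend maintenance interval"
```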

AI and machine learning, a brief overview


To understand how AI works in a CBM model, it helps to know more about AI itself and an application of
it called machine learning (ML). In short, AI is the name given to software algorithms that can be taught to
do the kinds of work that typically require human intellect and learning abilities. It draws from such
disciplines as logic, statistics, and probabilities, as well as neuroscience and psychology.
One AI example is pattern recognition. It can involve finding data exceptions and differences, such
as platform equipment performance anomalies. Other uses include discovering patterns in
images, speech, or music. One popular web app helps users to identify the name of a song by letting
their smartphones "listen" to a few measures of it. AI powers this app, as it does various online inference
engines that recommend products and services to their customers. Yet another example is the AI-based
fraud detection capabilities that credit card companies use to uncover unusual spending patterns in their
cardholder accounts.
Although AI has been envisioned for centuries and operational for decades, it hasn't been until recent
years that diverse tech trends have converged to make AI practical. Among those trends are the huge
reductions in the cost and space required for powerful, parallel-processing computers, once known as
supercomputers and now available via the web as cloud-based high-performance computing on a pay-as-you-
go basis. Then there's super-fast and secure broadband connectivity. Increasingly, it is wireless and has
enabled "things to talk to things" — the so-called Internet of Things (IoT).
In addition, data and software technologies have advanced dramatically, enabling "smart," self-calibrating
sensors to relay the KPIs of operating offshore platform equipment to next-level systems and control rooms,
even onshore ones like the Ivar Aasen platform control room 1,000 km away. Even more, with the cloud-
based platforms available today, offshore E&P operators can access advanced analytics applications with
AI algorithms. The latter can analyze incoming data for anomalies that effectively provide an early-warning
system in the monitored assets.
This kind of AI application is known as machine learning (ML). Using statistical analysis and
increasing amounts of data as evidence, the algorithm can determine and report the probability of a
hypothetical event occurring.
In turn, as it applies to CM, this ML capability is linked to neural network models to, in effect, "learn"
the highly dynamic, relational behaviors of complex systems and their components. Once AI-based ML
algorithms are authored and deployed, they first must be trained with baseline operating data, such as a
rotating machine's factory-tested vibration signature, then "fueled" with actual operating data to
become ever smarter and more capable at their work assignments.

How AI/ML-enabled CM and CBM works


In January 2019, after deactivating the platform's onboard control room, the Ivar Aasen platform operator's
onshore control room assumed exclusive responsibility for the 24×7 operation of its offshore assets. Staff
members use an identical control room to the one on the platform but located in Trondheim, Norway, a city
of 185,000 about 500 km north of Oslo.
That's where their technical experts make use of visual dashboards on desktop PCs that graphically
display detailed information about the CM parameters listed in the sidebar earlier in this paper. Depending
on their role-based access privileges, various company stakeholders can view these displays, enabling them
to support the CBM model with oversight and engineering support.
In addition, the operator's remote operating model gives other OEM suppliers access to their respective
CM data from their Ivar Aasen platform equipment assets. This aids them in providing support for the
platform's maintenance work and any needed diagnostics and over-the-shoulder equipment maintenance or
repair on the platform itself.

Appendix I illustrates some of the many dashboard displays accessible by the Trondheim control room
staff. The KPI data that are shown come from hundreds of CM telemetric signals streaming or batched
from the platform's EICT systems. These are analyzed using analytic software provided by the EICT prime
contractor. The KPI data — from both specific signals and their combinations — are matched to baseline
equipment operating signatures.
The analytics software, powered by AI/ML pattern-recognition capabilities, is used to uncover
deviations. These could represent anomalous asset behaviors or be the result of changing environmental
conditions. They might also be a sign that the parameter set points need adjusting. Whatever the case, this
is where operator and OEM human experience, expertise, and judgement are needed.
For example, Figure 1 shows a data graph illustrating the functionality of a deep LSTM (Long Short-Term
Memory) autoencoder. This network has been trained to detect anomalies in a machine using 29 analogue
sensors. The autoencoder is able to differentiate between anomalies and healthy signals; this is done by
training the model on healthy data only. The result is a model that is unable to reproduce the anomalous part
of the signal, allowing a health KPI to be calculated by comparing the input of the model with the output.

Figure 1—Data graph from an AI-enabled LSTM auto-encoder algorithm used for CM/CBM.

One of the main benefits of using an LSTM auto-encoder over other anomaly detection algorithms is that
it not only detects anomalies but also specifies exactly where in the data set the anomaly is
and what the healthy state should be. Here is an explanation of the graphic subplots:

Subplot 1: Reference & anomaly signal


The reference signal is the original, healthy signal. The anomaly signal is the same signal with an
artificial anomaly injected to validate the AI/ML model. This (along with another 28 KPI signals) is fed into
the model.

Subplot 2: Model input & output


This subplot shows how the model affects the signal. Red is input; green is output. Note that the output of
the model is very close to the original healthy version of the signal. In other words, the model is only able
to reproduce the healthy representation of the signal.

Subplot 3: Reference & output


This plot shows how accurately the model can reproduce the healthy state of the signal. The reference plot
is the original, healthy version of the signal. Note that this signal is not fed into the model in this case.
The predicted plot is the output of the model when it is fed the anomaly signal. Note that the difference
between the two is small.

Subplot 4: Difference/delta
The difference between the input and output of this signal (and the other 28 signals) can be used to calculate
a health KPI profile for a machine's parameters. An increase in this value indicates a faulty state of the
given signal and can be used to trigger an alert to human operators for further investigation.
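The delta calculation described in Subplot 4 can be sketched as follows. This is a simplified illustration under stated assumptions: the model output is stubbed as the healthy baseline (standing in for a trained deep LSTM autoencoder, which by construction reproduces only healthy behavior), and the array shapes, threshold, and names are illustrative.

```python
# Sketch of the health-KPI (difference/delta) calculation from Subplot 4.
# A real deployment would feed signals through a trained deep LSTM
# autoencoder; here the reconstruction is stubbed as the healthy baseline,
# which mirrors the key property: the model cannot reproduce anomalies.
import numpy as np

def health_kpi(model_input: np.ndarray, model_output: np.ndarray) -> np.ndarray:
    """Mean absolute reconstruction error per time step across all sensors.

    Arrays have shape (time_steps, n_sensors), e.g. n_sensors = 29
    as in the example in the text.
    """
    return np.mean(np.abs(model_input - model_output), axis=1)

def anomaly_mask(kpi: np.ndarray, threshold: float) -> np.ndarray:
    """Time steps where the delta is large enough to alert an operator."""
    return kpi > threshold

# Toy illustration: a healthy (flat) 29-sensor signal with an injected anomaly.
healthy = np.zeros((100, 29))
anomalous = healthy.copy()
anomalous[40:50, 3] += 5.0   # artificial anomaly injected on sensor 3
reconstruction = healthy     # stub: model reproduces only the healthy state
kpi = health_kpi(anomalous, reconstruction)
```

The mask localizes the anomaly to time steps 40 through 49, reflecting the paper's point that the method specifies where in the data set the anomaly sits.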
The data graphs in Appendix II provide another illustration of how KPI data can be used to reset a KPI's
operational parameters should changing environmental conditions require doing so. One data graph shows
the analytical output of a valve's discharge pressure compared to the parameters of its expected flow. The
deviation triggered an operator alert, which spurred root-cause research by the Trondheim control room
operators. Their investigation combined the analytical findings based on the AI/ML algorithms with human
intelligence, seasoned with veteran operator experience.
Ultimately, the investigation revealed that the valve's normal operating parameter set points had to be
reset, and a control room technician was able to change them remotely. This parameter modification will make
the CM/CBM model's algorithms "smarter," so human operators can recognize abnormal valve behaviors
in the future with much greater precision. The event and associated activities were then labeled and noted
in maintenance logs for reference, if later needed.
Part of the EICT prime contractor's added value was and is its vast library of baseline operating signatures
for all its rotating equipment models. That's in addition to its large global base of the same equipment,
deployed in different geographies and industries around the world. This information is invaluable in helping
to develop and teach AI/ML algorithms their initial "lessons." It also provides a knowledge base to be tapped
for diagnostics and effective mitigation and remediation of operating issues.

Unique time-stamping tracks data "on-change" and sequences performance events


To gain greater and more precise insights into performance issues, the platform employs an efficient "on-
change" event-driven data collection technology that provides time-stamping of performance-related events.
It's an exclusive feature of the EICT contractor's monitoring software and used to support the platform's
CM-based CBM model. As such, it ensures data quality and helps control room staff in their analyses of
abnormalities to establish correlations spanning different equipment groups.
Before explaining time-stamping and its value in CM/CBM, a review of conventional CM data sampling
is in order. Typically, the latter involves polling sensor data or process controller data at different time
intervals, such as seconds, minutes or hours, depending on the function that a sensor is monitoring. While
this methodology might identify one anomaly, it can overlook other ones. And, if multiple events are
recorded in a single polling sample, their sequence can be difficult to discern, much less any cause and
effect between or across events.
Time-stamping is different. It's a more sophisticated data acquisition protocol, which contextualizes the
sensor data in controller software. Rather than polling a sensor's data, the equipment's programmable logic
controller (PLC) pushes the data out via a gateway to the data storage database, on change
only. Because the PLC features a deadband, it can assess whether any data changes are substantial enough
to be of note and, if so, time-stamp the data and send it through.
In addition, data values get time-stamped as near to their source as possible. Ideally, that occurs at the
equipment location on the platform or onshore well site. Otherwise, the first PLC receiving the signal
does the time-stamping. In this way, the time-stamping is done with as much precision as the logic cycle
time of the PLC, in micro- to milliseconds.
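The on-change deadband behavior described above might be sketched as follows. In the real system this logic runs on the PLC and stamps at logic-cycle precision; the class and method names here are illustrative assumptions.

```python
# Sketch of "on-change" deadband reporting: a value is time-stamped and
# forwarded only when it moves outside a deadband around the last
# reported value; everything else is suppressed at the source.
import time
from typing import Optional, Tuple

class OnChangeReporter:
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_reported: Optional[float] = None

    def sample(self, value: float,
               timestamp: Optional[float] = None) -> Optional[Tuple[float, float]]:
        """Return (timestamp, value) if the change is significant, else None."""
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            self.last_reported = value
            return (timestamp if timestamp is not None else time.time(), value)
        return None  # change is inside the deadband: nothing is pushed
```

Because nothing is emitted between significant changes, the downstream gateway and historian only ever see time-stamped, meaningful events rather than a flood of polled samples.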
Time-stamping also allows all platform equipment data from the PLCs to be sent through the
gateway for processing and storage, versus having data pre-selected, as traditional polling requires. That's
because it takes tremendous computing power to poll the 100,000+ objects on a typical platform in cycles of
1–5 seconds. It also reflects why polling-based systems require users to pre-select specific tags for polling.
Otherwise, they risk overloading their SCADA or other distributed control system networks.
Another on-change time-stamping feature is time synchronization across all sensing units. This can help
the platform control room staff to spot variations as well as associated events and be able to note when they
occurred (e.g., first, second, third and so on). They can also distinguish correlation and causation, so they
can find the root cause or causes of anomalous behaviors in the various operating equipment.
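The sequence-of-events ordering that synchronized time-stamping enables can be illustrated with a minimal sketch; the event tags and timestamps below are invented for illustration.

```python
# Sketch of sequence-of-events analysis: because every sample is stamped
# at its source with synchronized clocks, events from different equipment
# groups can be ordered precisely, separating the earliest (first-out)
# event, a root-cause candidate, from its knock-on effects.
events = [
    ("HV breaker trip",     1547550123.412),  # (event tag, source timestamp, s)
    ("pump vibration high", 1547550123.205),
    ("flow deviation",      1547550123.977),
]

# Order by source timestamp to recover the true sequence of events.
ordered = sorted(events, key=lambda e: e[1])
first_out = ordered[0][0]  # earliest event across equipment groups
```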
Time-stamping provides operators the means to diagnose problems much more quickly. Time-stamped
data lets control room operators apply analytics with greater precision to draw better insights into deviant
machine behaviors. Of course, faster problem resolutions can minimize production disruptions or even
prevent them, if the early-warning signs can be addressed sooner.
Another plus: Sometimes data can vary from expectations due to a shift in environmental conditions yet
still fall within the functional parameters originally set according to the baseline signatures of the various KPIs.
With time-stamped data, better decisions can be made as to whether or not to mitigate or remediate an issue
— or keep running the subject equipment until the next planned platform outage. Sometimes, too, set points
of deployed equipment may need to be reset, and time-stamped data can help in making those decisions, as
well. (See Appendix II describing this latter situation.)
The greater precision of time-stamping compared to conventional polling techniques can help operators
evaluate specific problems with actuators or other sensed components to prevent catastrophic failures. Take,
for example, an open/close valve in one of an offshore platform's systems. Although technicians aboard the
platform can physically assess a problematic valve's performance to identify and prevent its failure, doing
so is more difficult with remote CM — especially if conventional data polling methods are used.
Why? Because the data samples would not be accurate enough to conduct a valve performance evaluation
remotely. Time-stamping addresses this issue. A related reason is that valves may only move every few
months, so the most revealing data to detect an actual or impending fault would only be generated when the
valve moves. Put another way, the data captured using conventional sampling would be similar to evaluating
the performance of a door while it is open or shut, but not while it's moving.
Finally, time-stamping's precision as an integral part of the operator's remote CM/CBM system helps the
company comply with strict Norwegian regulations governing the maintenance of the oil and gas metering
system aboard the platform. This system helps the company calculate the revenue split for project partners
and government taxes, which is why calibration is mandated each year.
With the system having 280 instruments, manual calibration used to take nearly a year of one engineer's
time. Today, with CM/CBM deployed, regulators have signed off on the extra precision of the metering
system's remotely monitored instrumentation, allowing the company to perform manual calibration every
three years, instead of annually. This frees an engineering resource for more valuable tasks. The resource
savings may be small compared to overall operating expenses, but when an offshore E&P operator
is looking to drive production costs under $7 a barrel, it looks for gains wherever it can find them.
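As a back-of-envelope illustration, assuming the full 280-instrument campaign takes roughly one engineer-year (as stated above) and that effort per campaign stays constant, the move from annual to triennial calibration frees about two-thirds of an engineer-year annually:

```python
# Back-of-envelope estimate of engineering time freed by the longer
# calibration interval. The one-engineer-year campaign effort comes
# from the text; treating it as constant per campaign is an assumption.
campaign_effort_years = 1.0   # effort for one full 280-instrument campaign
old_interval_years = 1        # calibration mandated annually
new_interval_years = 3        # regulator-approved interval with CM/CBM

old_annual_effort = campaign_effort_years / old_interval_years
new_annual_effort = campaign_effort_years / new_interval_years
freed = old_annual_effort - new_annual_effort
print(round(freed, 2))        # 0.67 engineer-years freed per year
```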

How CM/CBM can provide remote diagnostic services for offshore rotary
equipment
An offshore (and onshore) CM/CBM model can incorporate remote diagnostic services (RDS) for rotating
equipment. These assets are among the most complex and expensive in the E&P toolkit and include
compressor trains, either single units or entire fleets in different geographic locales. Because they are such
complex machinery, their utilization rates and availability are critical to production performance. And,
though compressor trains are designed, engineered, and built to be reliable and fault-tolerant, downtime
can cause costly process disruptions and potentially undermine health, safety, and environmental (HSE)
compliance.
Implementing a CM/CBM model for a compressor starts with programming the KPI signatures of a
healthy profile into its digital twin, along with all the required algorithms to support that healthy status.
Then, the KPI operating data, as previously explained, can be monitored and compared with the digital
twin's baseline signature to identify deviations that could turn into problems. By preventing trips and forced
outages via early detection of potential faults and preventive remediation, compressor availability could be
boosted by as much as three percent annually — equivalent to approximately 11 days each year.
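The baseline comparison can be sketched as follows; the KPI names, values, and tolerance are hypothetical, and a production system would use the full algorithm set programmed into the digital twin:

```python
def kpi_deviation(baseline, measured, tolerance_pct):
    """Flag KPI samples deviating from the digital twin's baseline
    signature by more than a percentage tolerance (simplified sketch)."""
    flags = []
    for name, base in baseline.items():
        value = measured[name]
        deviation_pct = abs(value - base) / base * 100.0
        flags.append((name, round(deviation_pct, 1),
                      deviation_pct > tolerance_pct))
    return flags

# Illustrative compressor-train KPIs (assumed names and values):
baseline = {"discharge_pressure_bar": 120.0,
            "bearing_temp_c": 75.0,
            "vibration_mm_s": 2.0}
measured = {"discharge_pressure_bar": 118.5,
            "bearing_temp_c": 83.0,
            "vibration_mm_s": 2.3}

for name, dev, alarm in kpi_deviation(baseline, measured, tolerance_pct=5.0):
    print(name, dev, "DEVIATION" if alarm else "ok")

# The claimed 3% availability gain corresponds to roughly 11 days/year:
print(round(0.03 * 365))  # 11
```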
This extra uptime not only reduces forced outages and their subsequent costs, but over a compressor train's
expected 20-year lifecycle, it can lower the equipment's total cost of ownership (TCO). What's more, another
TCO advantage can arise from extended component lifecycles enabled by more proactive, even predictive,
maintenance approaches that employ AI-enabled ML technologies.
What follows explains the data paths and workflows for RDS conducted within a CM/CBM model under
IEC 62443 and other rigorous cybersecurity standards. First, the compressor train data is sent fully encrypted
from the well site to a monitoring facility, typically owned and operated by the OEM providing the RDS.
It could also be tied into an owner-operator's remote-control room.
Either way, the data gets decrypted on arrival. It's then anonymized and placed in parallel redundant
databases. The latter sit behind next-generation firewalls, which deliver added security. Backup and disaster
recovery are also provided. Next, the data is run through an advanced analytical agent programmed
to evaluate compressor performance. This agent compares the compressor's digital twin performance
parameters with the incoming KPI operating data. It then uses data analytics and pattern recognition software
to sift through time-stamped events in the data, searching for anomalous behaviors.
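As a simplified stand-in for that pattern recognition, a rolling statistical scan over time-stamped KPI samples can flag anomalous behavior; the window size, threshold, and injected step change below are illustrative assumptions, not the OEM's actual method:

```python
import statistics

def anomalies(samples, window=5, z_limit=3.0):
    """Rolling z-score scan over (timestamp, value) KPI samples,
    flagging points far outside the recent history (sketch only)."""
    hits = []
    for i in range(window, len(samples)):
        ts, value = samples[i]
        history = [v for _, v in samples[i - window:i]]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # guard zero spread
        if abs((value - mean) / stdev) > z_limit:
            hits.append(ts)
    return hits

# Steady alternating signal with one injected step change at t=20:
samples = [(t, 100.0 + (0.5 if t % 2 else -0.5)) for t in range(20)]
samples.append((20, 112.0))
print(anomalies(samples))     # [20]
```

Only the injected step change is flagged; the normal alternation stays well inside the threshold.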
To help monitoring engineers visualize the data, the software produces heat maps that show
the compressor performance curves in addition to other data-graph representations. These enhance
understanding of the KPIs and their interrelations as well as provide operating insights into specific
compressors. If deviations are detected, the OEM's compressor experts can be alerted to investigate root
causes, examining not only the compressor but its entire train and its driver, whether a steam or gas turbine
or an electric motor.
Should a likely cause be found, the OEM alerts the compressor's operator with a risk assessment and
recommended actions. Some cases may require immediate remediation to avoid a forced outage. Other cases
may allow the issue to be resolved during a planned outage. To minimize downtime, if spares are needed,
replacement parts can be taken from inventory. And, if they are unavailable, they can be dispatched in time
to arrive for the remediation service activity.
With RDS incorporated into a CM/CBM model, the OEM's deep compressor train knowledge and remote
support combine with the operator's extensive onsite operational experience in a true collaboration. For
example, in remediating a potential or actual problem, the OEM engineers in the RDS monitoring centers can
securely gain online access to the compressor train's HMI. They can then watch and advise the maintenance
staff on the proper procedures by way of a web conference or phone call — in effect, "over-the-shoulder"
engineering support. This support can reduce mean-time-to-repair (MTTR) cycles, minimizing outages and
their costs.

Conclusion: E&P digitalization today and tomorrow


In the Ivar Aasen use case, the CM-enabled CBM approach — now operating from a single onshore control
room 1,000 km away from the offshore platform — has been proven to enhance reliability, availability,
and utilization of its monitored assets. Importantly, operational safety has been increased by minimizing
platform maintenance staff and the associated risks and costs of flying them to and from the platform.
What's more, a CM/CBM model helps lower TCO further by optimizing spare-part inventories, with
fewer just-in-case parts. Conventional inventory approaches would tie up capital that can now be freed for
more productive, value-adding purposes.
Even with the platform now operating offshore, the "as-is" digital twin is continually updated and upgraded,
with the addition of new features, functionality, algorithms, and capabilities. Spanning the entire platform,
its equipment, and its processes, the platform's digital twin, which was developed and refined during FEED,
construction, and commissioning phases, will remain the physical facility's virtual counterpart for the latter's
entire lifecycle, an expected 20 years. And during this time, engineers with ideas to improve operations can
test them on the platform's digital twin before committing them to deployment.
Going forward, the platform's digital twin also offers the operator a template to prototype new offshore
platforms quickly and at far less cost than this first one. All it takes is time and effort to tailor the base
model's engineering to variations in requirements, both operational and environmental, such as varying
depths, different seabed topologies, and wellhead numbers. Templates like these can reduce E&P project
cycle times and project risks dramatically. Then, once a new platform is operating, its own digital twin can
be used to improve asset utilization and availability.
Today, the digitalization technologies described in this paper, such as AI-enabled ML and CM/CBM and
the digital twin concept, are poised to transform the E&P industry and its operations, both offshore and
onshore. Early adopters will gain a competitive advantage. Tomorrow, digitalization will be a competitive
necessity.

Acronyms
AI : Artificial Intelligence
CM : Condition Monitoring
CBM : Condition-Based Maintenance
E&P : Exploration & Production
EICT : Electrical, Instrumentation, Control and Telecom
ESP : Electrical Submersible Pump
FEED : Front-End Engineering Design
HSE : Health, Safety, and Environment
IoT : Internet of Things
ML : Machine Learning
SCADA : Supervisory Control and Data Acquisition


Appendix I
Representative Dashboards for Offshore Remote Condition Monitoring

Daily Operations Report

This information can be exported to Excel or PDF templates.



Daily Operations Report continued



Alarm Reports

Alarm Management is an alarm report based on KPIs. The application shows pre-calculated 10-minute
slices of alarm statistics against KPI limits. The KPIs include average alarm rate, peak alarm rate, average
standing alarms, average shelved alarms, top alarms, alarm priority distribution, and operator actions
(acknowledge/shelve).
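The 10-minute slicing can be sketched as follows; the timestamps are illustrative and only the count-based KPIs are shown:

```python
from collections import defaultdict

def alarm_rate_per_slice(alarm_timestamps_s, slice_s=600):
    """Bucket alarm timestamps (seconds) into 10-minute slices and
    count alarms per slice, as the report pre-calculates."""
    slices = defaultdict(int)
    for ts in alarm_timestamps_s:
        slices[ts // slice_s] += 1
    return dict(slices)

# Timestamps in seconds since the start of a shift (illustrative):
alarms = [30, 45, 610, 615, 620, 1250]
rates = alarm_rate_per_slice(alarms)
print(rates)            # {0: 2, 1: 3, 2: 1}
peak = max(rates.values())
print(peak)             # 3
```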

Running Hours

Running Hours is a dashboard that keeps track of the running hours and minutes of pumps, motors,
feeders, aggregators, fans, compressors, and other connected equipment. It can be set to issue maintenance
notifications when equipment has been running for a specific set number of hours or has started and stopped
a specific set number of times.
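A minimal sketch of that notification rule, with assumed limit values:

```python
def maintenance_due(running_hours, start_count,
                    hours_limit=8000, starts_limit=500):
    """Return which maintenance triggers have fired for a unit,
    mirroring the dashboard's notification rules (limits assumed)."""
    triggers = []
    if running_hours >= hours_limit:
        triggers.append("running-hours limit reached")
    if start_count >= starts_limit:
        triggers.append("start/stop count limit reached")
    return triggers

print(maintenance_due(8120, 310))   # ['running-hours limit reached']
print(maintenance_due(2000, 510))   # ['start/stop count limit reached']
print(maintenance_due(2000, 100))   # []
```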

Shutdown Verification

Shutdown Verification displays verification of ESD, PSD or F&G shutdowns as well as effects and
connected feedback objects. The Cause and Effect application is based on cause and effect sheets. Users
select a folder and all cause-and-effect sheets in this folder are loaded into the application. The cause-and-
effect sheets employ a predefined template. Users may subscribe to notifications from the application, so if
a shutdown is detected, the user immediately receives an email with the Shutdown Verification report attached.

Block, Override and Suppress

The Blocked, Override, Suppress (BOS) report displays all block, override, and suppress events in a given
time frame. The overview section shows a summary of what is active now and how long equipment has
been in the given state.

Valve Monitoring

The Valve Monitoring application dashboard shows all events for operating valves. The prerequisite is that
the valve has one or more limit switches. Valve events are calculated in the PLC where possible; otherwise,
PIMAQ calculates activation time (command → valve leaves start position), travel time (valve leaves
start position → valve reaches final position), and total time (activation time + travel time) based on
state timestamps. Automatic configuration detects new valves in the historian and adds them to the
application automatically. Limit values are fetched from the PLC but may be overwritten in the application.
It is also possible to add limits not used in the PLC, such as a minimum travel time (for larger valves that
should not move too quickly).
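These timings fall directly out of the state timestamps; the command and limit-switch times below are illustrative values, not platform data:

```python
from datetime import datetime

def valve_times(command_ts, leave_ts, arrive_ts):
    """Compute activation, travel, and total time from valve state
    timestamps, as described for PIMAQ (sketch; real inputs come
    from the PLC and limit switches)."""
    activation = (leave_ts - command_ts).total_seconds()
    travel = (arrive_ts - leave_ts).total_seconds()
    return {"activation_s": activation,
            "travel_s": travel,
            "total_s": activation + travel}

cmd    = datetime(2019, 1, 15, 8, 0, 0)          # open command issued
leave  = datetime(2019, 1, 15, 8, 0, 1, 800000)  # leaves closed position
arrive = datetime(2019, 1, 15, 8, 0, 9, 300000)  # reaches open position

times = valve_times(cmd, leave, arrive)
print(times)

# Checking a minimum-travel-time limit (for large valves that must
# not move too quickly); the 5-second limit is an assumed value:
min_travel_s = 5.0
print(times["travel_s"] >= min_travel_s)   # True
```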

Appendix II
Analytics Data Graph

The data graph shows the analytical output of a valve's discharge pressure versus its expected flow parameters.
This prompted an investigation that combined human intelligence, including veteran operator experience,
with the analytical findings based on AI's machine-learning algorithms. The investigation determined that
the set points defining the valve's normal operating parameters needed adjusting, so they were remotely
changed. This adjustment will help make the CBM model "smarter" and enable operators to identify
abnormal valve behaviors more accurately in the future. The phenomenon was labeled and noted in
maintenance logs for future reference, and the adjusted parameter is depicted on the next page.

The data graph shows the unit's parameters after being remotely adjusted, with the flow now within the
new parameters' bounds.
