
Three Actions You Should Take (But Probably Do Not) With Your Process Data

JUNE 2016

Canary Labs: Store, Visualize, and Analyze Your Process Data
info@canarylabs.com
(814) 793-3770

Jeff Knepper, Executive Director of Sales
jknepper@canarylabs.com


The Canary data historian, like most other historians, holds more analytical potential today than it did just a decade ago. In recent years, data historians have become process knowledge powerhouses, transforming operations and fundamentally changing the way time-series data is interpreted. Yet despite this more robust technology, few companies have taken the steps needed to realize their data historian's full potential.

Thirty-one years ago, the Canary enterprise historian was built around an extremely fast and secure proprietary database that streamlined process data storage. Although it reliably recorded time-series data, the number of personnel who could benefit from the plant historian was limited by poor visualization technologies, network constraints, and lower CPU performance. Technological advancements have since eroded these limitations, resulting in both a broader user base and increased functionality. The Canary data historian now also features Axiom trending software, alarm and notification packages, reporting suites with integrated Excel add-ins, asset management functionality, and a variety of connectors allowing data to flow in or out of the historian as needed.

Although system functionality has greatly increased, few end users have taken the necessary steps to fully leverage these new capabilities. Most likely, regardless of which data historian you use, you are failing to do so as well, and it isn't entirely your fault. Across the industry, most engineers, supervisors, and operators are either working double-time to meet spikes in demand or handling duties outside their typical job descriptions to reduce cost. The bottom line? You are probably too busy to ensure that you are doing everything you should with the golden nuggets currently hiding inside your process historian.

By implementing these three best practices, you can begin to better apply your historian’s capabilities and
identify at-risk equipment, increase efficiency, and lessen downtime.

Use Alarming as an Asset Management Tool


Alarming has become a standard tool available in data historian offerings, but
are you maximizing its potential? Most companies leverage alarming software
as a notification service, setting tag limits and receiving text or email alerts if
that limit is reached. Does this sound familiar? If so, you can liken this
approach to making a grocery run in a Ferrari 458 Speciale, painfully inching
along at 35 miles per hour the entire way. Will it get you to the market and
back home? Sure, but you will never appreciate all the performance of its 570
horsepower V8. Similarly, the Canary alarming software will certainly notify you
of a high/low limit event, but using it only for notifications neglects its
powerful asset management capabilities. Take your asset management to the
next level by following this best practice.

First, identify your asset and the group of tags that will serve as performance indicators. For instance, if you wanted to manage a compressor, you might monitor 10 to 20 points including vibrations, temperatures, flows, and pressures.

Ensure each tag is clearly labeled so you can easily identify and relate the tag to the asset. Then establish the normal operational thresholds for each data point. Note that these will probably be considerably “tighter” than typical notification points: focus less on critical values and more on ideal operating values. With the Canary software, you can now set alarm boundaries at the top and bottom of these ideal thresholds for each data point. You can also create logic rules within your alarm. For instance, you may only be worried about crankcase vibration if it reaches a certain level and maintains that level for more than 5 seconds. Or you may only want to alarm a temperature reading if a pressure reading is beyond a certain limit. Finally, define the relationship between tag alarms and asset alarms. Do three separate temperature alarms cause your asset to alarm? Or maybe a combination of one pressure and one vibration alarm would be a better indicator. You determine the logic and the rules. Remember, you are not adding notification services; these alarms will run in the background and will not interrupt your process. If you have similar assets across the process, copy this template and apply it to them as well. Continue this process and construct asset and tag alarms for your entire asset portfolio.
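The tag-level rules and asset-level rollup described above can be sketched in code. This is an illustrative example only, not Canary's configuration syntax; the function names, the 1.0 in/s vibration limit, and the "two active tag alarms" rollup rule are assumptions for demonstration.

```python
def sustained_high(samples, limit, hold_seconds):
    """Tag-level rule: True if the value stays above `limit`
    continuously for at least `hold_seconds`.
    `samples` is a list of (timestamp_seconds, value) pairs."""
    start = None
    for t, value in samples:
        if value > limit:
            if start is None:
                start = t            # excursion begins
            if t - start >= hold_seconds:
                return True
        else:
            start = None             # excursion ended; reset the timer
    return False

def asset_in_alarm(tag_alarms, required=2):
    """Asset-level rule: alarm when at least `required` tag alarms
    are active (e.g., one pressure plus one vibration alarm)."""
    return sum(1 for active in tag_alarms.values() if active) >= required

# Crankcase vibration must stay above 1.0 in/s for more than 5 seconds
vibration = [(0, 1.2), (2, 1.3), (4, 1.25), (6, 1.22)]
vib_alarm = sustained_high(vibration, limit=1.0, hold_seconds=5)  # True
```

The same pattern extends naturally: a temperature rule can be gated on a pressure reading, and asset rules can combine any mix of tag alarms you define.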

Now, the easy part. Let your process run for 30 to 90 days. At the end of this period, employ the alarming software's analytics and review your assets. Questions that may arise include: Which assets alarmed most often? Were any operational thresholds set too tight or too loose? Do any alarm rules need refinement?

Take this new information and make educated decisions. Adjust alarm points as needed, and repeat the process. Continue to reorder your tag groups, refine your operational thresholds, and adjust alarm rules until you feel comfortable with the results.

Advanced application
Once you feel the alarm levels are dialed in and you have run several “trial and error” applications, apply
alarming to historical data. Review the findings and try to identify which assets you expect to need repairs or
replacements based on your alarm indicators. Now compare your predictive findings to the next six months of
work orders. How often did a work order correlate to an asset in alarm? Once you have validated your results,
you can begin to confidently perform more preventative maintenance and part replacement.
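One way to run that validation, sketched below under assumed data shapes (an alarm-event export and a work-order export; the field names and six-month window are hypothetical), is to compute the fraction of work orders that were preceded by an alarm on the same asset:

```python
from datetime import date, timedelta

def alarm_hit_rate(alarm_events, work_orders, lead_days=180):
    """Fraction of work orders preceded by an alarm on the same asset
    within `lead_days`. Field names here are illustrative."""
    if not work_orders:
        return 0.0
    hits = 0
    for wo in work_orders:
        window_start = wo["date"] - timedelta(days=lead_days)
        # A hit: any alarm on this asset inside the lead window
        if any(ev["asset"] == wo["asset"]
               and window_start <= ev["date"] <= wo["date"]
               for ev in alarm_events):
            hits += 1
    return hits / len(work_orders)

events = [{"asset": "compressor-1", "date": date(2016, 3, 1)}]
orders = [{"asset": "compressor-1", "date": date(2016, 4, 15)},  # preceded by alarm
          {"asset": "pump-7", "date": date(2016, 4, 15)}]        # no prior alarm
rate = alarm_hit_rate(events, orders)  # 0.5
```

A high hit rate builds confidence in the alarm configuration; a low one points back to the threshold-tuning loop described earlier.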

How many downtime disruptions, safety issues, profit losses, or environmental concerns could be eliminated by being more predictive and less reactive in your asset management strategy?

Use Calculated Trends to Monitor Efficiency and Cost Savings


The Canary data historian provides an extensive calculated trending tool, allowing users to configure complex calculations that can include any number of monitored data tags. Often this feature is used by plant operations to convert temperatures, determine power ratios, estimate efficiency, and better guide the process. However, one of the most beneficial uses of this tool involves stepping outside of the Operations mindset and thinking more in line with Accounting.

Every month, quarter, and year end, the CFO, controller, or plant accountant runs a series of mathematical equations specifically designed to identify profitability, efficiency, and return on investment. The results are shared with CEOs, VPs, and upper management, and reviewed in offices and boardrooms. How often do these results make it to the control operator's desk? Probably never, but shouldn't they? Include your control and operation engineers in the accounting process to unlock this best practice.

Since the control and operation engineers are at the heart of the process, wouldn't it be prudent to ensure everyone has a better understanding of process efficiency? Using the Canary calculated trend feature, efficiency calculations can be added directly to the trend charts for each piece of operating equipment in your facility. You can easily and quickly transform every control room into a real-time profit monitoring center. The best part? This requires very little time and no financial investment.

The calculated trend tool can be launched from any Axiom trending chart and comes preloaded with a variety of mathematical operations, including Minimum/Maximum, Absolute Value, Sine, Cosine, Tangent, Arcsine, Arccosine, Arctangent, Square Root, and many others. Trends loaded onto the chart can also be included in any formula, and you are not limited in character length.
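As an illustration of the kind of formula a calculated trend can hold, the sketch below computes pump hydraulic efficiency from flow, head, and input power using the standard water-power relationship. The function and variable names are ours, not the tool's syntax, and the sample values are made up.

```python
G = 9.81          # gravitational acceleration, m/s^2
RHO_WATER = 1000  # density of water, kg/m^3

def pump_efficiency(flow_m3_s, head_m, power_kw):
    """Hydraulic efficiency = fluid power delivered / electrical power in."""
    fluid_power_kw = RHO_WATER * G * flow_m3_s * head_m / 1000.0
    return fluid_power_kw / power_kw

# Applied point-by-point to aligned flow, head, and power trends:
flows = [0.050, 0.060]    # m^3/s
heads = [30.0, 30.0]      # m
powers = [20.0, 24.0]     # kW
efficiency_trend = [pump_efficiency(q, h, p)
                    for q, h, p in zip(flows, heads, powers)]
```

A calculated trend like this, plotted beside the raw tags, turns an abstract accounting metric into something an operator can watch drift in real time.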

Calculated trends allow you to unlock your data. With the Canary Trend Calculation tool, you can quickly view pressure differentials and ratios, convert temperatures, and monitor power ratios.

Mike Tufts, Operations Supervisor for the City of Boca Raton, has seen this work firsthand at the city's water and wastewater facility. As he explains, “This is very useful for predicting costs and allocating the proper budget
and funding for chemicals and pacing based on the flows. With the Canary software we closely estimate
chemical amounts that will be used based on flow volume. We know exactly how much production is put out,
and what they were using at the time, and we have a tighter and improved number for budgeting and
purchasing for the next year, the next month, and even the next contract.”

Once the calculated trend is created, it appears on the trend chart and will continue to calculate whenever that chart is loaded. You can then use that calculated trend inside future calculated trends as well, which is helpful, for example, when calculating pump efficiencies. If you choose, you can also write the calculated trend back into your data historian as a permanent tag.

Advanced Application
Meet with Accounting and learn more about key profitability and efficiency indicators for the company or your department, and incorporate these as calculated trends. Furthermore, begin to specifically track value whenever possible for capital improvements. For instance, when replacing an old pump, compare calculated efficiency trends from the old pump to the new pump.

Based on projected power savings and additional operational production, where should capital improvements be made? Are we seeing efficiency savings and performance returns in line with what our system integrator proposed?

Time Shifting Makes the Invisible Visible

Everyone knows data historians provide visualization of both real-time and historical data. But how deep into your historical data do you dig, and how often? Do you generally look back in periods of days or weeks? How often do you compare real-time data to historical data from six months ago, or even six years ago? Is there an inherent benefit in doing so? Look further back into your historical data when making real-time comparisons to unlock this final best practice.

Time shifting is certainly not new to historians, but it is a feature that is rarely used to its full potential. For those not familiar, time shifting allows live data trends to be stacked directly on top of historical data trends and is a great tool for comparing a current data point to itself from a previous time period. This is an important feature because we easily become accustomed to the data around us and miss small but significant changes. This is often a larger problem for experienced staff: they develop a preset sense of where they expect certain values to fall and are more prone to miss small changes that are nearly indistinguishable unless viewed as a comparison.

For instance, recall the adage about frogs and hot water. The myth states that
if you throw a frog into a pot of boiling water it will quickly jump out, however,
if you start the frog in a pot of cool water, and slowly increase the temperature,
the frog will fail to notice the gradual temperature change, eventually cooking.
Your ability to interpret data can be very similar. A sudden change is easily
identifiable; a slow, gradual change can be nearly impossible to perceive, and
these slow, gradual changes are exactly what we are trying to identify. Often
time shifting does not help, simply because the time shift is not extreme
enough. To illustrate this point, imagine you are monitoring the exhaust gas
temperature (EGT) of a set of CAT 3500 generators. Generally, during
operation, these temperatures hover around 930 degrees Fahrenheit but vary
by +/- thirty-five degrees. Because maintaining acceptable exhaust
temperatures is important to the overall health of these engines, you decide
to track them historically, comparing live data to historical data from thirty
days prior.

If the exhaust temperatures began to increase by fifteen percent month over month, you would easily spot that trend visually. But what if they were increasing by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly seventy degrees? A change of less than one percent would typically go unnoticed, resulting in no further analysis, even though there is likely an underlying issue that needs to be diagnosed and may lead to machine downtime or future machine inefficiency.
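To put rough numbers on this, using the figures from the example above:

```python
baseline_f = 930.0        # typical EGT, degrees F
band_f = 35.0             # normal swing either side of baseline, degrees F
drift_pct_month = 1 / 3   # slow drift: one-third of a percent per month

monthly_drift_f = baseline_f * drift_pct_month / 100   # ~3.1 degrees F per month
peak_to_peak_f = 2 * band_f                            # ~70 degrees F of normal variation

# One month of drift is under 5% of the normal operating band,
# which is why it disappears in a 30-day time shift comparison.
ratio = monthly_drift_f / peak_to_peak_f               # ~0.044
```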

Once the Time Average Aggregate has been applied to both the time-shifted and current trends, the data becomes much easier to interpret. Longer-term time shifts also give a better sense of change, which is often missed with shorter time shift intervals.
Enter the importance of longer time shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over twenty degrees. Even then, you might not take action because of the allowable temperature fluctuation of +/- thirty-five degrees. However, if you leveraged another tool, the Time Average Aggregate, you could smooth the EGT data. By comparing twenty-four hours of current EGT data with a sixty-second time average to twenty-four hours of EGT data from two years ago with that same sixty-second time average, you are much more likely to notice the resulting change.

Certainly, it is not always possible to use data from several years ago, as many factors can and will change. However, as a best practice, the further back you can reach, the higher the likelihood of identifying gradual variance in your data. You can also use secondary and tertiary trends to increase the validity of these comparisons. For instance, when comparing EGT tags, you may also need to include ambient air temperature and load tags (among others) to better account for other contributing factors.

Advanced application
Incorporate these time shift observations into your regular schedule. Create and save charts in the Axiom trending software that allow you to quickly view time shift comparisons on a monthly basis. These saved template charts should be preloaded with all necessary tags, formatted so the trends are banded and scaled together, and have time average aggregates already applied. Don't stop with basic tag monitoring; follow through with the previous best practice and monitor calculated efficiency trends as well.

What operational efficiencies can be discovered by regularly checking machine efficiency and power consumption? What would a 3-5% reduction in power consumption mean to the bottom line?

Learn More About the Canary Data Historian and Analytic Tools
www.canarylabs.com
