
While PEMS offer a number of specific advantages over CEMS (reduced maintenance, no consumables, no energy consumption, etc.), their data-driven nature demands particular attention.
Great care must be taken over the quality assurance of the PEMS output; building an effective and reliable model is not enough, because the ongoing accuracy of the model depends strictly on the quality of the input variables the model consumes.
This means that each process parameter fed into the model shall be routinely verified to confirm that the related instrumentation is performing according to expectations. If any anomalous behaviour is detected, an alarm shall be raised, warning that PEMS prediction quality may be degraded.
Presently, some of the best commercial solutions use simple data range checks to verify, in real time, whether a sensor reading stays within predefined limits. Although reasonable, these checks are fairly permissive and are not very effective at identifying all possible failures (a sensor freezing at a constant value, for example, is not detectable through such a basic control). Additionally, if sensor j is found out of range, the typical action is to replace its value with the last valid sample, recorded at time t* (see Figure 2). Obviously, this stopgap becomes less and less acceptable the longer the fault persists.
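
As an illustrative sketch of how a slightly richer validation layer might work, the Python fragment below combines the range check described above with a stuck-at-constant (frozen sensor) check. The limits, window length, and tolerance are hypothetical values chosen for illustration, not settings from any particular PEMS product.

from collections import deque

class SensorValidator:
    """Validate one sensor stream with a range check plus a frozen-value check.

    All thresholds here are illustrative assumptions, not vendor defaults.
    """

    def __init__(self, low, high, freeze_window=20, freeze_tol=1e-6):
        self.low, self.high = low, high            # predefined validity limits
        self.freeze_tol = freeze_tol               # max spread still counted as "frozen"
        self.recent = deque(maxlen=freeze_window)  # sliding window of readings
        self.last_valid = None                     # last sample that passed all checks

    def check(self, value):
        """Return (value_to_use, alarm_message_or_None)."""
        # 1. Simple range check, as in current commercial solutions.
        if not (self.low <= value <= self.high):
            # Fallback: substitute the last valid sample (the "time t*" gimmick).
            return self.last_valid, "out-of-range: substituting last valid sample"

        # 2. Frozen-sensor check: a full window of near-identical readings is
        #    suspicious even though every individual sample is in range.
        self.recent.append(value)
        if (len(self.recent) == self.recent.maxlen
                and max(self.recent) - min(self.recent) < self.freeze_tol):
            return self.last_valid, "sensor appears frozen at a constant value"

        self.last_valid = value
        return value, None
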
PEMS Algorithm:

Predictive Emission Monitoring Systems (PEMS) are software-based solutions able to provide reliable and accurate real-time emission estimates. PEMS exploit advanced mathematical models, which use process parameters (e.g. pressure, temperature, flow, etc.) as input variables.

Depending on the local environmental regulations, PEMS can be adopted as:
- A back-up of the traditional analytical instrumentation, in order to increase the operating availability of the existing emission monitoring infrastructure
- The primary monitoring source for emission values.
ABB Inferential Modeling Platform (IMP) is the software platform tailored for model creation and online deployment at site. IMP connects to any control system through the standard OPC protocol, acquiring real-time process data and generating emission estimates.
IMP's flexibility allows emission values to be made available to the plant control system and/or transferred to the emission data acquisition system.
Model-based emission monitoring provides a number of unique features and advantages:
- PEMS are an effective and competitive alternative to traditional hardware-based instrumentation, able to monitor the most common air-pollutant emissions such as CO2, NOx, SO2, CO, etc.
- PEMS are compliant with international environmental regulations and standards.
- As a back-up to existing analytical instrumentation, PEMS can identify possible malfunctions, providing an alternative measurement while maintenance is performed on the analyzers.
- PEMS can be interfaced directly with the control system without the installation of any additional components.
- They provide an overview of the process, highlighting anomalies and providing a preliminary evaluation of emission excesses.
- Predictive systems do not require any specific maintenance.
- There is no need for consumables or spare parts.
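
As a minimal, hypothetical sketch of the model-based estimation loop described above (read process parameters, evaluate a trained model, publish the estimate), the fragment below mocks the OPC read with a plain function and uses invented linear-model coefficients; IMP's actual interfaces and models are not reproduced here.

import random

# Hypothetical stand-in for an OPC read; a real deployment would use an
# OPC client library to poll the control system for live process data.
def read_process_parameters():
    return {
        "fuel_flow": random.uniform(9.0, 11.0),           # t/h (illustrative)
        "air_temperature": random.uniform(280.0, 320.0),  # K (illustrative)
        "chamber_pressure": random.uniform(1.9, 2.1),     # bar (illustrative)
    }

# Invented coefficients for a simple linear emission model; a real PEMS
# model would be fitted on historical process and analyzer data.
COEFFS = {"fuel_flow": 4.2, "air_temperature": 0.05, "chamber_pressure": 12.0}
INTERCEPT = -8.0

def estimate_nox(params):
    """Linear model: NOx estimate as a weighted sum of process parameters."""
    return INTERCEPT + sum(COEFFS[k] * v for k, v in params.items())

if __name__ == "__main__":
    params = read_process_parameters()
    print(f"Estimated NOx: {estimate_nox(params):.1f} mg/Nm3")
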

Linear Time Probabilistic Algorithm:


We present a probabilistic algorithm for counting the number of unique values in
the presence of duplicates. This algorithm has O(q) time complexity, where q is the
number of values including duplicates, and produces an estimate with an arbitrary accuracy prespecified by the user, using only a small amount of space.
Traditionally, accurate counts of unique values were obtained by sorting, which
has O(q log q) time complexity. Our technique, called linear counting, is based on
hashing. We present a comprehensive theoretical and experimental analysis of
linear counting. The analysis reveals an interesting result: A load factor (number of
unique values/hash table size) much larger than 1.0 can be used for accurate
estimation. We present this technique with two important applications to database
problems: namely, (1) obtaining the column cardinality (the number of unique
values in a column of a relation) and (2) obtaining the join selectivity (the number
of unique values in the join column resulting from an unconditional join divided by
the number of unique join column values in the relation to be joined). These two
parameters are important statistics that are used in relational query optimization
and physical database design.
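
As a sketch of the technique described in this abstract: hash each incoming value to a position in a bitmap of size m, set that bit, and then estimate the number of distinct values from the fraction of bits left unset, via the standard linear-counting estimator n_hat = -m * ln(V), where V is the fraction of empty bits. The bitmap size and hash function below are illustrative choices.

import hashlib
import math

def linear_count(values, m=4096):
    """Estimate the number of distinct values via linear counting.

    m is the bitmap size; as the analysis in the paper shows, load
    factors (distinct values / m) well above 1.0 still give accurate
    estimates, which keeps the space requirement small.
    """
    bitmap = [False] * m
    for v in values:
        # Hash each value to one of m bit positions and set that bit.
        h = int.from_bytes(hashlib.sha1(str(v).encode()).digest()[:8], "big")
        bitmap[h % m] = True

    empty = bitmap.count(False)
    if empty == 0:
        raise ValueError("bitmap saturated; increase m")
    # Linear-counting estimator: n_hat = -m * ln(V), with V the
    # fraction of bits still unset.
    return -m * math.log(empty / m)

# Example: ~10,000 distinct values among 50,000 samples with duplicates.
data = [i % 10_000 for i in range(50_000)]
print(round(linear_count(data)))  # close to 10000
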

A Randomized Algorithm:

A randomized algorithm uses a source of randomness as part of its logic. It is typically used to reduce either the running time (time complexity) or the memory used (space complexity) compared with a deterministic algorithm. The algorithm works by generating a random number, r, within a specified range of numbers, and making decisions based on r's value.

A randomized algorithm could help in a situation of doubt by flipping a coin or drawing a card from a deck in order to make a decision. Similarly, this kind of algorithm could help speed up a brute-force process by randomly sampling the input in order to obtain a solution that may not be totally optimal, but will be good enough for the specified purposes.

Randomized algorithms are used when presented with a time or memory constraint, and an average-case solution is an acceptable output. Because the output may occasionally be wrong, a technique known as amplification is used to boost the probability of correctness at the expense of runtime. Amplification works by repeating the randomized algorithm several times with different random subsamples of the input and comparing their results. It is common for randomized algorithms to amplify just parts of the process, as too much amplification may increase the running time beyond the given constraints.
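
A classic textbook illustration of amplification (an example added here, not taken from the text above) is Freivalds' check for matrix multiplication: a single randomized test accepts an incorrect product with probability at most 1/2, so repeating it k times with independent random vectors drives the error probability down to 2^-k.

import random

def freivalds_once(A, B, C, n):
    """One randomized test of whether A @ B == C (one-sided error <= 1/2)."""
    r = [random.randint(0, 1) for _ in range(n)]
    # Compute A(Br) and Cr; both cost O(n^2), versus O(n^3) to multiply.
    Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
    ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
    Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
    return ABr == Cr

def verify_product(A, B, C, k=20):
    """Amplification: repeat the test k times; error probability <= 2**-k."""
    n = len(A)
    return all(freivalds_once(A, B, C, n) for _ in range(k))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(verify_product(A, B, C_good))  # True
print(verify_product(A, B, C_bad))   # almost certainly False
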

Randomized algorithms are usually designed in one of two common forms: as a Las Vegas or as a Monte Carlo algorithm. A Las Vegas algorithm runs within a specified amount of time; if it finds a solution within that timeframe, the solution will be exactly correct, but it is possible that it runs out of time without finding any solution. A Monte Carlo algorithm, on the other hand, is a probabilistic algorithm which, depending on the input, has a small probability of producing an incorrect result or failing to produce a result altogether.
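
For a concrete Las Vegas example (again an illustration added here): randomized quicksort always returns a correctly sorted list, and only its running time depends on the random pivot choices.

import random

def randomized_quicksort(xs):
    """Las Vegas style: the output is always correct; only runtime is random."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)  # the random pivot is the only source of randomness
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return randomized_quicksort(left) + mid + randomized_quicksort(right)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
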

Practically speaking, computers cannot generate completely random numbers, so randomized algorithms in computer science are approximated using a pseudorandom number generator in place of a true source of random numbers, such as the drawing of a card.
