
Real-Time Data Reconciliation for Process Control

Simon Gariépy1, Project Leader
Mario Paredes Malca2, Metallurgist - Process Control
Éric Poulin1, Project Manager

1
GCO, Algosys Division
1389, avenue Galilée
Québec (Québec), Canada
G1P 4G4
PH: (418) 626-9090
E-mail: simon.gariepy@gco.ca
E-mail: eric.poulin@gco.ca

2
Southern Peru Copper Corporation
Toquepala Concentrator
Av. Caminos del Inca 171 Urb. Chacarilla
del Estanque - Santiago de Surco, Lima 33
Peru
PH: +51 (52) 766111, Ext: 2629
E-mail: mparedes@southernperu.com.pe

Key Words: Data reconciliation, mass balance, process control, flotation column
ABSTRACT

While the design of a control strategy depends on the available process knowledge, its
implementation depends on the available process information. On-line chemical analyzers are
now widespread in the mineral processing industry. Real-time measurement of chemical
component concentrations is valuable information that could bring control strategies to a whole
new level. However, these measurements are rarely used for anything more than process
monitoring. The reason is simple: the quality of on-line chemical measurements is most of the
time insufficient for direct inclusion in control strategies. The choice is then made to let the
critical eye of the engineer or the operator assess the measurements before any decision is
based on them.

Data reconciliation through mass balance is a technique now widely used in the mineral
processing industry to increase the accuracy and reliability of plant data, mainly for
accounting and survey analysis purposes. Real-time data reconciliation of on-line measurements
appears as a natural coupling of the two technologies that could open new process optimization
avenues. Once the technical difficulties are overcome, full valorization of the instrumentation
investment can be reached.

This paper presents the application of real-time data reconciliation for process control and
performance monitoring. It proposes a general approach, discusses challenges and illustrates
concepts with industry experiences.

INTRODUCTION

Valuable information is the basis of successful implementation of operating procedures, process
control and optimization strategies. This statement may sound obvious to most people in the
process industry, but it remains relevant today for companies that want to increase their
profitability on a continuous basis. Even though major developments in the field of
instrumentation were accomplished during the last decades, there is still room for improvement
in the real-time measurement of strategic variables. This situation is specifically observed in the
mineral processing industry, where a direct measure of key parameters is often impossible and
where analyses are generally complex or require many manipulations. For these reasons, this
sector has traditionally faced availability and reliability issues with process data.

The globalization of markets, which keeps raising the level of competition, and environmental
regulations also have a major impact on this industry. Meeting higher quality specifications,
developing value-added products, shortening product life cycles and controlling rejects more
tightly are only a few examples of what drives production standards upward. This context forces
the implementation of efficient control and optimization strategies based on accurate on-line
process information or on new, non-traditional process variables. While basic variables are
correctly measured and controlled, efforts have to be directed to strategic process variables.
However, major investments in instrumentation and process control have already been made in
the mineral processing industry. Most companies have typically installed:

• Basic instruments as well as sophisticated on-line analyzers;
• A network infrastructure going from field devices to the plant administration;
• Control systems such as distributed control systems (DCS), programmable logic controllers
(PLC) or supervisory control and data acquisition (SCADA) systems;
• Historians, production and management software, and other tools (e.g. PID tuning and
performance monitoring software).

Even though this equipment represents an impressive budget (initial costs, maintenance, updates,
etc.), full valorization is generally not reached and the return on investment (ROI) is not
satisfactory (King 1999).

Data reconciliation, coupled with control and optimization strategies, is an interesting approach
to improve the benefits related to instrumentation and process control equipment. The data
reconciliation technique is widely used in the mineral processing industry to increase the
accuracy and reliability of plant data. According to a measurement error model, process
variable measurements are adjusted to make them coherent with mass and energy
conservation models (Crowe 1996). The technique improves the quality of measurements, but it
also enables the estimation of process variables that are missing or cannot be measured.
However, despite the high potential of data reconciliation, the current usage of this technique is
generally limited to off-line applications such as production accounting and survey analysis.

This paper presents real-time data reconciliation as a systematic and effective method based on
physical principles of a process to link instrumentation, process control and optimization. The
technique is depicted as an intelligent filtering technique that can be applied to:

• Provide higher quality and lower variability data for advanced control strategies;
• Estimate, as a soft sensor, non measured process variables;
• Provide valuable information for fault detection and performance monitoring.

The various concepts are presented as a general approach and supported by industrial examples.
Most illustrations are given for a flotation column at the Southern Peru Copper Corporation
(SPCC) Toquepala concentrator. The flotation column is a process that benefits from adequate
control since, as a final separation unit, it has a major impact on plant efficiency. It is also well
suited for data reconciliation since measurements around this unit are generally incomplete and
carry high uncertainties, and key parameters are not directly measured. Bouchard et al. (2005)
present a complete review of the different approaches for modeling and control of flotation
columns, current fields of research and trends for future investigations.

FROM DATA MEASUREMENT TO PROCESS OPTIMIZATION

This section briefly reviews process control objectives and presents how real-time data
reconciliation is linked to process control. Figure 1 proposes a version of the well-known process
control hierarchy. The only reason this structure exists is to increase the profitability of the
enterprise. This objective is global for the company, is measured over a specified time horizon
and drives the different decisions and choices related to the control and optimization structure.
Other sub-criteria such as higher product quality, increased yield, reduced variability and
reduced downtime correspond to the particular objectives of the different groups affected by
process control, such as operation, production, maintenance or quality. The challenge is that
these objectives have to be taken into account globally and should not be addressed individually.

Time scale   Control level                  Function
($)          Global objective               -
Days         Planning                       Economic and operational planning
Hours        Optimizing control             Tracking of optimal operating points
Minutes      Stabilizing control            Regulation of basic variables
Seconds      Process, actuators & sensors   Production equipment and instrumentation

Figure 1: Process control hierarchy

The representation given by Figure 1 is a classical picture, generally accompanied by a
bottom-up approach for the implementation of the different control levels. Although this
representation is correct, it is important to note that it is static. One important element that is not
represented is the process monitoring required to maintain performance. Figure 2 presents a
typical curve corresponding to the loss of benefits over time caused by performance
deterioration. Many factors, such as actuator wear, sensor faults, improper calibration, equipment
degradation or modification of raw material properties, contribute to the performance loss. If no
monitoring is implemented, part of the benefits will be lost. As an indication, Figure 2 shows
how a proper monitoring strategy could periodically bring the performance level back to its
optimum.

[Figure 2 sketches the benefits over time after an initial intervention: without monitoring,
performance drifts from the target performance level down past an unacceptable performance
level; with monitoring and corrective actions, it is periodically restored toward the target.]

Figure 2: Loss of benefits over time caused by performance deterioration


The data reconciliation technique acts on two major aspects:

• It consolidates and enlarges the basis of the structure depicted in Figure 1 by improving the
quality of data (accuracy and reliability) and providing estimates of non measured variables;
• It enables a new approach for process monitoring and fault detection.

The block diagram of Figure 3 shows a more classical representation of how data reconciliation
can be integrated with process control and monitoring.

[Figure 3 chains planning ($), optimizing control, stabilizing control and the process in series,
with performance monitoring and data reconciliation blocks feeding process measurements back
to the control layers.]

Figure 3: Global control block diagram including data reconciliation

Application to flotation columns

The concepts can be illustrated using the flotation column as a whole process in itself. Figure 4
presents a schematic representation of a flotation column. The unit has three input streams
(conditioned pulp feed, air and wash water) and two output streams (concentrate and tailings).
The volume of the column is separated into two zones: a collection zone (pulp) and a cleaning
zone (froth). Finch and Dobby (1990) present a detailed description of the flotation column.

[Figure 4 shows the wash water entering at the top of the cleaning zone (froth), the pulp feed
entering below it, air injected at the bottom of the collection zone (pulp), the concentrate leaving
at the top and the tailings at the bottom.]

Figure 4: Schematic representation of a flotation column


Many authors structure the flotation column control in multiple layers (Bouchard et al. 2003a).
The first layer mainly controls flow rates (tailings, wash water, air and reagents), the second
layer targets optimal operating points for higher level variables such as froth depth, bias or gas
hold-up, and the last layer directly targets metallurgical objectives such as grade and recovery.
This can easily be fitted in the generic process control hierarchy presented in Figure 2 as shows
Figure 5.

[Figure 5 maps the flotation column onto the hierarchy of Figure 1. From left to right: the
process with its actuators (air, wash water, tailing and reagent valves) and sensors (air flow,
wash water flow, level, pH, gas hold-up, flow rates, densities and grades); the stabilizing control
layer, regulating air flow rate, wash water flow rate, level and pH; the optimizing control layer,
acting on gas hold-up and bias; and the planning layer, targeting grades and recoveries. Data
reconciliation feeds the upper layers with reconciled grades, flow rates and densities, and with
calculated recoveries.]

Figure 5: Control layers for a flotation column

On the left-hand side is the process with its sensors and actuators. All these sensors may not be
found on a typical column: some are simply not commercially available for the moment, others
are unlikely to be included in a control strategy as their reliability is questionable. This will be
discussed in a subsequent section.

Immediately next to the process is the stabilizing control layer. Here, basic variables such as
air and wash water flow rates are regulated, as well as the level of the pulp/froth interface and
the pH. A successful implementation of this layer may prevent process upsets and decrease final
product variability, but will not by itself generate production gains.
The next layer is optimizing control. Here, specifications such as grades and recoveries are
converted into operating conditions. These operating conditions may be set points sent directly to
stabilizing control loops or set points for intermediate variables such as the bias or the gas
hold-up. The often-cited bubble surface area flux is not shown in Figure 5 but would definitely
belong to the optimizing layer.

On the right-hand side is the economic and operational planning. Typical key performance
indicators (KPI) for a flotation column are the output grades and recoveries. As flotation columns
are often the last cleaning units of a concentrator, they have a major impact on the performance
of the process, and minor improvements can lead to impressive benefits. In general, the objective
for flotation columns is to optimize the grade-recovery compromise.

ON-LINE MEASUREMENT OF STRATEGIC VARIABLES

One major obstacle to process optimization is the lack of on-line measurement of strategic
variables. High-level variables such as liberation, sizing or mill load may be costly or
impossible to measure efficiently in real time. Moreover, the precision of measurements or
estimations is often very poor, which may be a serious handicap when the time comes to justify
the investment.

On-line chemical analyzers are now commonly found in the mineral processing industry. They
allow the mineral or metal grades of numerous streams to be measured in real time. Such
measurements are very valuable for operation and optimizing control, as they often are strategic
variables themselves or contribute to the calculation of other KPIs such as recoveries. The
pressure to include on-line analyzers in control strategies that would generate production gains
is high, since their cost is considerable: expenses of over half a million dollars may be expected
to buy, install and operate an on-line chemical analyzer. However, the poor quality of the
measurements seems to prohibit their use in control strategies.

Lack of availability

The measurements of an on-line analyzer may show a rejection rate above 30% due to:

• The lack of attention of operators watching alarms;
• The lack of efficiency of the alarm system that is supposed to assist control room
supervisors;
• Poorly designed and maintained samplers, pipes and pumps that transport samples to the
analyzer;
• Inadequate training of the personnel on the statistical techniques and tools provided for
calibrating and configuring the analyzer.

Under such circumstances, the availability of on-line measurements drops considerably,
affecting the per-shift reading average. It is also important to provide simple procedures that
allow operators to recognize the most common alarms related to on-line analyzers.
Lack of quality

Typically, data from on-line analyzers are two to three times less accurate than laboratory data.
In other words, when the time comes to reconcile plant assays, the standard deviation associated
with an on-line analyzer assay is usually two to three times larger than that of the same
laboratory assay. The situation gets much worse for concentrations close to the detection limits.

The poor data quality is even more damaging when KPIs are calculated from on-line analyzer
data. First, KPI calculations often involve numerous measurements, and each additional required
measurement is another opportunity for missing data. Error propagation is another issue. For
example, the recovery of a separation unit can be written as:

    Recovery = (X_Feed − X_Tail) · X_Conc / [(X_Conc − X_Tail) · X_Feed]        (1)

where each X is the assay of the corresponding stream.

For example, given the values presented in Table 1 and using equation (1), the calculated
recovery is 82.8%. However, a sensitivity analysis shows that the standard deviation of the
recovery increases from 1.9% to 5.2% (absolute error on the recovery) when on-line analyzer
data are used instead of laboratory data. This strongly limits the use of the recovery as a target
KPI in an optimizing control strategy.

Table 1: Example of recovery calculations

Stream        Concentration   Std deviation (Laboratory)   Std deviation (On-line analyzer)
Feed          9.1             ±0.28                        ±0.77
Concentrate   29.7            ±2.60                        ±7.43
Tailing       2.1             ±0.19                        ±0.50
Recovery      82.8            ±1.9                         ±5.2
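As a sketch of how the figures in Table 1 propagate through equation (1), the recovery and its standard deviation can be reproduced by a first-order sensitivity analysis; the finite-difference implementation below is an illustrative choice, not the method used by the plant:

```python
import math

def recovery(x_feed, x_conc, x_tail):
    """Two-product recovery formula of equation (1)."""
    return (x_feed - x_tail) * x_conc / ((x_conc - x_tail) * x_feed)

def recovery_sigma(x, sigma, h=1e-6):
    """Propagate assay standard deviations through the recovery formula
    using first-order finite-difference sensitivities."""
    base = recovery(*x)
    var = 0.0
    for i in range(3):
        xp = list(x)
        xp[i] += h
        drdx = (recovery(*xp) - base) / h   # partial derivative w.r.t. assay i
        var += (drdx * sigma[i]) ** 2
    return math.sqrt(var)

assays = (9.1, 29.7, 2.1)                 # feed, concentrate, tailing (Table 1)
R = 100 * recovery(*assays)               # recovery, ~82.8 %
sd_lab = 100 * recovery_sigma(assays, (0.28, 2.60, 0.19))  # laboratory, ~1.9 %
sd_ola = 100 * recovery_sigma(assays, (0.77, 7.43, 0.50))  # on-line analyzer, ~5.2 %
```

Running this reproduces the last row of Table 1: the same recovery value, but a standard deviation nearly three times larger with analyzer data.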

The team responsible for the on-line analyzer should always be attentive to changes in the
characteristics of the mineral, hence preventing dramatic alterations of the measured metallic
content. On-line analyzers come with a methodology and statistical tools for calibrating the
analyzer model parameters against laboratory results.

However, a good calibration procedure is not sufficient: samplers, the analyzer's electronic,
mechanical and electrical parts, measurement cells and channels should also be inspected and
maintained periodically.

Application to flotation columns

The problem of on-line measurement of strategic variables is present even around a single
process unit such as a flotation column. The left-hand side of Figure 5 displays the
instrumentation needed for the complete implementation of a control strategy, from bottom to
top. It can be observed that the process data used by the optimizing control layer constitute in
themselves a great part of the flotation column optimization problem. Bias and gas hold-up
sensors are still a research topic (Bouchard et al. 2003b), density meters are not known for their
reliability under this kind of operating conditions (Bergh and Yianatos 2003), and on-line
analyzer values have previously been described as rather unreliable.

DATA RECONCILIATION

Measurement errors may arise from many sources between sampling and sample analysis. An
easy way to assess measurement errors is to use data redundancy and data reconciliation. For
instance, with the two-product formula, it is possible to calculate the solids split in a flotation
column using metal content analyses. Whichever metal is analyzed, the same solids split value
should be obtained if the measurements were errorless. Using the copper and iron content
measurements in the feed, rejects and concentrate streams, the solids split values have been
calculated. Figure 6 exhibits a systematic difference between the values obtained with Cu and Fe.

[Figure 6 plots the calculated solids split over some 1800 samples; the series obtained from the
Fe data lies systematically above the series obtained from the Cu data.]

Figure 6: Solids split in a flotation column calculated using copper and iron analyses
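The two-product check behind Figure 6 can be sketched as follows; the assay values are hypothetical round numbers in the range of Table 2, not the actual plant data of Figure 6:

```python
def solids_split(x_feed, x_conc, x_tail):
    """Fraction of feed solids reporting to the concentrate
    (two-product formula)."""
    return (x_feed - x_tail) / (x_conc - x_tail)

# Hypothetical assays (%), in the range of Table 2
split_cu = solids_split(10.9, 28.8, 1.7)   # from copper assays
split_fe = solids_split(24.1, 28.5, 16.7)  # from iron assays

# With errorless measurements the two values would coincide; a
# persistent gap, as in Figure 6, reveals systematic measurement error.
gap = abs(split_cu - split_fe)
```

The design point is that the split is over-determined: every analyzed component yields an independent estimate, and disagreement between estimates quantifies the measurement error.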

Data redundancy can also be used to increase data reliability. Data reconciliation is a technique
that uses data redundancy and coherency constraints to adjust process data. The most commonly
used constraints are the mass conservation equations; other constraints such as heat and energy
conservation or operational constraints may also be used. The idea is to adjust the measurements
and make them coherent from a mass conservation point of view while minimizing the
adjustments according to their level of confidence. This can be achieved through the
minimization of a Lagrangian such as:

    L = Σ_i (W_i − Ŵ_i)²/σ²_Wi + Σ_i Σ_j (x_ij − x̂_ij)²/σ²_xij + Σ_k λ_k G_k(Ŵ_i, x̂_ij)        (2)

where W_i represents the raw solid flow rate measurements, Ŵ_i the adjusted flow rates, x_ij the
raw assays, x̂_ij the adjusted assays, σ²_Wi and σ²_xij the corresponding measurement variances,
and the right-hand term the mass balance constraints G_k weighted by the Lagrange multipliers λ_k.

Such adjusted data are known to be more accurate than raw data (Crowe 1996), and their
standard deviations are therefore smaller than those of the raw data. Standard deviations of
adjusted data can be obtained through sensitivity analysis (Flament et al. 1986; Hodouin et al.
1989). The calculation of non measured solid flow rates is another benefit of data reconciliation:
flow rates are rarely measured directly, although they are used in many KPI calculations such as
residence times or circulating loads.
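For a single node with only flow measurements and a linear balance, the minimization of equation (2) has a closed-form solution. The sketch below reconciles three flow measurements around one separator under the constraint feed = concentrate + tailings; all flow values and standard deviations are hypothetical:

```python
def reconcile_node(flows, sigmas):
    """Adjust (feed, conc, tail) flow measurements so that
    feed - conc - tail = 0, minimizing the variance-weighted sum of
    squared adjustments (equation 2 with a single linear constraint)."""
    a = (1.0, -1.0, -1.0)                        # constraint coefficients
    imbalance = sum(ai * w for ai, w in zip(a, flows))
    s = sum((ai * si) ** 2 for ai, si in zip(a, sigmas))
    lam = imbalance / s                          # Lagrange multiplier
    # Each measurement is corrected in proportion to its variance:
    return [w - si ** 2 * ai * lam for w, si, ai in zip(flows, sigmas, a)]

raw = (100.0, 38.0, 55.0)            # measured t/h: imbalance of +7 t/h
adj = reconcile_node(raw, (2.0, 1.0, 1.5))
residual = adj[0] - adj[1] - adj[2]  # ~0 after reconciliation
```

Note how the least-reliable measurement (the feed, with the largest standard deviation) absorbs the largest share of the correction; this is exactly the "minimize adjustments considering their level of confidence" behaviour described above.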

Data reconciliation in the mineral industry has traditionally been used for accounting purposes
and survey analysis. The real-time reconciliation of process data coming from on-line
analyzers appears as a natural evolution of the technology. Some experiments have been
conducted in the past (Hodouin et al. 1997), but the implementation of real-time data
reconciliation remains anecdotal in the mineral industry.

Coupling real-time data reconciliation with on-line analyzers should enable a rapid ROI on
instrumentation, as data reconciliation compensates for the main weakness of on-line analyses.
The benefits of data reconciliation can be classified along three axes:

• Intelligent filtering;
• Virtual sensoring;
• Performance monitoring.

Intelligent filtering

Filtering on-line analyzer data through data reconciliation is done in two steps. The first step
consists in validating the data before sending them to the data reconciliation engine. To be valid,
a measured value should, for instance, belong to an acceptable operating range. Should the value
be out of range, it is replaced by the last known good value or a value generated by any other
replacement method. If redundancy allows it, outliers can simply be removed from the mass
balance and back-calculated at the end of the reconciliation process.

Once all the measured values have been validated, they can be adjusted to comply with the mass
conservation constraints. As these adjustments take into account a process model (e.g. mass
conservation law), data reconciliation can be compared to a phenomenological observer of the
process.
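The validation step described above can be sketched as a simple range check with last-known-good replacement; the ranges and values below are hypothetical:

```python
def validate(value, low, high, last_good):
    """Return (valid_value, was_replaced): out-of-range or missing
    values are replaced by the last known good value before being
    sent to the reconciliation engine."""
    if value is None or not (low <= value <= high):
        return last_good, True
    return value, False

# Usage: a copper assay expected between 0 and 40 %
v1, r1 = validate(29.7, 0.0, 40.0, last_good=28.0)  # in range: kept
v2, r2 = validate(87.3, 0.0, 40.0, last_good=v1)    # out of range: replaced
```

In a plant system the replacement method might instead be a model-based estimate or, when redundancy allows, the back-calculated value from the mass balance.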

Virtual sensoring

As previously stated, data reconciliation does not only adjust measurements; it also estimates
flow rate values that cannot be measured. It also copes with defective sensors when outliers are
detected, removed from the balance and back-calculated from reconciled values.
KPIs can also be generated in real time using reconciled data. In Figure 7, the lead recovery has
been calculated using both raw and reconciled data. The recovery values calculated from raw
data exhibit strong variations, between 30% and 130%. On the other hand, the recovery values
calculated from reconciled data always remain in the expected operating range. Only the
reconciled recovery values could be used as a KPI in an optimizing strategy.

[Figure 7 plots the Pb recovery (%) over successive samples for raw and reconciled data: the
raw series swings between roughly 30% and 130%, while the reconciled series stays within the
expected operating range.]

Figure 7: Example of a lead recovery calculated using raw and reconciled data

Performance monitoring

Monitoring a continuous mass balancing procedure provides valuable information about the
process and its instrumentation. The statistical analysis of data reconciliation residuals is an
example of simple and very efficient process monitoring through data reconciliation.

The data reconciliation residuals are the differences between the raw and the reconciled data.
Traditionally, the statistical analysis of the residuals of a mass balance has been used to
determine the measurement error distribution. For instance, the mean of the absolute values of
the reduced residuals would indicate whether the model needs to be tightened or loosened.
However, the study of the evolution of the residuals of every single measurement over time
provides a new type of information. Over a period of time, the reduced residuals for a given
measurement are expected to be normally distributed around zero. Leaks and sensor biases can
hence be detected through simple observation of the distribution of the residuals.
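A minimal sketch of this residual-based fault detection, assuming reduced residuals are collected over a moving window; the 0.5 threshold is an illustrative choice:

```python
def biased(reduced_residuals, threshold=0.5):
    """Flag a sensor whose mean reduced residual drifts away from zero.
    Reduced residuals of a healthy sensor are expected to be normally
    distributed around zero."""
    mean = sum(reduced_residuals) / len(reduced_residuals)
    return abs(mean) > threshold

# Hypothetical windows of reduced residuals for two sensors
healthy = [0.2, -0.4, 0.1, -0.3, 0.5, -0.1]    # centered on zero
faulty = [-1.6, -1.2, -2.0, -1.4, -1.8, -1.5]  # shifted, as in Figure 8
```

A production monitor would typically add a proper statistical test (e.g. on the mean and variance of the window) rather than a fixed threshold, but the principle is the same.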

Figure 8 shows two residual distributions. In the first graph, the reduced residuals follow
something very close to a normal distribution. The second graph shows a case where a sensor
fault was present: the distribution is significantly shifted to the left of the normal curve and its
peak is higher.
[Two histograms of reduced residuals, binned from about −5 to 5.]
Figure 8: Detection of a sensor fault through the analysis of residuals distributions

Application to flotation columns

A real-time data reconciliation system (Bilmat Real-Time) has been installed by GCO-Algosys
on column #1 at the SPCC Toquepala concentrator. The column feed, concentrate and tail
streams are sampled every 15 minutes by an Outokumpu Courier 30-XP, which provides
analyses of density (% solids), copper, iron and molybdenum. Although other sampling points
are present in the plant, their use to extend the scope of the mass balance has not been
investigated yet. This prevents the reconciliation engine from recovering from missing analyses,
all three streams being needed. However, missing values often mean that the column is not under
normal operating conditions. Figure 9 presents the first flow sheet implemented in Bilmat Real-
Time for balancing copper, iron and molybdenum analyses as well as water, hence allowing an
estimation of the bias.

[Figure 9 shows the reconciliation flow sheet: the feed, concentrate and tail streams each carry
% solid, Cu, Fe and Mo assays, together with the wash water flow rate and the pulp flow rate.]

Figure 9: Mass balance flow sheet of the flotation column

Following a preliminary data analysis, the molybdenum assay was abandoned: its very low
concentration leads to incoherent data on one of the three streams almost all the time. The water
balance was also abandoned after a physical inspection of the column revealed a non measured
water addition to the concentrate to help its carriage, which made it impossible to estimate the
bias. However, prior experiences in the plant had led the plant personnel not to use the wash
water. As the desired grade was very easily obtained and the wash water was known to affect the
recovery, this operating practice was not challenged in the first instance. As a consequence, the
bias measurement is useless.

Table 2 presents a typical mass balance result for the flotation column. The standard deviations
of the Cu streams are almost identical for measured and adjusted values. For the Fe streams, the
standard deviations of the reconciled values are slightly reduced. This is explained by the fact
that iron is barely separated, unlike copper. In this configuration, copper grade and recovery
accuracies are not significantly improved through data reconciliation.

Table 2: Typical mass balance for the flotation column

                                    Value                     Standard deviation
                           Feed     Conc.    Tail.        Feed     Conc.    Tail.
% Cu          Measured    10.964   28.768    1.708       0.050    0.050    0.010
              Adjusted    11.130   28.365    1.707       0.049    0.050    0.010
% Fe          Measured    24.058   28.487   16.744       0.050    0.050    0.080
              Adjusted    22.190   29.413   18.242       0.036    0.046    0.062
Relative flow Adjusted     1.000    0.353    0.647       0.100    0.109    0.127

REAL-TIME DATA RECONCILIATION TO CONTROL A FLOTATION COLUMN

In this section, the first phase of the design of a control strategy for the SPCC Toquepala
flotation column #1 is presented. The flotation column, used in the cleaning circuit, has the
following dimensions: 8 ft diameter × 40 ft height. Figure 10 presents the P&ID of the column.

The installed instrumentation allows:

• The automatic control of the air flow, the wash water flow and the froth depth;
• The remote monitoring of the column operation.

Currently, the control strategy is mainly manual and aims at rejecting disturbances generated by
strong pulp feed stream changes. The objectives of the automatic control strategy are:

• To stabilize the column operation;
• To minimize human intervention;
• To maximize the grade-recovery compromise.

Several factors have influenced the design of the automatic control strategy:

• The general approach and the layered structure presented in a previous section (see Figure 5);
• Process variable measurement issues;
• Acceptance by the plant personnel of the control strategy mechanism and objectives.

The first step of any control strategy design project is an audit of the current situation and a
definition of the objectives. The two main KPIs monitored for the column are the copper grade
and recovery. Foundry-contract objectives are defined for the grade; hence the plant personnel
concentrate their set point adjustments of the air flow rate and froth depth on keeping the grade
at the desired level. Such a strategy maintains the product quality at the cost of its quantity.
Thus, the objective of the automatic control strategy is to optimize the recovery without affecting
the grade.

Figure 10: P&ID of the flotation column

The stabilizing control layer consists of control loops on the air flow rate and the froth depth
(see Figure 5). As previously stated, the wash water is not currently used on this column and is
left for a future investigation; the control loop on the wash water flow rate is thus left inactive.
As no on-line sensor was available to monitor the pH loop, the current setting is kept as is.

The optimizing control layer (see Figure 5) includes the control of the gas hold-up and bias
variables. The bias is discarded since the first implementation will not use the wash water.
Bubble surface area flux sensors are not commercially available. Furthermore, at least one
pressure sensor is missing for estimating the gas hold-up. However, the gas hold-up is known to
be related to the air flow rate, hence the control loop designed in Figure 11.

[Figure 11 shows the proposed cascade: a grade PID receives the grade set point and the
reconciled grade measurement and produces an air flow rate set point, which passes through a
set-point limiter; an inner air flow PID then drives the air valve opening. Data reconciliation
filters the grade measurement fed back to the outer loop.]

Figure 11: Control strategy proposed for the flotation column (first phase)

Within a given range, the air flow rate is known to have opposite effects on grade and recovery.
As the grade is the constrained variable, a control loop ensures that the grade does not drop
below the specifications; but each time the grade exceeds the specifications, the air flow rate is
increased to bring the grade back to its set point, thereby increasing the recovery. Limits on the
air flow rate set point prevent it from falling below the minimal operating zone or from creating
churning conditions (Finch and Dobby 1990).
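The cascade of Figure 11 can be sketched as follows, assuming simple PI controllers; all gains, the nominal air flow rate and the limiter bounds are hypothetical tuning values, not plant settings:

```python
class PI:
    """Minimal proportional-integral controller."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def step(self, setpoint, measurement, dt=1.0):
        error = setpoint - measurement
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def clamp(value, low, high):
    """Set-point limiter: keeps the air flow set point inside the
    allowed operating zone (above the minimal zone, below churning)."""
    return max(low, min(high, value))

grade_pi = PI(kp=0.8, ki=0.1)   # outer loop on the reconciled % Cu grade
air_pi = PI(kp=2.0, ki=0.5)     # inner loop on the air flow rate

nominal_air = 8.0
# Grade above its set point -> raise the air flow rate (which brings
# the grade back down while increasing recovery), hence the minus sign.
air_sp = clamp(nominal_air - grade_pi.step(28.0, 29.1), 5.0, 12.0)
valve = air_pi.step(air_sp, measurement=6.0)   # inner-loop valve command
```

With the reconciled grade (29.1) above its set point (28.0), the outer loop raises the air flow set point above its nominal value, within the limiter bounds, as the strategy above describes.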

Data reconciliation remains as a filter that validates on-line measurements and makes them
coherent with the mass conservation principle. Given the current configuration, copper grade
and recovery accuracies are not significantly improved through data reconciliation; the next
section provides possibilities that could help increase the accuracy of the copper measurements.
Data reconciliation also opens the possibility of observing and optimizing the recovery of
molybdenum in the column.

CONCLUSION

This paper has presented a general description of real-time data reconciliation and how this
technique can be linked to process control and performance monitoring. The approach was
applied to design a control strategy for a flotation column at the SPCC Toquepala concentrator.
This work constitutes the first step of a global strategy to optimize the operation of the process
unit. The recommendations below are currently being assessed before carrying out the next
project phases.

Data reconciliation improvement:

• Add the zinc assay to the balance. Unlike iron, zinc is usually well separated in a copper
concentration process. Zinc could hence help to improve the accuracy of the copper assay;
• Include the water balance. Even if the wash water is not used, water should be included in the
data reconciliation procedure. The water addition to the concentrate flow could be estimated
once and then considered fixed;
• Extend the scope of mass balance. Extending the mass balance to other units of the plant
would improve data redundancy;
• Cope with the column dynamics. Instead of steady-state mass balances, stationary or dynamic
mass balancing techniques should be considered.

Control strategy improvement:

• Include gas hold-up measurements in the cascade loop of Figure 11;
• Use the froth level along with the air flow rate in a multivariable controller;
• Activate the wash water and consider the bias in the control strategy.

Production yield improvement:

• With the additional flexibility provided by an advanced control strategy, the recovery strategy
at the rougher stage could be revised and better coordinated with the recovery strategy at the
cleaner stage.

REFERENCES

Bergh, L.G. and Yianatos, J.B., “Flotation Column Automation: State of the Art”; Control
Engineering Practice, Vol. 11, pp. 67-72, 2003.

Bouchard, J., Desbiens, A. and del Villar, R., “Model-Based Control of Column Flotation:
Toward Industrial Applications and Real-Time Optimization”; 35th Canadian Mineral
Processors (CMP) Conference, Ottawa, Canada, Paper 31, 2003a.

Bouchard, J., del Villar, R., Desbiens, A. and Aubé, V., “On-Line Bias and Froth Depth
Measurement in Flotation Columns: A Promising Tool for Automatic Control and Real-Time
Optimization”; 2003 SME Annual Meeting, Cincinnati, USA, Preprint 03-123, 2003b.

Bouchard, J., del Villar, R., Desbiens, A. and Núñez, E., “Modeling and Control of Flotation
Columns: An overview of the past – A snapshot of the present – A glance at the future”; Int.
Journal of Mineral Processing, submitted for publication, 2005.

Crowe, C.M., “Data Reconciliation – Progress and Challenge”; J. Proc. Cont., Vol. 6, No. 2, pp.
89-98, 1996.

Finch, J.A. and Dobby, G.S., “Column Flotation”; Pergamon Press, Oxford (UK), 1990.

Flament, F., Hodouin, D. and Bazin, C., “Propagation of Measurement Errors in Mass Balance
Calculation of Mineral Processing Data”; Proceedings of APCOM’86, Pennsylvania State
University Editor, SME-AIME, 1986.

Hodouin, D., Bazin, C. and Flament, F., “Reliability of Material Balance Calculations – A
Sensitivity Approach”; Minerals Engineering, Vol. 2, No. 2, pp. 157-170, 1989.

Hodouin, D., Bazin, C. and Makni, S., “Dynamic Material-Balance Algorithms: Application to
Industrial Flotation Circuits”; Minerals and Metallurgical Processing (USA), Vol. 14, No. 2, pp.
21-28, 1997.

King, M.J., “How to Lose Money with Advanced Controls”; in: L.A. Kane (Ed.), Advanced
Process Control and Information Systems for the Process Industries, Gulf Publishing Company,
Houston, Texas, pp. 38-41, 1999.
