
Indian Chemical Engineer

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/tice20

Modelling of chemical processes using artificial neural network

Rashi Verma & Chandra Shekar Besta

To cite this article: Rashi Verma & Chandra Shekar Besta (19 Dec 2023): Modelling
of chemical processes using artificial neural network, Indian Chemical Engineer, DOI:
10.1080/00194506.2023.2280655

To link to this article: https://doi.org/10.1080/00194506.2023.2280655

Published online: 19 Dec 2023.

INDIAN CHEMICAL ENGINEER
https://doi.org/10.1080/00194506.2023.2280655

Modelling of chemical processes using artificial neural network


Rashi Verma and Chandra Shekar Besta
Department of Chemical Engineering, National Institute of Technology Calicut, Calicut, India

ABSTRACT
In this paper, a methodical strategy is put forth for the development of models for chemical processes using an Artificial Neural Network (ANN). To show the effectiveness of the proposed methodology, three industrial chemical processes were considered. The intended work introduces guidelines for designing any ANN model by identifying the optimum number of neurons and hidden layers and arriving at the most suitable architecture for the neural network. The effect of viable parameters on predicting the specified parameter was evaluated using an ANN model. Following a systematic procedure, ANN networks were designed to predict boiler efficiency, frost thermal conductivity, and gas holdup under different operating conditions. Finally, a comparative study of the experimental output and the ANN-approximated output for the three chemical processes was carried out. For the boiler and frost conductivity models, it was found that employing two hidden layers provided a superior fit, with R² values of 0.98877 and 0.9891, respectively. For the column flotation model, a single hidden layer with nine nodes was identified as the optimal ANN structure, yielding an excellent R² value of 0.99337.

ARTICLE HISTORY
Received 29 June 2023
Accepted 18 October 2023

KEYWORDS
Artificial neural network; prediction; neurons; hidden layers

Nomenclature
ANN Artificial Neural Network.
NN Neural Network.
MSE Mean Square Error.
RMSE Root Mean Square Error.
PSO-RBF Particle Swarm Optimisation – Radial Basis Function.
CSA-LSSVM Coupled Simulated Annealing – Least Square Support Vector Machine.
FF-ANN Feed-Forward Artificial Neural Network.
ELU Exponential Linear Unit.
CNN Convolutional Neural Network.
RNN Recurrent Neural Network.
LSTM Long Short-Term Memory.

1. Introduction
Systematic modelling and simulation of various aspects of industrial applications are essential for
training, making techno-economic decisions, and continuously monitoring the operations of any
facility. To create models and conduct simulations, datasets are essential. As each chemical process
plant consistently gathers a substantial volume of operational data to guarantee the plant’s optimal
performance through system monitoring, these datasets are typically stored in databases. These

CONTACT Chandra Shekar Besta schandra@nitc.ac.in Department of Chemical Engineering, National Institute of
Technology Calicut, Calicut 673601, India
© 2023 Indian Institute of Chemical Engineers

databases could potentially serve as the foundation for developing an Artificial Neural Network
(ANN) model aimed at analyzing the chemical process plant behaviour [1]. Artificial intelligence
algorithms have advanced rapidly in recent years and are now widely used in research. The majority
of artificial intelligence algorithms rely on large volumes of data and machine learning techniques.
ANN is a common type of artificial intelligence algorithm that is utilised in a variety of disciplines. A Neural Network (NN) is trained and tested to detect and estimate the final output under a given set of input conditions; once trained, the ANN model may be used as an effective
tool for predicting system output in place of a first-principles model. On the other hand, the
ANN model’s prediction accuracy is strongly dependent on the data of the input layer and network
specification. The weights of the neural network and the bias values are important considerations.
Researchers have previously used artificial intelligence approaches to handle complicated engineer-
ing challenges, such as fuzzy logic, genetic algorithms, and artificial neural networks. Each approach
has its benefits; however, it has been shown that ANN may produce a more accurate forecast.
ANN is a tool that comprises three layers and may be used to construct nonlinear relationships
between inputs and outputs [2]. The hidden layer connects the input and output layers. It is made
up of neurons whose number depends on the model's configuration, and each neuron builds a weighted link between
the input and output parameters to describe their connections. A range of input parameters and
numbers of neurons in the hidden layer were investigated to build the most accurate model. ANN
models were trained using the experimental findings. Many studies have shown that even with a
minimal quantity of data, ANN is capable of accurately predicting target values.
In [3], for example, an ANN model was created to accurately anticipate the compressive durability of
carbon dioxide concrete. The reported model showed a regression of 0.95 and an average error
of 3.43%. Similarly, in [4], an ANN model was developed to estimate the mass defect of a nucleus.
The model was able to predict the mass defect very accurately, giving a correlation coefficient of
0.9984 for training and a very low MSE. A similar application of ANN was used to predict the
power production of the power plant by predicting the efficiency of the steam boiler. The reported
model predicted a high efficiency of 90.018%, which was higher compared to the real boiler
efficiency [1]. Further, an improved moving window-based ANN was used to design a framework
for online prediction and supervision of crude oil pollution in industrial preheat exchangers [5].
The authors in [6] introduce a modelling approach based on ANN to anticipate the mass transfer
coefficient of the ozone absorption process in rotating packed beds (RPB). On the training and test
sets, the result shows a predictive accuracy with regression of 0.9896 and 0.9877, RMSE of 0.01801
and 0.03085, and Mean Absolute Error (MAE) of 0.01265 and 0.02219, respectively. The studied
ANN model can also be used to increase the quantity of adsorption in RPB and aid in its develop-
ment, like the work done in [7]. It was discovered in [8] that multilayer ANN can assist in deter-
mining the best conditions for any process, such as carbon dioxide capture using sodium hydroxide.
In [9], the application of ANNs for the prediction of crucial variables in eliminating air pollu-
tants by spray tower was shown to be effective. In comparison to experimental results, the majority
of the errors generated by this network structure were just under 5%. Two efficient artificial neural
networks, PSO-RBF and CSA-LSSVM, were used in [10] to determine the equilibrium adsorption
of sulphur components from the liquid phase of hydrocarbon solutions in isotherm systems. An ANN
model was also used successfully to correlate surface tension data points of 91 hydrocarbons from three
different classes, including alkanes, alkenes, and cycloalkanes [11]. The ANN model was capable of
correlating more than 88 per cent of the data points. ANN modelling has several advantages, not
least that it can produce more reliable and precise results than first-principles models. Table 1 presents a
summary of the utilisation of Artificial Neural Networks (ANN) in the modelling of chemical pro-
cesses. The table encompasses diverse applications of ANN modelling within the realms of Chemi-
cal and Process Engineering. This encompasses various areas, including thermodynamics, kinetics,
catalysis, process analysis, optimisation, process safety, and control. The applications are classified
alongside corresponding case studies and other information like the class of NN, activation func-
tion, NN structure or topology with its reference.
Table 1. Applications of ANN to various chemical processes.

| Application Area | Field | Case study | Class of NN | Activation Function | Topology* | Software | Reference | Year |
|---|---|---|---|---|---|---|---|---|
| Kinetics and catalysis | Modelling of catalytic processes | Analysis of NO decomposition over Cu/ZSM-5 zeolite | FF-ANN | Sigmoid | 4–32–1 | in-house software | [12] | 1995 |
| Kinetics and catalysis | Catalyst design | Design of catalyst for propane ammoxidation | FF-ANN | Sigmoid | 6–20–12–2 | in-house software | [13] | 1997 |
| Kinetics and catalysis | Catalyst design | Design of a catalyst for methane oxidative coupling | FF-ANN | Sigmoid | 6–20–9–2 | in-house software | [14] | 2001 |
| Kinetics and catalysis | Combinatorial catalysis | Modelling of catalysts for oxidative dehydrogenation of ethane | FF-ANN | – | 13–26–12–6 | SNNS neural networks simulator | [15] | 2002 |
| Kinetics and catalysis | Modelling of catalytic processes | Catalytic activity for n-paraffin isomerisation | FF-ANN | Sigmoid/Tanh | 4–8–6–3 | SNNS neural networks simulator | [16] | 2003 |
| Thermodynamics and transport phenomena | Phase Equilibrium | Prediction of azeotrope formation | FF-ANN | Sigmoid | 16–6–1 | in-house software | [17] | 2003 |
| Process analysis and optimisation | Process Optimisation | Isoprene Process | FF-ANN | Sigmoid | 10 neural networks (all with one hidden layer) | in-house software | [18] | 2004 |
| Process analysis and optimisation | Process Analysis | Grated coconut industry | FF-ANN | Tanh | 9–4–1 | Matlab | [19] | 2008 |
| Thermodynamics and transport phenomena | Ionic Liquids | Estimation of physical properties of ionic liquids | FF-ANN | Tanh | 10–15–15–1 | Matlab | [20] | 2009 |
| Process analysis and optimisation | Process Synthesis | Absorption-based CO2 capture and Maleic Anhydride process | FF-ANN | Tanh | Several neural networks (all with one hidden layer) | Matlab | [21] | 2010 |
| Kinetics and catalysis | Modelling of catalytic processes | Estimation of the reaction rate in methanol dehydration | FF-ANN | Tanh/Linear | 3–6–1 | Matlab | [22] | 2010 |
| Kinetics and catalysis | Modelling of catalytic processes | Selective CO Oxidation over Copper-Based Catalysts | FF-ANN | Tanh | 14–7–7–1 | Matlab | [23] | 2011 |
| Thermodynamics and transport phenomena | Phase Equilibrium | Vapor–Liquid equilibrium of NH3/H2O and CH4/C2H6 systems | FF-ANN | Sigmoid | 2–13–2 | in-house software | [24] | 2011 |
| Process safety and control | Soft Sensors | pH control in a chemical process | RNN | Tanh | 5–14–1 | – | [25] | 2016 |
| Kinetics and catalysis | Catalyst deactivation | Dry reformer under catalyst sintering | FF-ANN | Tanh | 3–12–5–6–1 | in-house software | [26] | 2018 |
| Thermodynamics and transport phenomena | Molecular Thermodynamics | Enhancing the High-Throughput Force Field Simulation (HT-FFS) | FF-ANN | Linear/ELU | 25–16–8–4–3 | PyTorch | [27] | 2018 |
| Process safety and control | Surrogate model from CFD in MPC | Phthalic anhydride synthesis in a fixed-bed catalytic reactor | RNN | ReLU | 3–64–64–1 | Keras | [28] | 2019 |
| Thermodynamics and transport phenomena | Transport Phenomena | Determination of reduced boiling point from molecular weight and acentric factor | FF-ANN | Sigmoid | 2–2–2–1 | Matlab | [29] | 2019 |
| Process safety and control | Surrogate model in MPC and RTO | Reaction process in a CSTR | FF-ANN | Tanh | 3–10–1 | Matlab | [30] | 2019 |
| Kinetics and catalysis | Catalyst selection | Catalyst selection for the WGS reaction | FF-ANN | Sigmoid | 51–12–1 | R – neuralnet | [31] | 2019 |
| Thermodynamics and transport phenomena | Phase Equilibrium | Vapor–liquid flash calculations | FF-ANN | Linear/Sigmoid | 3–10–2 | Keras-Python | [32] | 2019 |
| Process analysis and optimisation | Industrial Process Operating (Predictive Control) | Methanol production | CNN | ReLU | 5 convolution layers, 36 filters, and 3 pooling layers | Caffe | [33] | 2020 |
| Process analysis and optimisation | Predictive Control | Non-isothermal continuous stirred tank reactors | RNN | Tanh | 2 hidden layers with 30 neurons in each layer | Python-Keras | [34] | 2020 |
| Process analysis and optimisation | Process Synthesis | CryoMan Cascade cycle system | FF-ANN | – | – | Python-PyTorch | [35] | 2020 |
| Process analysis and optimisation | Process Analysis | Crystallization process | FF-ANN | – | two-layer neural network with four hidden neurons | Matlab | [36] | 2020 |
| Process safety and control | Hybrid model in a MPC | Two consecutive CSTRs | RNN | Tanh | 2–30–30–4 | IPOPT-Python | [34] | 2020 |
| Process safety and control | Cyber Security | MPC integrated with cyber-secure feedback controller | FF-ANN | Tanh | 4–12–10–9 | Matlab | [37] | 2020 |
| Process analysis and optimisation | Process Optimisation | Large scale gas-to-liquids process | FF-ANN | Sigmoid | 4–7–15–1 | Matlab | [38] | 2020 |
| Process safety and control | Fault Detection | Penicillin fermentation process | LSTM | Sigmoid | 10–20–15–2 | Matlab | [39] | 2020 |
| Kinetics and catalysis | Determination of catalyst acidity | Determination of acidity in metal incorporated zeolites by FTIR | FF-ANN | Tanh | 6–10–1 | Matlab | [40] | 2020 |
| Thermodynamics and transport phenomena | Molecular Thermodynamics | Correlation functionals of the electronic density | Fully connected neural networks | Sigmoid | 4–8 neurons in each hidden layer | TensorFlow | [41] | 2020 |
| Process analysis and optimisation | Process Analysis | Fluidised bed reactor Fenton process | FF-ANN | ReLU | 4–10–10–10–10–2 | R – Keras | [42] | 2021 |
| Process analysis and optimisation | Process Analysis | Thermo-catalytic methane decomposition | FF-ANN | Sigmoid | 6–9–1 | Matlab | [43] | 2021 |
| Process safety and control | Fault Detection | Tennessee Eastman Process | LSTM | Sigmoid/Tanh | – | Matlab | [44] | 2022 |

*The first and last elements in the topology represent the number of neurons in the input and output layers, respectively; the elements between them represent the number of neurons in the hidden layer(s).

The paper aims to develop a methodology to find the ideal number of neurons in the hidden
layer and to determine the number of hidden layers. Using the proposed methodology, ANN
networks are designed to predict the output for three systems, namely a boiler system, a
frost thermal conductivity system, and a column flotation system, and guidelines for the design are laid out.
The MATLAB ANN toolbox is used to build the ANN models and perform the simulations. The article is structured into five distinct sections. Section 2 provides a fundamental comprehension of Artificial Neural Networks (ANN) accompanied by an exploration of their mathematical
aspects. Section 3 details the adopted methodology, while Section 4 focuses on the
application of that methodology to model the three industrial chemical processes. The
final section encapsulates the concluding remarks regarding the proposed research.

2. Artificial neural network (ANN)


ANN is a computational algorithm intended to replicate the biological operation of the human central nervous system, which is composed of 'neurons'. When given an input value, it learns a set of rules through self-training and produces the most likely results related to the projected
output value. The ANN learning process is broken down into three parts: training, testing, and validation, all of which use real-world data. ANN assigns weights to linkages based on input data, which are
then compared to a neuron's threshold. Individual neurons from the preceding layer are used to set
weights for the next layer. When the actual outcome is compared to the anticipated result by the
learning model, the final decision is made at the output. The network will save the weights and
threshold distribution if the solution is within tolerance; otherwise, the procedure
is repeated.
The input to a neuron is typically represented as a vector of values. These inputs are linearly combined with their corresponding weights, and a bias term is added. The mathematical representation of neuron i in a given layer is given below:

z_i = Σ_j (w_ij · x_j) + b_i    (1)

where z_i is the weighted sum of inputs for neuron i; w_ij is the weight connecting neuron j of the previous layer to neuron i in the current layer; x_j is the input from neuron j; and b_i is the bias term for neuron i.
The weighted sum zi is then passed through an activation function to introduce non-linearity
into the network.
a_i = g_i(z_i)    (2)

where g_i is the chosen activation function and a_i is the neuron output.
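Eqs. (1) and (2) describe a single neuron's forward pass. A minimal illustrative sketch in Python follows (the paper's models are actually built with the MATLAB toolbox; the function name `neuron_output` is a hypothetical helper, not part of any toolbox):

```python
import math

def neuron_output(inputs, weights, bias, activation=math.tanh):
    """Single-neuron forward pass: z = sum(w*x) + b (Eq. 1), a = g(z) (Eq. 2)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum plus bias
    return activation(z)                                    # nonlinearity

# Example: two inputs, tanh chosen as the activation g
a = neuron_output([0.5, -0.2], [0.8, 0.4], 0.1)
```

Here tanh stands in for whichever activation g the layer uses; any of the transfer functions discussed later could be passed instead.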
The ANN architecture, which consists of input, output, and hidden layers made of neurons,
must be chosen before the training process. The number of neurons in the input and output layers
is defined by the number of input and output variables. A validation error is determined after each
training epoch throughout the training process, along with a training error. The validation error
will increase as the ANN overtrains, while the training error will continue to decrease. The training step is terminated, and the ANN specification at the point of minimal validation error is retained, if the validation error continues to rise for a defined number of epochs.
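The stopping rule just described can be sketched in a few lines; this is an illustrative Python fragment, with `patience` standing in for the "defined period of epochs" (the names are assumptions, not the toolbox API):

```python
def early_stopping(val_errors, patience=6):
    """Return the epoch whose weights should be kept: the one with minimal
    validation error. Training stops once the validation error has failed
    to improve for `patience` consecutive epochs."""
    best_epoch, best_err, waited = 0, float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_epoch, best_err, waited = epoch, err, 0  # new minimum found
        else:
            waited += 1
            if waited >= patience:  # validation error kept rising: stop
                break
    return best_epoch

# Validation error falls, then rises: the network state at epoch 2 is kept
early_stopping([0.9, 0.5, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], patience=6)
```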
The hidden layer is a layer of neurons in a neural network that lies between the input layer and
the output layer. It is responsible for learning the intermediate representations of the data, which
are then used by the output layer to make predictions. The number of neurons in the hidden layer
affects the complexity of the model. A larger number of neurons allows the model to learn more
complex representations of the data, but it also increases the risk of overfitting. Overfitting occurs
when the model learns the training data too well and is not able to generalise to new data. In the present
case studies, the MSE value decreased as the number of neurons in the hidden layer increased. This suggests
that the model was able to learn more complex representations of the data as the number of neurons
increased. However, it is important to note that the MSE value may not always decrease as the number of neurons increases. In some cases, increasing the number of neurons may lead to overfitting
and an increase in the MSE value.
The ideal number of neurons in the hidden layer depends on the complexity of the problem
being solved and the amount of training data available. A good way to find the optimal number
of neurons is to experiment with different values and see which one gives the best results. Here
are some other factors that can affect the role of the hidden layer: (i) activation function used by
the neurons in the hidden layer, (ii) learning algorithm used to train the neural network and
(iii) the regularisation techniques used to prevent overfitting. The hidden layer plays a vital role
in the learning process of a neural network. By learning intermediate representations of the data,
the hidden layer allows the neural network to make more accurate predictions. However, it is
important to choose the right number of neurons in the hidden layer to avoid overfitting.
The hidden layer’s transfer function is an important aspect of the artificial neural network that
creates the mapping between input and output variables. In Back-Propagation (BP) artificial neural
networks, the sigmoid function is a common nonlinear choice. The log-sigmoid function (labelled
'logsig') and the tan-sigmoid function (labelled 'tansig') are the two main types of sigmoid function. The former has a range of (0, 1), whereas the latter has a range of (−1, 1) [45].
In most circumstances, the sequence of logsig or tansig for the hidden layer and purelin for the output layer yields the best accuracy. The logsig, tansig, and purelin functions are given in
Eqs. (3)–(5) below.
g1(x) = 1 / (1 + e^(−x))    (3)

g2(x) = (1 − e^(−x)) / (1 + e^(−x))    (4)

g3(x) = x    (5)
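For illustration, the three transfer functions of Eqs. (3)–(5) can be written directly in code (a Python sketch of the equations as printed; the work itself uses the MATLAB built-ins):

```python
import math

def logsig(x):
    """Eq. (3): log-sigmoid, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Eq. (4): tan-sigmoid as written in the text, range (-1, 1).
    Algebraically this form equals tanh(x/2)."""
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

def purelin(x):
    """Eq. (5): linear (identity) function, typically used on the output layer."""
    return x
```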

Epochs in the context of ANN modelling refer to the number of times the learning algorithm
iteratively processes the entire training dataset during training. In each epoch, the algorithm
updates the model’s parameters to minimise the chosen loss function. The significance of epochs
lies in their role in the model’s learning process. Training a model for multiple epochs allows it
to progressively learn from the data, refining its internal representations and improving its predic-
tive capabilities. Initially, the model makes larger adjustments to its parameters to fit the training
data better. As training continues, these adjustments become smaller, leading to a gradual conver-
gence towards a solution that hopefully generalises well to unseen data. However, using too few
epochs may result in underfitting, where the model hasn’t learned enough from the data, while
using too many epochs might lead to overfitting, where the model becomes overly specialised to
the training data and performs poorly on new data. The optimal number of epochs depends on fac-
tors such as the complexity of the problem, the size of the dataset, and the model’s architecture.
Regular monitoring of validation performance and using techniques like early stopping can help
determine the appropriate number of epochs for a given task. In most cases, the activation function
of hidden and output layers is either linear or nonlinear. To establish the connections between the
inputs and outputs, a variety of learning methods are available. The feed-forward back-propagation
learning Levenberg–Marquardt algorithm is the most extensively employed [46]. The mean square
error (MSE) is used to assess the ANN models’ performance, where the mathematical expression is
given in Eq (6).

MSE = (1/S) Σ_{i=1}^{S} (Y_O − Y_E)²    (6)

Y_O is the predicted output from the ANN model, Y_E is the experimental output, and S is the number of samples.
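Eq. (6) in code, as a short illustrative Python helper:

```python
def mse(predicted, experimental):
    """Eq. (6): mean square error over S samples."""
    s = len(predicted)
    return sum((y_o - y_e) ** 2 for y_o, y_e in zip(predicted, experimental)) / s

# Three samples, two of them off by 0.1: MSE = (0.1^2 + 0 + 0.1^2) / 3
mse([0.9, 0.8, 0.7], [1.0, 0.8, 0.6])
```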

3. Methodology
ANN is made up of 'neurons': interconnected processing units arranged in layers, with one or more hidden layers and an output layer. The neurons are linked together by connections that carry their own weight factors. The neural network is trained by altering the weights of the links between neurons so that the network's output matches the represented experimental data as closely as possible.
This paper presents a comprehensive guideline for the systematic development of an optimal
Artificial Neural Network (ANN) model. The prescribed procedure involves a sequence of crucial
steps. Initially, the appropriate count of input and output nodes for the neural network is ascer-
tained. Subsequently, the determination of the optimal number of neurons within the hidden
layer, a pivotal factor influencing the network’s predictive capacity, is addressed. The configuration
of the hidden layer’s – neurons presents challenges when approached from a purely theoretical
standpoint. Consequently, an empirical methodology is embraced, wherein a correlation (Eq. 7)
is employed to compute the suitable number of neurons for the hidden layer. As outlined in Eq.
(7) from reference [7], the calculation of the neuron count is facilitated through the correlation
expression as follows:
n = (2 · Ni) + 1    (7)
where n is the neurons of the hidden layer and Ni is the number of independent variables of any
process. The optimal neuron number was determined to achieve adequate prediction accuracy
while avoiding overfitting.
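Eq. (7) as a one-line helper (illustrative Python; the result is only a starting guess that is then refined empirically, as described in the steps below):

```python
def hidden_neurons(n_inputs):
    """Eq. (7): rule-of-thumb hidden-layer size, n = 2 * Ni + 1."""
    return 2 * n_inputs + 1

# Two independent variables give a starting guess of 5 hidden neurons
hidden_neurons(2)
```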
Step 2 is to identify the optimum number of neurons in a hidden layer by
analysing the MSE and regression values calculated for every candidate neural network.
The network that gives the minimum MSE and maximum regression is selected as the best
network, suggesting the optimum number of neurons. Step 3 is to identify the number of hidden
layers using the optimum number of neurons found in Step 2. Step 4 is to arrive at the most suitable
architecture for the neural network, namely the one with the maximum regression, where regression reflects
the performance of the network. Following all the steps above will help to build
the most suitable ANN model for any system. After the selection of the model architecture, the model is
trained.
The segmentation of an input-output relationship into a sequence of linearly separable
segments using hidden layers is the central premise of neural computing. The following is how
the ANN model is developed with the help of MATLAB and Neural Network Toolbox Release
R2021b:

. Choosing the database
. Normalising data
. Model inputs selection
. Identifying model architecture
. Training of model
. Validation of the model
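The architecture-selection steps described above (minimum MSE, maximum regression) reduce to a small grid search over candidate architectures. A hedged sketch in Python follows; the scores below are invented for illustration only, standing in for results of actual MATLAB training runs:

```python
def select_best_network(results):
    """Pick the architecture with minimum MSE, breaking ties by maximum
    regression R. `results` maps an architecture (tuple of hidden-layer
    sizes) to a (mse, regression) pair."""
    return min(results, key=lambda arch: (results[arch][0], -results[arch][1]))

# Synthetic (mse, R) scores for three candidate single-hidden-layer sizes
scores = {
    (9,):  (0.0030, 0.9850),
    (11,): (0.0021, 0.9916),
    (13,): (0.0021, 0.9900),  # same MSE as (11,), lower R: loses the tie-break
}
select_best_network(scores)
```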

Iteration refers to the process of presenting input-output sets, running the network, evaluating the RMS error, and finally updating weights and biases to minimise the RMS error. The output layer
in this paper was created with the tansig function and the trainlm network training function, which
updates weight and bias values using Levenberg–Marquardt optimisation. trainlm is often the fastest
backpropagation algorithm in the MATLAB ANN toolbox. As per the foundational theory of artificial
neural networks (ANNs), an increase in the number of layers should theoretically enhance the
capacity of the fitting function, potentially leading to improved outcomes. However, a larger number of layers can cause overfitting and hamper convergence. As a result, for the vast
majority of situations, one or two hidden layers are sufficient, balancing prediction accuracy
against convergence behaviour; accordingly, one and two hidden layers were considered in this project. In
this study, all input and output variable data in the training phase were normalised before being fed to the NN, so that they varied between 0 and 1.
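The 0 to 1 scaling mentioned above is standard min-max normalisation; a minimal sketch (illustrative Python, not the MATLAB preprocessing actually used):

```python
def minmax_normalise(values):
    """Scale a variable linearly to [0, 1] before feeding it to the network."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Example: a steam-flow-rate column rescaled to the unit interval
minmax_normalise([10.0, 15.0, 20.0])
```

The same (lo, hi) pair from the training data would be reused to scale validation and test samples, so all splits share one mapping.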

4. Case studies
Three chemical processes were taken up to study ANN modelling. We aim to create a methodology
for designing ANN models; to that end, different networks were designed to predict the outputs for all
three processes. After modelling was complete, the performance of each network was checked
using the regression and Mean Square Error values. The model's predictions were considered
good as the regression values approached 1 and the Mean Square Error approached its minimum.

4.1. Industrial boiler


The majority of industrial operations require heat. This is commonly supplied by using
boilers to generate steam. Steam is employed as a heating fluid because water evaporates at a
temperature well below the metallurgical limit of the carbon steel used in the manufacture
of boilers. Fuel combustion in the presence of air and heat transfer from the combustion products to
water are the two primary processes in a boiler. The steam generated is then used in the process.
Solid fuels like coal, lignite, bagasse, and rice husk, liquid fuels like furnace oil, light diesel oil, and high-speed diesel oil, and gaseous fuels like natural gas can all be used in boilers.
In this paper, industrial data from [47] is utilised for modelling steam boiler efficiency.
Because steam has a high latent heat, steam boilers are intended to provide steam for heat
transfer. Steam has a roughly two-fold higher heat transfer coefficient than water, and it is a primary input
for numerous enterprises, whether used directly or indirectly. It may thus be used successfully in power generation facilities to create energy. As a result, the boiler is regarded as an
important part of power plants, refineries, and other similar facilities; a schematic diagram
of the boiler system is given below in Figure 1.

Figure 1. Schematic diagram of improving steam system efficiency.



A boiler's operating conditions should be constantly monitored. It should be noted that because
boilers operate at high temperatures and pressures, explosion is a severe concern that
poses a threat to boiler operations [1]. Several factors should be examined and acknowledged during
the boiler design process, including economics and fuel and equipment costs. The intricacy of the steam
boiler makes conventional measurement difficult: since various elements influence the boiler's performance, traditional techniques of measuring boiler performance are neither cost-effective nor time-saving.
In this research, the modelling of boiler efficiency is performed based on experimental data.
The input layer encompasses two input nodes: one representing steam flow rate and the other
representing output temperature, both of which are independent variables. Conversely, the out-
put layer comprises a single node that denotes boiler efficiency, the dependent variable. The
overall correlation can be found as boiler efficiency = f (steam flow rate, output temperature).
A feed-forward back-propagation network for the prediction of boiler efficiency was built.
The input layer consists of steam flow rate (kg/s) and temperature (°C); therefore, the number
of nodes in the input layer was taken as two. Similarly, since there is a single output, the output
layer has only one node. The influence of the number of neurons on prediction accuracy was then
considered.

4.1.1. ANN modelling results – boiler


The impact of neurons on the performance of ANN models was investigated. For network training,
validation, and testing, 95 groups of data sets taken from [47] were employed. Figure 2 (a) and (b)
demonstrate how the Mean Square Error and Regression of each model vary with the number of
neurons in one and two layers. The MSE value decreased as the number of neurons in the hidden
layer increased. As shown in Figure 2 (a), the optimum number of neurons for a single hidden layer
is 11, with a minimum MSE of 0.0021 and maximum regression of 0.9916. For two hidden layers,
the MSE value further decreases to 0.0015 at ID 8 as shown in Figure 2 (b), and did not decline
substantially with increasing neuron number. Furthermore, placing too many neurons in the hid-
den layer during ANN simulation will result in an overfitting issue. Therefore, the neuron number
of the hidden layer was chosen as 11 arranged in two hidden layers. Generally, the hidden layer’s
role in a neural network is to learn complex features and representations from the input data.
Increasing the number of neurons in the hidden layer can enhance the network’s capacity to capture
intricate patterns, leading to reduced MSE and improved predictive performance, up to a certain
point where overfitting becomes a concern. Balancing the number of neurons with regularisation
techniques is important to achieve the best trade-off between fitting the data and generalising it
to new data. Figure 3 (a) shows the neural network of the boiler having an input layer, followed
by two hidden layers and an output layer.
Table 2 contains regression values for training, validation, and test data sets for different neural
network architectures. The regression reflects the performance of the network, the nearer the

Table 2. R² values of the created ANN model of the boiler system for different neural network architectures.

| ID | Layer 1 (nodes) | Layer 2 (nodes) | Training R | Validation R | Test R | All R |
|---|---|---|---|---|---|---|
| 1 | 1 | 10 | 0.92194 | 0.96072 | 0.81151 | 0.91847 |
| 2 | 2 | 9 | 0.98719 | 0.98437 | 0.9853 | 0.98527 |
| 3 | 3 | 8 | 0.9901 | 0.97672 | 0.97447 | 0.98514 |
| 4 | 4 | 7 | 0.98892 | 0.97745 | 0.97643 | 0.98537 |
| 5 | 5 | 6 | 0.98288 | 0.9682 | 0.93032 | 0.97693 |
| 6 | 6 | 5 | 0.99059 | 0.98529 | 0.96422 | 0.98653 |
| 7 | 7 | 4 | 0.98322 | 0.99077 | 0.97845 | 0.98387 |
| 8 | 8 | 3 | 0.99161 | 0.98541 | 0.97289 | 0.98877 |
| 9 | 9 | 2 | 0.99057 | 0.96284 | 0.97179 | 0.9845 |
| 10 | 10 | 1 | 0.9884 | 0.98044 | 0.96584 | 0.98461 |

Figure 2. (a). Variation of R and MSE for 1 hidden layer. (b). Variation of R and MSE with nodes for 2 hidden layers.

Figure 3. (a). Artificial Neural Network Architecture. (b). The regression values for all data sets. (c). ANN performance diagram with 11 neurons and 2 hidden layers.

regression value is to one, the higher the model’s prediction accuracy. As shown in Figure 3 (b), the
regression values for training, validation, and test are 0.9916, 0.98541, and 0.97289, respectively, and
the network’s composite regression is 0.98877. This implies that the model predicted boiler
efficiency with a high degree of accuracy. Also, from Figure 3 (c), which shows the performance
graph of the ANN, it was concluded that ANN was trained very well due to decreasing MSE
value at the end of the training phase and showing the best validation performance at epoch
6. Figure 4 shows a graph plotted between the steam flow rate as normalised input against the pre-
dicted and experimental boiler efficiency as the normalised output, which clearly shows the same
trend for both predicted and experimental data, concluding that the proposed model showed
high accuracy. All the values lie between 0 and 1.
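The two accuracy measures used throughout, the regression value R and the mean square error, can be computed directly from the experimental and predicted vectors. The following is a generic numpy sketch with made-up sample values, not the paper's data:

```python
import numpy as np

def regression_r(y_true, y_pred):
    # Pearson correlation between experimental and predicted values,
    # which is what the toolbox's regression plots report as R.
    return np.corrcoef(y_true, y_pred)[0, 1]

def mse(y_true, y_pred):
    # Mean square error between experimental and predicted values.
    return np.mean((y_true - y_pred) ** 2)

y_exp = np.array([0.72, 0.80, 0.85, 0.90, 0.95])  # illustrative values only
y_ann = np.array([0.70, 0.82, 0.84, 0.91, 0.94])

print(regression_r(y_exp, y_ann), mse(y_exp, y_ann))
```

An R near one together with an MSE near zero, as in the tables above, indicates that the network's predictions track the experimental data closely.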

4.2. Case study 2: frost thermal conductivity


Frost forms when humid air comes into contact with a cold surface whose temperature is below both the freezing point of water and the dew point of the air [48]. The rate of frost formation rises as the humidity of the air stream or the temperature difference between the cold surface and the surrounding air increases. In refrigeration and air-source heat pump systems, frost formation on heat exchanger surfaces is common, causing a reduction in heat exchange and a rise in pressure drop due to flow restriction [49]. As a result of frost development, heat exchangers and refrigeration systems consume more energy.
Determining the frost thickness, frost density, and frost thermal conductivity is difficult because of the porous structure of the frost layer, which spans a wide range of porosity. Thermal conductivity is one of the most essential variables in determining the structure and rate of frost production. According to the literature, frost porosity, air velocity, air temperature, wall surface temperature, air relative humidity, and elapsed time all affect this property.

Figure 4. Graph plotted between normalised predicted and experimental efficiency for industrial boiler efficiency.

In this section, we model the frost thermal conductivity from experimental data, following the guidelines discussed in Section 2. The independent and dependent variables are used to predict the frost thermal conductivity with the ANN model.

4.2.1. Modelling result


To study the influence of the number of neurons on ANN modelling, a data set of 57 points was taken from [50]. The data set consists of the parameters mentioned in the above section as inputs and the frost thermal conductivity as output, and was used for training, validation, and testing of the network. The MSE and regression values were calculated while varying the number of neurons in the hidden layer. Figure 5 (a) depicts the MSE of each model for different numbers of neurons within a single hidden layer. The MSE of the ANN model with one hidden layer reached a minimum of 0.0038 at 8 neurons.
Furthermore, by increasing the number of hidden layers to two and arranging the 8 neurons in different architectures, it was observed from the regression values for the different neural network architectures given in Table 3 that ID 3 is the best architecture for the network. Increasing the number of hidden neurons further may result in overfitting. Figure 5 (b) shows that the MSE decreases further to 0.0032, and the regression reaches 0.99222 at ID 3. Figure 6 (a) shows the neural network for the frost thermal conductivity model, with an input layer followed by two hidden layers and an output layer. As shown in Figure 6 (b), the regression values for training, validation, and test are 0.9922, 0.9817, and 0.9903, respectively, and the composite regression is 0.9891. This indicates that the model predicted frost thermal conductivity with a high degree of accuracy. Also, from Figure 6 (c), which shows the performance graph of the ANN, it was concluded that the ANN was trained very well, given the very small MSE value at the end of the training phase and the best validation performance at epoch 14.
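The "best validation performance" at a given epoch reflects a standard early-stopping rule: record the validation MSE after each epoch, keep the weights from the epoch where it is lowest, and stop once it has not improved for a fixed number of epochs. A schematic sketch, using a hypothetical validation-MSE history rather than the actual training record:

```python
def best_epoch(val_mse, patience=6):
    # Return the 1-based epoch with the lowest validation MSE, stopping
    # the scan once `patience` consecutive epochs show no improvement.
    best_i, best_v, stale = 0, float("inf"), 0
    for i, v in enumerate(val_mse):
        if v < best_v:
            best_i, best_v, stale = i, v, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_i + 1, best_v

# Hypothetical validation-MSE history: improves, bottoms out, then rises.
history = [0.050, 0.031, 0.020, 0.014, 0.010, 0.008, 0.007, 0.0065,
           0.0061, 0.0058, 0.0056, 0.0055, 0.0054, 0.0032, 0.0040,
           0.0045, 0.0051, 0.0060, 0.0072, 0.0090]
print(best_epoch(history))   # epoch 14 has the lowest MSE in this history
```

The MATLAB toolbox applies the same idea automatically, reporting the epoch of best validation performance on the training plot.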
Finally, a graph was plotted of the normalised air velocity of the process as input against the predicted and experimental normalised frost thermal conductivity as output, as shown in Figure 7.
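As noted above, the inputs and outputs are normalised before training. A generic min-max scaler mapping values onto [0, 1], together with its inverse (an illustrative sketch, not necessarily the authors' exact preprocessing), can be written as:

```python
import numpy as np

def minmax_fit(x):
    # Record the range of the raw data.
    return x.min(), x.max()

def minmax_scale(x, lo, hi):
    # Map values linearly onto [0, 1]; assumes hi > lo.
    return (x - lo) / (hi - lo)

def minmax_unscale(z, lo, hi):
    # Invert the scaling to recover physical units.
    return z * (hi - lo) + lo

velocity = np.array([0.5, 1.0, 1.5, 2.0, 2.5])  # illustrative air velocities (m/s)
lo, hi = minmax_fit(velocity)
z = minmax_scale(velocity, lo, hi)
print(z)                            # all values lie in [0, 1]
print(minmax_unscale(z, lo, hi))    # recovers the original velocities
```

The same scaling parameters fitted on the training data must be reused when unscaling the network's predictions back to physical units.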

4.3. Case study 3: column flotation


Column flotation (CF) is a flexible piece of equipment that has long been used in mineral processing. Surface chemistry and other parameters, such as the froth condition, regulate the separation of mineral particles. Column flotation is a multi-phase transitional flow with complex flow behaviour [51, 52]. Traditional flotation cells have proven less effective than column flotation, especially for fine particle separation. Column flotation is preferable to traditional cells because it is easier to operate, has no moving parts, gives a high gas hold-up, entrains less wash water, consumes less reagent and energy, achieves a higher product recovery rate, and offers high selectivity [53]. Despite this enhanced performance, obstruction of the sparger at the bottom has limited the industrial use of column flotation to a smaller number of applications.

Table 3. R² values of the created ANN model for the frost thermal conductivity system for different neural network architectures.

ID   Layer 1   Layer 2   Training   Validation   Test      All
1    1         7         0.9369     0.92399      0.96269   0.94091
2    2         6         0.96162    0.96179      0.9133    0.95166
3    3         5         0.9922     0.98173      0.99026   0.9891
4    4         4         0.86078    0.95767      0.87798   0.85527
5    5         3         0.88982    0.9308       0.94192   0.89176
6    6         2         0.95471    0.89273      0.99576   0.93424
7    7         1         0.96983    0.95094      0.93797   0.96633
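The architecture IDs in Table 3 simply enumerate the ways of splitting a fixed total of 8 neurons across two hidden layers. A small helper to generate such a candidate list (an illustration, not code from the paper):

```python
def two_layer_splits(total):
    # All ways to split `total` neurons across two hidden layers,
    # each layer getting at least one neuron (as in Table 3).
    return [(n1, total - n1) for n1 in range(1, total)]

print(two_layer_splits(8))   # seven candidates, (1, 7) through (7, 1)
```

Each candidate is then trained and the architecture with the best validation regression (here ID 3, i.e. 3 and 5 neurons) is retained.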

Figure 5. (a). Variation of R and MSE with nodes for one hidden layer. (b). Variation of R and MSE with nodes for two hidden layers.

With the introduction of porous modern materials, column flotation performance is now determined by several factors, including the recovery of the flotation column, which is greatly influenced by the gas hold-up [52, 54]. The bubble-particle interactions in the column play a critical role in the separation of solid particles via the froth. The gas hold-up illustrates the effect on bubble-particle interactions, as well as the impact on the column's rate constant and flotation recovery.
Because of the complexities involved in predicting gas hold-up in the flotation process, neural networks can be extremely useful. The main goal of this part of the work is to use a feed-forward artificial neural network (ANN) to forecast the flotation column's gas hold-up.

Figure 6. (a). Artificial Neural Network architecture. (b). The regression of the ANN model. (c). ANN performance diagram with 8 neurons in two hidden layers.

Figure 7. Graph showing the experimental and predicted data.

4.3.1. Modelling result


An ANN model was constructed to predict the gas hold-up under different operating conditions and module parameters. Sparger type, frother, liquid height, liquid velocity, and superficial air velocity were chosen as the input factors, while the gas hold-up was chosen as the outcome variable. With all input variables evenly distributed across their operational ranges, the neural network toolbox was used to create the ANN model from a data set of 140 points.
The ANN model was trained with 70% of the whole experimental data, validated with 10%, and tested with the remaining 20%. To analyse the effect of the number of neurons in the hidden layer, neural networks with 1 to 13 neurons were created, as given in Table 4, along with the corresponding regression values. As indicated in Figure 8, the ANN model with 9 neurons in one hidden layer achieved the highest regression value among all the networks. From Figure 9 (a), it is seen that the highest average regression values for training and testing were 0.99227 and 0.99215, respectively. This implies that the model predicted the gas hold-up with a high degree of accuracy. Also, from Figure 9 (b), which shows the performance graph of the ANN, it was concluded that the ANN was trained very well, given the very small MSE value at the end of the training phase and the best validation performance at epoch 6.

Table 4. R² values of the created ANN model for the column flotation system for different neural network architectures (one hidden layer).

Nodes   Training   Validation   Test      All
1       0.90599    0.91638      0.93548   0.91113
2       0.95066    0.95369      0.89782   0.94257
3       0.99163    0.98507      0.97153   0.9877
4       0.9925     0.97519      0.98311   0.9877
5       0.98992    0.99618      0.98593   0.991
6       0.99371    0.98671      0.98839   0.99179
7       0.9948     0.99427      0.98965   0.99348
8       0.99314    0.98775      0.98276   0.99135
9       0.99576    0.98586      0.98776   0.99337
10      0.99541    0.99225      0.98343   0.9932
11      0.99314    0.99008      0.99457   0.99285
12      0.995      0.98763      0.97283   0.99169
13      0.99504    0.98738      0.98925   0.99316
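The 70/10/20 data division described above can be reproduced with a simple random partition of the 140 sample indices; the sketch below is generic numpy, not the MATLAB toolbox's built-in division routine:

```python
import numpy as np

def split_indices(n, fractions=(0.70, 0.10, 0.20), seed=0):
    # Randomly partition n sample indices into train/validation/test sets.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(round(fractions[0] * n))
    n_val = int(round(fractions[1] * n))
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(140)
print(len(train), len(val), len(test))   # 98 14 28
```

Random (rather than sequential) division helps ensure all three subsets cover the operational ranges of the input variables.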
Figure 9 (c) shows the neural network of the flotation column, with an input layer followed by one hidden layer and an output layer. To determine how accurately the ANN model predicts the outcome, the mean square error (MSE) was calculated and found to be 4.3360 × 10⁻⁶, the lowest value, obtained with 9 neurons. The high regression and low MSE values indicate a well-performing ANN model and a good fit between the experimental and ANN-predicted data, as shown in Figure 10, where the normalised predicted data and the experimental data are plotted for the 140-point data set, clearly showing a good correlation between the normalised predicted gas hold-up and the experimental output of the process.

Figure 8. Variation of R² and MSE with nodes for one hidden layer.

Figure 9. (a). The regression of the ANN model. (b). ANN performance diagram with 9 neurons in the hidden layer. (c). Artificial Neural Network architecture.

Figure 10. Graph showing the experimental and predicted data.

5. Conclusion
In conclusion, the ANN models constructed to predict boiler efficiency, frost thermal conductivity, and the gas hold-up of column flotation showed a good linear relationship with the experimental data. A systematic procedure was adopted to develop and design the artificial neural network models that predict the outputs of the three chemical processes. Simulation results indicate that an ANN structure of 2:8:3:1 for boiler efficiency, 6:3:5:1 for frost thermal conductivity, and 5:9:1 for gas hold-up achieves the best prediction performance. The accuracy of each network was quantified in terms of the regression and the mean square error. The calculated MSE values were very low for all three cases, indicating the high accuracy of the ANN models. Finally, the high regression and low MSE values showed that the ANN models performed well, with a good match between the experimental data and the ANN-predicted data.

Authorship contributions
All authors contributed equally to each section of this work. The final manuscript was examined and approved by all authors.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The authors would like to acknowledge the financial support from the National Institute of Technology Calicut, as per the Ministry of Human Resource Development (MHRD).

Ethical approval
This research did not include any human or animal subjects.

Data availability statements


Inquiries concerning the availability of data should be addressed to the authors.

ORCID
Chandra Shekar Besta http://orcid.org/0000-0002-3898-424X

References
[1] Strušnik D, Golob M, Avsec J. Artificial neural networking model for the prediction of high efficiency boiler
steam generation and distribution. Simul Model Pract Theory. 2015;57:58–70. doi:10.1016/j.simpat.2015.06.
003
[2] Quadri TW, Olasunkanmi LO, Akpan ED, et al. Development of QSAR-based (MLR/ANN) predictive models
for effective design of pyridazine corrosion inhibitors. Mater Today Commun. 2022;30:103163, doi:10.1016/j.
mtcomm.2022.103163
[3] Tam VWY, Butera A, Le KN, et al. A prediction model for compressive strength of CO2 concrete using
regression analysis and artificial neural networks. Constr Build Mater. 2022;324:126689, doi:10.1016/j.
conbuildmat.2022.126689
[4] Özdoğan H, Üncü YA, Şekerci M, et al. Mass excess estimations using artificial neural networks. Appl Radiat
Isot. 2022;184:110162, doi:10.1016/j.apradiso.2022.110162
[5] Navvab Kashani M, Aminian J, Shahhosseini S, et al. Dynamic crude oil fouling prediction in industrial pre-
heaters using optimized ANN based moving window technique. Chem Eng Res Des. 2012;90:938–949. doi:10.
1016/j.cherd.2011.10.013
[6] Liu T, Liu Y, Wang D, et al. Artificial neural network modeling on the prediction of mass transfer coefficient for
ozone absorption in RPB. Chem Eng Res Des. 2019;152:38–47. doi:10.1016/j.cherd.2019.09.027
[7] Li W, Wei S, Jiao W, et al. Modelling of adsorption in rotating packed bed using artificial neural networks
(ANN). Chem Eng Res Des. 2016;114:89–95. doi:10.1016/j.cherd.2016.08.013
[8] Kianpour M, Sobati MA, Shahhosseini S. Experimental and modeling of CO2 capture by dry sodium hydroxide
carbonation. Chem Eng Res Des. 2012;90:2041–2050. doi:10.1016/j.cherd.2012.04.005
[9] Valera VY, Codolo MC, Martins TD. Artificial neural network for prediction of SO2 removal and volumetric
mass transfer coefficient in spray tower. Chem Eng Res Des. 2021;170:1–12. doi:10.1016/j.cherd.2021.03.008
[10] Mohebbi A, Ahmadi-Pour M, Mohebbi M. Accurate prediction of liquid phase equilibrium adsorption of sulfur
compound. Chem Eng Res Des. 2017;126:199–208. doi:10.1016/j.cherd.2017.08.024
[11] Lashkarbolooki M, Bayat M. Prediction of surface tension of liquid normal alkanes, 1-alkenes and cycloalkane
using neural network. Chem Eng Res Des. 2018;137:154–163. doi:10.1016/j.cherd.2018.07.021
[12] Sasaki M, Hamada H, Kintaichi Y, et al. Application of a neural network to the analysis of catalytic reactions
analysis of NO decomposition over Cu/ZSM-5 zeolite. Appl Catal, A. 1995;132:261–270. doi:10.1016/0926-
860X(95)00171-9
[13] Hou Z-Y, Dai Q, Wu X-Q, et al. Artificial neural network aided design of catalyst for propane ammoxidation.
Appl Catal A. 1997;161:183–190. doi:10.1016/S0926-860X(97)00063-X
[14] Huang K, Chen F-Q, Lü D-W. Artificial neural network-aided design of a multi-component catalyst for
methane oxidative coupling. Appl Catal A. 2001;219:61–68. doi:10.1016/S0926-860X(01)00659-7
[15] Corma A, Serra JM, Argente E, et al. Application of artificial neural networks to combinatorial catalysis: mod-
eling and predicting ODHE catalysts. Chem Phys Chem. 2002;3:939–945.
[16] Serra JM, Corma A, Chica A, et al. Can artificial neural networks help the experimentation in catalysis? Catal
Today. 2003;81:393–403. doi:10.1016/S0920-5861(03)00137-8
[17] Alves RMB, Quina FH, Nascimento CAO. New approach for the prediction of azeotropy in binary systems.
Comput Chem Eng. 2003;27:1755–1759. doi:10.1016/S0098-1354(03)00150-9

[18] Alves RMB, Nascimento CAO. Neural network based approach applied to for modeling and optimization an
industrial isoprene unit production. Austin (TX): American Institute of Chemical Engineers; 2004.
[19] Assidjo E, Yao B, Kisselmina K, et al. Modeling of an industrial drying process by artificial neural networks.
Braz J Chem Eng. 2008;25:515–522. doi:10.1590/S0104-66322008000300009
[20] Valderrama JO, Reátegui A, Rojas RE. Density of ionic liquids using group contribution and artificial neural
networks. Ind Eng Chem Res. 2009;48:3254–3259. doi:10.1021/ie801113x
[21] Henao CA, Maravelias CT. Surrogate-based process synthesis. Comp Aided Chem Eng. 2010;28:1129–
1134. doi:10.1016/S1570-7946(10)28189-0.
[22] Valeh-e-Sheyda P, Yaripour F, Moradi G, et al. Application of artificial neural networks for estimation of the
reaction rate in methanol dehydration. Ind Eng Chem Res. 2010;49:4620–4626. doi:10.1021/ie9020705
[23] Günay ME, Yildirim R. Neural network analysis of selective CO oxidation over copper-based catalysts for
knowledge extraction from published data in the literature. Ind Eng Chem Res. 2011;50:12488–12500.
doi:10.1021/ie2013955
[24] Vashishtha M. Application of artificial neural networks in prediction of vapour liquid equilibrium data.
Krakow: ECMS 2011; 2011.
[25] Kamat S, Madhavan K. Developing ANN based virtual/soft sensors for industrial problems. IFAC-
PapersOnLine. 2016;49:100–105. doi:10.1016/j.ifacol.2016.03.036
[26] Azzam M, Aramouni NAK, Ahmad MN, et al. Dynamic optimization of dry reformer under catalyst sintering
using neural networks. Energy Convers Manage. 2018;157:146–156. doi:10.1016/j.enconman.2017.11.089
[27] Gong Z, Wu Y, Wu L, et al. Predicting thermodynamic properties of alkanes by high-throughput force field
simulation and machine learning. J Chem Inf Model. 2018;58:2502–2516. doi:10.1021/acs.jcim.8b00407
[28] Wu Z, Tran A, Ren YM, et al. Model predictive control of phthalic anhydride synthesis in a fixed-bed catalytic
reactor via machine learning modeling. Chem Eng Res Des. 2019;145:173–183. doi:10.1016/j.cherd.2019.02.016
[29] Joss L, Müller EA. Machine learning for fluid property correlations: classroom examples with MATLAB. J
Chem Educ. 2019;96:697–703. doi:10.1021/acs.jchemed.8b00692
[30] Zhang Z, Wu Z, Rincon D, et al. Real-Time optimization and control of nonlinear processes using machine
learning. Mathematics. 2019;7:890, doi:10.3390/math7100890
[31] Cavalcanti FM, Schmal M, Giudici R, et al. A catalyst selection method for hydrogen production through water-
gas shift reaction using artificial neural networks. J Environ Manage 2019;237:585–594. doi:10.1016/j.jenvman.
2019.02.092
[32] Poort JP, Ramdin M, van Kranendonk J, et al. Solving vapor-liquid flash problems using artificial neural net-
works. Fluid Phase Equilib. 2019;490:39–47. doi:10.1016/j.fluid.2019.02.023
[33] Wang Y, Ren YM, Li H. Symbolic multivariable hierarchical clustering based convolutional neural networks
with applications in industrial process operating trend predictions. Ind Eng Chem Res. 2020;59:15133–
15145. doi:10.1021/acs.iecr.0c02084
[34] Wu Z, Rincon D, Christofides PD. Process structure-based recurrent neural network modeling for model pre-
dictive control of nonlinear processes. J Process Control. 2020;89:74–84. doi:10.1016/j.jprocont.2020.03.013
[35] Savage T, Almeida-Trasvina HF, del Río-Chanona EA, et al.. An adaptive data-driven modelling and optimiz-
ation framework for complex chemical process design. Comp Aided Chem Eng. 2020;48:73–78. doi:10.1016/
B978-0-12-823377-1.50013-6
[36] Lin M, Wu Y, Rohani S. Simultaneous measurement of solution concentration and slurry density by Raman
spectroscopy with artificial neural network. Cryst Growth Des. 2020;20:1752–1759. doi:10.1021/acs.cgd.
9b01482
[37] Chen S, Wu Z, Christofides PD. A cyber-secure control-detector architecture for nonlinear processes. AIChE J.
2020;66:e16907, doi:10.1002/aic.16907
[38] Khezri V, Yasari E, Panahi M, et al. Hybrid artificial neural network–genetic algorithm-based technique to opti-
mize a steady-state gas-to-liquids plant. Ind Eng Chem Res. 2020;59:8674–8687. doi:10.1021/acs.iecr.9b06477
[39] Peng C, Lu R, Kang O, et al. Batch process fault detection for multi-stage broad learning system. Neural Netw.
2020;129:298–312. doi:10.1016/j.neunet.2020.05.031
[40] Juybar M, Khorrami MK, Garmarudi AB, et al. Determination of acidity in metal incorporated zeolites by infra-
red spectrometry using artificial neural network as chemometric approach. Spectrochim Acta Part A.
2020;228:117539, doi:10.1016/j.saa.2019.117539
[41] Dick S, Fernandez-Serra M. Machine learning accurate exchange and correlation functionals of the electronic
density. Nat Commun. 2020;11:3509, doi:10.1038/s41467-020-17265-7
[42] Cai QQ, Lee BCY, Ong SL, et al. Fluidized-bed Fenton technologies for recalcitrant industrial wastewater treat-
ment–recent advances, challenges and perspective. Water Res 2021;190:116692, doi:10.1016/j.watres.2020.
116692
[43] Alsaffar MA, Ghany MARA, Ali JM, et al. Artificial neural network modeling of thermo-catalytic methane
decomposition for hydrogen production. Top Catal. 2021;64:456–464. doi:10.1007/s11244-020-01409-6
[44] Verma R, Yerolla R, Besta CS. Deep learning-based fault detection in the Tennessee Eastman process. 2022
Second International Conference on Artificial Intelligence and Smart Energy (ICAIS); 2022: 228–233.

[45] Petroli G, Dalmolin I, Brusamarello CZ. Prediction of phase equilibrium between soybean biodiesel, alcohols
and supercritical CO2 using artificial neural networks. Chem Thermodyn Therm Anal. 2022;6:100048, doi:10.
1016/j.ctta.2022.100048
[46] Ranade NV, Nagarajan S, Sarvothaman V, et al. ANN based modelling of hydrodynamic cavitation processes:
biomass pre-treatment and wastewater treatment. Ultrason Sonochem. 2021;72:105428, doi:10.1016/j.ultsonch.
2020.105428
[47] Maddah H, Sadeghzadeh M, Ahmadi MH, et al. Modeling and efficiency optimization of steam boilers by
employing neural networks and response-surface method (RSM). Mathematics. 2019;7:629, doi:10.3390/
math7070629
[48] Zendehboudi A, Hosseini SH, Ahmadi G. Modeling of frost thermal conductivity on parallel surface channels.
Measurement. 2019;140:293–304. doi:10.1016/j.measurement.2019.03.045
[49] Kandula M. Frost growth and densification in laminar flow over flat surfaces. Int J Heat Mass Transf.
2011;54:3719–3731. doi:10.1016/j.ijheatmasstransfer.2011.02.056
[50] Negrelli S, Nascimento VS, Hermes CJL. A study of the effective thermal conductivity of frost formed on par-
allel plate channels. Exp Therm Fluid Sci. 2016;78:301–308. doi:10.1016/j.expthermflusci.2016.06.019
[51] Nakhaei F, Irannajad M, Mohammadnejad S. Column flotation performance prediction: PCA, ANN and image
analysis-based approaches. Physicochem Probl Miner Process. 2019;55:1298–1310.
[52] Vakamalla TR, Vadlakonda B, Aketi VAK, et al. Multiphase CFD modelling of mineral separators performance:
validation against tomography data. Trans Indian Inst Met. 2017;70:323–340. doi:10.1007/s12666-016-0995-4
[53] Jena MS, Biswal SK, Das SP, et al. Comparative study of the performance of conventional and column flotation
when treating coking coal fines. Fuel Process Technol. 2008;89:1409–1415. doi:10.1016/j.fuproc.2008.06.012
[54] Vadlakonda B, Mangadoddy N. Measurement of gas–solid dispersion characteristics in a slurry flotation col-
umn using ERT technique. Trans Indian Inst Met. 2020;73:2129–2140. doi:10.1007/s12666-020-02019-2