
Innovative Food Science and Emerging Technologies 5 (2004) 57–64

Neural networks for the heat and mass transfer prediction during drying
of cassava and mango

J.A. Hernández-Pérez a,*, M.A. García-Alvarado b, G. Trystram a, B. Heyd a

a Joint Research Unit Food Process Engineering (Cemagref, ENSIA, INAPG, INRA), ENSIA, 1 avenue des Olympiades, 91744 Massy Cedex, France
b Departamento de Ingeniería Química y Bioquímica, Instituto Tecnológico de Veracruz, P.O. Box 1420, Veracruz, Ver., Mexico

Abstract

A predictive model for heat and mass transfer using artificial neural networks is proposed in order to obtain on-line predictions of temperature and moisture kinetics during the drying of cassava and mango. The model takes into account shrinkage of the product as a function of moisture content. Two separate feedforward networks with one hidden layer were used (for cassava and mango, respectively). The best fit with the training data set was obtained with three neurons in the hidden layer, which made it possible to predict heat and mass transfer with an accuracy at least as good as the experimental error, over the whole experimental range. On the validation data set, simulations and experimental test kinetics were in good agreement. The developed model can be used for on-line state estimation and control of drying processes.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Shrinkage; Heat and mass transfer; Drying; Neural networks

Industrial relevance: Existing heat and mass transfer models do not permit an adequate control of the air drying process in industrial applications.
Consequently, it was the aim of this very relevant work to test the efficiency of neural networks to model and predict temperature and moisture
transfer during air drying of foodstuff. The data suggest that the model used was successful in predicting the experimental drying kinetics. Since
the models can be realized by simple arithmetic operations they can be applied for on-line estimation of air-drying processes.

1. Introduction

Air-drying is an essential procedure in food processing industries. On-line state estimation and control of an air drying operation require the mathematical description of food temperature and moisture evolution during the process. The dynamics of the food drying process involve simultaneous heat and mass transfer, where water is transferred by diffusion from the inside of the food material towards the air–food interface, and from the interface to the air stream by convection. Heat is transferred by convection from the air to the air–food interface and by conduction to the interior of the food (Balaban & Piggot, 1988; Karathanos, Villalobos & Saravacos, 1990; Kiranoudis, Maroulis & Marinos-Kouris, 1993; Zogzas & Maroulis, 1996). This phenomenon has been modeled with different levels of complexity. Existing models do not permit an adequate control of the air drying process in industrial applications. Physical dynamic models, considering the complexity of the process, usually result in coupled non-linear partial differential equations, whose numerical simulation requires specialized software and is very time consuming. It is important to mention that such models are able to predict over a wide range of operating conditions. These equations can be simplified (Courtois, Lebert, Duquenoy, Lasseran & Bimbenet, 1991; Hernández, Pavón & García, 2000a), although the simplified versions do not capture the full complexity of the process and still contain ordinary non-linear differential equations that take too long to simulate for control applications (Trelea, Raoult-Wack & Trystram, 1997b). Empirical model representations approximate the drying kinetics by several line segments (Daudin, 1982), high order polynomials (Techasena, Lebert & Bimbenet, 1992), neural networks (Dornier et al., 1993), etc. These models have a narrower validity range, but require only a limited number of simple arithmetic operations for simulation, and can easily be incorporated in control software.

*Corresponding author. Tel.: +33-1-69-93-51-10; fax: +33-1-69-93-51-85.
E-mail address: jherp@massy.inra.fr (J.A. Hernández-Pérez).

1466-8564/04/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/S1466-8564(03)00067-5

Shrinkage is a very important factor in mathematical modeling for the prediction of food drying kinetics. Mulet (1994) tested models with different levels of complexity and concluded that shrinkage is a key factor that cannot be neglected in drying. Models that take the shrinkage of the product into account consider volume as a function of moisture content. Shrinkage of biological products during drying is not perfectly homogeneous (Ratti, 1994; Hernández et al., 2000a). It is interesting to mention that there is a deviation of the kinetics predictions at lower moisture levels (X<0.05), even when shrinkage is considered during food drying (Hernández et al., 2000a).

Fig. 1. The neural network computational model. k = number of inputs; In = inputs; Out = output; thick lines = weights and biases.
The progress of neurobiology has allowed researchers to build mathematical models of neurons to simulate neural behavior. Neural networks are recognized as good tools for dynamic modeling, and have been extensively studied since the publication of the perceptron identification method (Rumelhart & Zipner, 1985). The interest of such models lies in modeling without any assumptions about the nature of the underlying mechanisms, and in their ability to take into account non-linearities and interactions between variables (Bishop, 1994). Recent results establish that it is always possible to identify a neural model based on the perceptron structure, with only one hidden layer, for either steady state or dynamic operations (Hornik, Stinchcombe & White, 1989; Hornik, 1993). An outstanding feature of neural networks is their ability to learn the solution of a problem from a set of examples, and to provide a smooth and reasonable interpolation for new data. In the field of food process engineering, they are also a good alternative to conventional empirical modeling based on polynomial and linear regressions. For food processes, the application of neural networks keeps on expanding (Linko & Zhu, 1992; Huang & Mujumdar, 1993; Zhu, Rajalahti & Linko, 1996; Linko, Luopa & Zhu, 1997; Trelea, Courtois & Trystram, 1997a; Hernández-Pérez, Ramírez-Figueroa, Rodriguez-Jimenes & Heyd, 2000b).

The aim of the present work was to test the relevance and efficiency of neural networks to model and predict the temperature and moisture transfer during air drying of foodstuffs. The model validation was made with experimental drying data of cassava parallelepipeds and mango slices.

2. Materials and methods

2.1. Neural network systems

Neural networks are composed of simple elements operating in parallel. As in nature, the network function is determined largely by the connections between elements (neurons); each connection between two neurons has a weight coefficient attached to it. The neurons are grouped into distinct layers and interconnected according to a given architecture. The standard network structure for function approximation is the multiple layer perceptron (or feedforward network). The feedforward network often has one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with non-linear transfer functions allow the network to learn non-linear and linear relationships between input and output vectors. The linear output layer lets the network produce values outside the range −1 to +1 (Limin, 1994). For multiple-layer networks, the number of the layer determines the superscript on the weight matrices; the corresponding notation is used here for two-layer networks. A simple view of the network structure and behavior is given in Fig. 1.

The numbers of neurons in the input and output layers are given by the numbers of input and output variables of the process under investigation. In this work, the input layer consists of five process variables (air temperature, air velocity, shrinkage, time and air humidity), and the output layer contains two variables (temperature and moisture of the product), for both mango and cassava. The optimal number of neurons in the hidden layer(s), ns, is difficult to specify: it depends on the type and complexity of the task, and is usually determined by trial and error. Each neuron in the hidden layer has a bias b, which is added to the weighted inputs to form the neuron input n. This sum, n, is the argument of the transfer function f:

n = Wi{1,1} In1 + Wi{1,2} In2 + … + Wi{1,k} Ink + b

The coefficients associated with the hidden layer are grouped into the matrices Wi1 (weights) and b1 (biases). The output layer computes the weighted sum of the signals provided by the hidden layer; the associated coefficients are grouped into the matrices Wo2 (weights) and b2 (biases). Using matrix notation, the network output can be given by Eq. (1):

Out = f′[Wo2 × f(Wi1 × In + b1) + b2]   (1)

Table 1
Experimental conditions studied for mango; * learning database, + test database

File number   Initial thickness Lo (cm)   Air temperature T (°C)   Air velocity Va (m/s)   Time t (min)
* M1    0.5    50   0.5    0–600
+ M2    0.75   50   0.5    0–600
* M3    1.0    50   0.5    0–600
+ M4    0.5    60   0.5    0–600
* M5    0.75   60   0.5    0–600
* M6    1.0    60   0.5    0–600
* M7    0.5    70   0.5    0–600
* M8    0.75   70   0.5    0–600
+ M9    1.0    70   0.5    0–600
* M10   0.5    50   1.75   0–600
* M11   0.75   50   1.75   0–600
+ M12   1.0    50   1.75   0–600
+ M13   0.5    60   1.75   0–600
* M14   0.75   60   1.75   0–600
* M15   1.0    60   1.75   0–600
* M16   0.5    70   1.75   0–600
+ M17   0.75   70   1.75   0–600
* M18   1.0    70   1.75   0–600
+ M19   0.5    50   3.0    0–600
* M20   0.75   50   3.0    0–600
* M21   1.0    50   3.0    0–600
* M22   0.5    60   3.0    0–600
* M23   0.75   60   3.0    0–600
+ M24   1.0    60   3.0    0–600
* M25   0.5    70   3.0    0–600
+ M26   0.75   70   3.0    0–600
* M27   1.0    70   3.0    0–600

Table 2
Experimental conditions studied for cassava; * learning database, + test database

File number   Initial thickness Lo (cm)   Air temperature T (°C)   Air velocity Va (m/s)   Time t (min)
* C1    1   70   0.5    0–600
* C2    3   50   0.5    0–600
* C3    1   70   1.75   0–600
* C4    3   50   1.75   0–600
* C5    1   50   3.0    0–600
* C6    1   60   3.0    0–600
+ C7    1   70   3.0    0–600
* C8    2   50   3.0    0–600
+ C9    2   60   3.0    0–600
* C10   2   70   3.0    0–600
+ C11   3   50   3.0    0–600
* C12   3   60   3.0    0–600
* C13   3   70   3.0    0–600

Fig. 2. Recurrent network architecture for the drying kinetics, considering the shrinkage of the product, and the procedure used for learning the neural network.

Hidden layer neurons may use any differentiable transfer function to generate their output. In this work, a hyperbolic tangent sigmoid transfer function and a linear transfer function were used for f and f′, respectively (Demuth & Beale, 1998). The number of network coefficients (weights and biases) is given by Eq. (2):

m = ns(In + 1) + Out(ns + 1)   (2)

2.2. Learning algorithm

A learning (or training) algorithm is a procedure that adjusts the coefficients (weights and biases) of a network to minimize an error function (usually a quadratic one) between the network outputs for a given set of inputs and the corresponding known outputs. If smooth non-linearities are used, the gradient of the error function can be easily computed by the classical back propagation procedure (Rumelhart, Hinton & Williams, 1986). Early learning algorithms used this gradient directly in a steepest descent optimization, but more recent results show that second order methods are far more effective. In this work, the Levenberg–Marquardt optimization procedure of the Neural Network Toolbox of Matlab (Demuth & Beale, 1998) was used. The Levenberg–Marquardt algorithm is an approximation of Newton's method, designed to approach second order training speed without having to compute the Hessian matrix (Martin, Hagan & Mohammad, 1994). Although the computations involved in each iteration are more complex than in the steepest descent case, the convergence is faster, typically by a factor of 100. The root mean square error (RMSE) between the experimental values and the network predictions was used as the criterion of model adequacy; this is shown in Fig. 2.

2.3. Database preparation

Experimental data provided by Hernández et al. (2000a), consisting of heat and mass transfer kinetics of 4×4 cm mango slices of 0.5, 0.75 and 1 cm thickness, and of 10 cm long cassava parallelepipeds of 1.0, 2.0 and 3.0 cm thickness, were used. The experimental drying kinetics were carried out at three different

Fig. 3. Test root mean square error (RMSE) vs. iteration number for mango and cassava, and various numbers of hidden neurons.

air temperatures (50, 60 and 70 °C) and at three air velocities (0.5, 1.75 and 3.0 m/s), with a duration of 600 min for each kinetic. This resulted in 27 and 13 experimental kinetics for mango and cassava, respectively, with one repetition for each. The experimental files were split into learning and test databases so as to obtain a good representation of the diversity of situations (see Tables 1 and 2, which give the experimental conditions studied for each kinetic of mango and cassava, respectively). The inputs (In) of the network were air temperature (T)/100, air velocity (Va)/5, shrinkage (Lv), time (t)/700, and air humidity (Xa); the outputs (Out) were food temperature and moisture content expressed in dimensionless form (Ĉ, Û) [see Eq. (4)]. Food moisture evolution during drying was calculated from the sample weight loss of the product, and food temperature was obtained with thermocouples inserted in the bottom of the product, at times t = 0, 15, 30, 45, 60, 75, 90, 105, 120, 150, 180, 210, 240, 300, 360, 420, 480, 540 and 600 min. Wet and dry bulb temperatures of the ambient air were measured with glass thermometers to estimate the air moisture content (Xa). Eq. (3), proposed by Hernández et al. (2000a), was used to determine the variation of shrinkage during the drying kinetics of mango and cassava; this equation is a function of the moisture content of the product:

Lv = L0 [ΔLf + (1 − ΔLf)(X/X0)]   (3)

where Lv represents the thickness (cm), L0 is the initial thickness (cm), ΔLf is the fraction of the initial slab thickness at the end of drying (ΔLf = 0.55 for mango and ΔLf = 0.68 for cassava), and X and X0 are the moisture and initial moisture of the product, respectively (g water/g dry matter).

The learning database was used to optimize the neural network, and the test database was reserved for the validation of the predictive capability of the model.

3. Results and discussion

The proposed model [Eq. (4)] involved three neurons, ns = 3, in the hidden layer (21 weights and 5 biases) to determine the temperature and moisture evolution in dimensionless form for mango and cassava, one network for each product. This model includes the effect of shrinkage (Lv) as a function of the last moisture content. It is important to mention that the mango slices were dried on one side only, whereas the cassava parallelepipeds were suspended so that drying took place on four sides (two dimensions) (Hernández et al., 2000a). The parallelepiped dimensions are the same and the drying is homogeneous on the four sides, so a single Lv was used as input for both mango and cassava.
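Eq. (3) is a one-line linear interpolation between the initial thickness and its final fraction, so it is simple to implement. A sketch (the initial moisture X0 below is an illustrative value, not taken from the paper; only the ΔLf values come from the text):

```python
def thickness(X, X0, L0, dLf):
    """Eq. (3): Lv = L0 [dLf + (1 - dLf)(X/X0)].

    X, X0 : current and initial moisture (g water/g dry matter)
    L0    : initial thickness (cm)
    dLf   : fraction of the initial thickness left at the end of drying
            (0.55 for mango, 0.68 for cassava in this work)
    """
    return L0 * (dLf + (1.0 - dLf) * X / X0)

# A fully wet slice keeps its initial thickness...
print(thickness(X=6.0, X0=6.0, L0=0.5, dLf=0.55))   # 0.5
# ...and tends towards dLf * L0 as the moisture goes to zero.
print(thickness(X=0.0, X0=6.0, L0=0.5, dLf=0.55))   # 0.275
```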

Fig. 4. Comparison of learning and test RMSE vs. the number of neurons in the two cases (mango and cassava).

where Û is the dimensionless food temperature, Ta represents the air temperature, To is the initial food temperature, Ĉ represents the dimensionless moisture of the product, and Xe is the equilibrium moisture (g water/g dry matter). This moisture (Xe) was determined using the relationship between air moisture and the sorption isotherms that were experimentally obtained for mango and cassava (Eqs. (5) and (6)), as reported by De La Rocha Soto (1988):

Mango: aw = 1 − exp[−exp(0.9154 + 0.5639(ln X))]   (5)

Cassava: aw = 1 − exp[−exp(−44.8 + 8.66(ln T) − 6.41(ln X) + 0.049 T(ln X) − 6.7×10⁻⁵ T²(ln X))]   (6)

The moisture ratio (X/X0) in Eq. (3) was determined from the last network output, which forms a closed loop for (Lv); this is represented in Eq. (4).

3.1. Learning bases of the model

Fig. 3 gives the test RMSE against the iteration number in the case of mango and cassava, for one to six neurons in the hidden layer. These results showed that the typical learning error decreased when the number of neurons in the hidden layer increased, but for two or more neurons in the hidden layer for mango (respectively, three or more neurons for cassava), an additional increase in structure complexity did not strongly decrease the RMSE.

One of the problems that occur during feedforward neural network training is called 'over-fitting': the RMSE on the training set is driven to a very small value, but when new data (e.g. the test database) are presented to the network, the RMSE is large. Therefore, the RMSE on the test database is a good criterion to optimize the number of iterations and avoid over-fitting. Fig. 3 shows that the training error is small for mango with two to six neurons, and for cassava with four to six neurons. To determine the number of neurons in the hidden layer, we plotted the RMSE (learning and test bases) against the number of neurons; this is represented in Fig. 4. It is evident that the error on the learning database decreases when the number of neurons increases; this is another cause of over-fitting, because the error on the test database can increase at the same time.

Table 3
Characteristics of the best neural network for mango and cassava

Mango: 3 hidden neurons (100 iterations); RMSE learn 0.0550 and test 0.0517
Wi1:
−0.1035   −0.0665   −0.1367   −7.2518    0.0500
 0.4369   −0.5174    1.7002   −3.2170    8.2998
 0.4172   −0.5266    1.6910   −3.4127    8.3539
Wo2:
 9.2722    5.3779   −4.9553
17.5940    1.2053   −1.0138
b1:
−1.8564
 1.1365
 1.2656
b2:
 9.6197
17.7911

Cassava: 3 hidden neurons (100 iterations); RMSE learn 0.0559 and test 0.0552
Wi1:
−0.6747   −0.0856   −0.4637   −3.9055   −6.8911
 2.5133    0.4588   −4.0862    2.0404   −2.4127
−6.7229    2.7800   18.3839    0.1518  −98.4576
Wo2:
 2.8522   −0.2957    0.0556
 3.7302   −0.1002    0.1021
b1:
−0.4892
−1.1895
−0.2923
b2:
 3.1475
 3.9383
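The closed loop described above — the moisture ratio X/X0 predicted by the network is fed back through Eq. (3) to update the shrinkage input Lv at each sampling time (Fig. 2) — can be sketched as follows. The weights here are random placeholders (the fitted coefficients are those of Table 3, whose exact input ordering the table does not restate), the thickness is expressed with L0 = 1, and feeding the moisture output back directly as the ratio X/X0 is a simplifying assumption:

```python
import numpy as np

def simulate_kinetics(Wi1, b1, Wo2, b2, T, Va, Xa, dLf, times):
    """Recurrent use of the two-layer network: at each sampling time the
    shrinkage input Lv is recomputed via Eq. (3) from the previously
    predicted moisture ratio. Inputs scaled as in the text: T/100, Va/5, t/700."""
    x_ratio = 1.0                     # X/X0 starts at 1 for the fresh product
    kinetics = []
    for t in times:
        Lv = dLf + (1.0 - dLf) * x_ratio               # Eq. (3), with L0 = 1
        In = np.array([T / 100.0, Va / 5.0, Lv, t / 700.0, Xa])
        temp_hat, moist_hat = Wo2 @ np.tanh(Wi1 @ In + b1) + b2   # Eq. (1)
        x_ratio = float(np.clip(moist_hat, 0.0, 1.0))  # close the loop on Lv
        kinetics.append((t, temp_hat, moist_hat))
    return kinetics

# Placeholder weights, for the loop structure only
rng = np.random.default_rng(1)
kin = simulate_kinetics(rng.normal(size=(3, 5)), rng.normal(size=3),
                        rng.normal(size=(2, 3)), rng.normal(size=2),
                        T=60.0, Va=1.75, Xa=0.01, dLf=0.55,
                        times=range(0, 601, 60))
print(len(kin))   # one (t, temperature, moisture) triple per sampling time
```

Because each step needs only one matrix–vector product and a tanh per hidden neuron, this loop is cheap enough for the on-line use advocated in the paper.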

Fig. 5. Simulated vs. experimental moisture data for the test databases of mango and cassava.
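The sorption isotherms of Eqs. (5) and (6), used above to obtain the equilibrium moisture Xe, are direct to evaluate. A sketch (the moisture values in the check are chosen for illustration only):

```python
import math

def aw_mango(X):
    """Eq. (5): water activity of mango vs. moisture X (g water/g dry matter)."""
    return 1.0 - math.exp(-math.exp(0.9154 + 0.5639 * math.log(X)))

def aw_cassava(X, T):
    """Eq. (6): water activity of cassava vs. moisture X and temperature T (deg C)."""
    lnX = math.log(X)
    return 1.0 - math.exp(-math.exp(-44.8 + 8.66 * math.log(T) - 6.41 * lnX
                                    + 0.049 * T * lnX - 6.7e-5 * T**2 * lnX))

# Water activity should stay in (0, 1) and rise with moisture content
print([round(aw_mango(X), 3) for X in (0.05, 0.1, 0.5, 1.0)])
print(round(aw_cassava(0.1, 60.0), 3))
```

In practice Xe is found by inverting these relations for the measured air water activity, which is how the equilibrium moisture enters the dimensionless output Ĉ.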

The optimal number of neurons in the hidden layer is three for both mango and cassava.

Table 3 gives the best fits of the proposed model for this number of hidden neurons and iterations, as well as the weight values, bias values, and learning and test RMSE, for mango and cassava, respectively. On the whole, the learning and test RMSE were similar, which accounts for a good generalization capability of the neural network.

3.2. Validation of the proposed model

Fig. 5 presents the simulated moisture results against the experimental moisture data for the test databases of mango and cassava. It shows that the moisture prediction was correct. However, for the simulated temperature results vs. the experimental temperature data of the mango test database (Fig. 6), the food temperature prediction was less accurate (r² = 0.91) than the moisture prediction (r² = 1). Figs. 7 and 8 depict the ability of the models to predict drying kinetics at different thicknesses, temperatures and air velocities within the validity range. Fig. 7 shows simulated moisture results and experimental data from the test database (M4, M12, M26, C7, C9 and C11). Fig. 8 presents simulated temperature results and experimental data from the test database (M4, M12, M26, C7, C9 and C11). It is evident that the model was successful in predicting

Fig. 6. Simulated vs. experimental temperature data for the test databases of mango and cassava.

Fig. 7. Experimental data and simulated curves generated with the proposed model for the drying kinetics of mango and cassava. The symbols represent experimental moisture evolution at different conditions.

the experimental drying kinetics. This shows the relevance of artificial neural networks for simulating the drying curves of foodstuffs. These models are not complex, because simulation is realized by simple arithmetic operations; therefore, they can be used for on-line estimation in air drying processes in industrial applications.

4. Conclusions

This study shows that neural network modeling can be used to obtain good quality simulations of the moisture transfer during food drying, and less accurate ones for the temperature prediction, over a wide experimental range. This neural network modeling was validated with experimental drying data. The technological interest of this kind of modeling lies in the fact that it is elaborated without any preliminary assumptions on the underlying mechanisms. In addition, it is simple and fast. Artificial neural networks can thus be used for the on-line state estimation and control of drying processes.

Fig. 8. Experimental data and simulated curves generated with the proposed model for the drying kinetics of mango and cassava. The symbols represent experimental temperature evolution at different conditions.

5. Nomenclature

aw: Food water activity
b1, b2: Bias matrices
In: Input
k: Number of inputs
L0: Initial thickness (cm)
Lv: Thickness (cm)
ns, n: Number of neurons in the hidden layer; neuron input
Out: Output
T: Air temperature (°C)
t: Time (min)
U, Û: Food temperature and its dimensionless form
Va: Air velocity (m/s)
Wi, Wo: Weight matrices
Xa: Air moisture content (g water/g dry air)
X, X0 and Xe: Food moisture content: at each time, initial and at equilibrium, respectively (g water/g dry matter)
Greek:
C, Ĉ: Food moisture and its dimensionless form
ΔLf: Fraction of the initial characteristic length at the end of the drying period (cm)

References

Balaban, M., & Piggot, G. M. (1988). Mathematical model of simultaneous heat and mass transfer in food with dimensional changes and variable transport parameters. Journal of Food Science, 53(3), 935–939.
Bishop, C. M. (1994). Neural networks and their applications. Reviews on Scientific Instrumentation, 65(6), 1803–1832.
Courtois, F., Lebert, A., Duquenoy, A., Lasseran, J. C., & Bimbenet, J. J. (1991). Modeling of drying in order to improve processing quality of maize. Drying Technology, 9(4), 927–945.
Daudin, J. D. (1982). Modélisation d'un séchoir à partir des cinétiques expérimentales de séchage. Ph.D. thesis, ENSIA, Massy, France.
De La Rocha Soto, J. N. (1988). Modelación Matemática de las Propiedades Termodinámicas del Agua Contenida en el Mango Manila. Tesis de Maestría. Instituto Tecnológico de Veracruz.
Demuth, H., & Beale, M. (1998). Neural Network Toolbox for Matlab—User's Guide Version 3. Natick, MA: The MathWorks Inc.
Dornier, M., Rocha, T., Trystram, G., Bardot, I., Decloux, M., & Lebert, A. (1993). Application of neural computation for dynamic modeling of food processes: drying and microfiltration. Artificial Intelligence for Agriculture and Food (AIFA Conference), Nîmes, France, 27–29 October, pp. 233–240.
Hernández, J. A., Pavón, G., & García, M. A. (2000a). Analytical solution of mass transfer equation considering shrinkage for modeling food-drying kinetics. Journal of Food Engineering, 45, 1–10.
Hernández-Pérez, J. A., Ramírez-Figueroa, E., Rodriguez-Jimenes, G., & Heyd, B. (2000b). Spray dried yogurt water sorption isotherms prediction using artificial neural network. 12th International Drying Symposium, 28–31 August, p. 56.
Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359–366.
Hornik, K. (1993). Some new results on neural network approximation. Neural Networks, 6, 1069–1072.
Huang, B., & Mujumdar, A. (1993). Use of neural networks to predict industrial dryer's performances. Drying Technology, 11, 525–541.
Karathanos, V. T., Villalobos, G., & Saravacos, G. D. (1990). Comparison of two methods of estimation of the effective moisture diffusivity from drying data. Journal of Food Science, 55(1), 218–223.
Kiranoudis, C. T., Maroulis, Z. B., & Marinos-Kouris, D. (1993). Heat and mass transfer modeling in air drying of foods. Journal of Food Engineering, 26, 329–348.
Limin, F. (1994). Neural networks in computer intelligence. McGraw–Hill International Series in Computer Science.
Linko, P., & Zhu, Y.-H. (1992). Neural networks for real time variable estimation and prediction in the control of glucoamylase fermentation. Progress in Biochemistry, 27, 275–283.
Linko, S., Luopa, J., & Zhu, Y.-H. (1997). Neural networks as 'software sensors' in enzyme production. Journal of Biotechnology, 52, 257–266.
Martin, T., Hagan, M. T., & Mohammad, B. N. (1994). Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 6(5), 989–993.
Mulet, A. (1994). Drying modeling and water diffusivity in carrots and potatoes. Journal of Food Engineering, 22, 329–348.
Ratti, C. (1994). Shrinkage during drying of foodstuffs. Journal of Food Engineering, 23, 91–105.
Rumelhart, D., & Zipner, D. (1985). Feature discovering by competitive learning. Cognitive Science, 9, 75–112.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. Parallel Distributed Processing, 1, 318–362.
Techasena, O., Lebert, A., & Bimbenet, J. J. (1992). Simulation of deep bed drying of carrots. Journal of Food Engineering, 16, 267–281.
Trelea, I. C., Courtois, F., & Trystram, G. (1997a). Dynamic models for drying and wet-milling quality degradation of corn using neural networks. Drying Technology, 15(3&4), 1095–1102.
Trelea, I. C., Raoult-Wack, A. L., & Trystram, G. (1997b). Note: application of neural network modeling for the control of dewatering and impregnation soaking process (osmotic dehydration). Food Science and Technology International, 3, 459–465.
Zhu, Y.-H., Rajalahti, T., & Linko, S. (1996). Application of neural networks to lysine production. Biochemical Engineering Journal, 62, 207–214.
Zogzas, N. P., & Maroulis, Z. B. (1996). Effective moisture diffusivity estimation from drying data. A comparison between various methods of analysis. Drying Technology, 14(7&8), 1543–1573.
