Food Research International 34 (2001) 55-65

www.elsevier.com/locate/foodres

Thermal process calculations using artificial neural network models

M. Afaghi a, H.S. Ramaswamy a,*, S.O. Prasher b

a Department of Food Science and Agricultural Chemistry, Macdonald Campus of McGill University, 21111 Lakeshore Road, Ste. Anne-de-Bellevue, Quebec, Canada H9X 3V9
b Department of Agricultural and Biosystems Engineering, Macdonald Campus of McGill University, 21111 Lakeshore Road, Ste. Anne-de-Bellevue, Quebec, Canada H9X 3V9

Received 22 September 1999; accepted 6 June 2000

Abstract
In this study, artificial neural network (ANN) models were evaluated as potential alternatives to conventional thermal process calculation methods. An ANN is a computing system capable of processing information through its dynamic response to external inputs. ANNs learn from examples through iteration, adjusting their internal structure to match the pattern between input and output variables. Finite difference simulations, which are widely recognized as practical alternatives to experimental methods, were used to generate temperature profiles under thermal processing conditions for a wide range of can sizes and operating conditions. The time-temperature data so gathered were used to evaluate the heat penetration parameters fh, jch, fc and jcc, as well as to compute process lethality and process time. These data were used for developing the ANN models. Selected formula methods were also used to calculate the respective process times and process lethalities. The accuracy and ability of the ANN models were compared with the formula methods, with respect to both process time and process lethality computations, using data from the finite difference model as the reference. Process calculation results from the ANN models were comparable to, and sometimes better and more flexible than, those of the currently available Pham and Stumbo methods. © 2001 Elsevier Science Ltd. All rights reserved.
Keywords: Artificial neural network; Thermal process calculations; Lethality; Process time

1. Introduction

Thermal processing of packaged foods is one of the most widely used methods of preservation in the twentieth century (Teixeira & Tucker, 1997). Nicolas Appert introduced this method for the first time in 1810. The concept of thermal processing is based on heating the packaged food for a certain length of time at a certain temperature to obtain a safe product complying with public health standards. Associated with thermal processing there is always some undesirable degradation of heat-sensitive quality factors. In order to meet the consumer's demand for safe and shelf-stable food products with high quality attributes, processing schedules are designed to keep the process time to the required minimum.

The main objective of thermal process calculations is to determine the process time for achieving a pre-selected process lethality, or to evaluate the lethality of a given process. Bigelow, Bohart, Richardson and Ball (1920) first introduced a graphical procedure for evaluating the efficiency of a heat treatment process for packaged foods. This method was the basis of a group of process calculation methods, which were later termed "General" methods. The general method is the most accurate method for a given experimental condition, as it makes use of real time-temperature data for the process calculations. However, the application of this method can be tedious and time consuming because, for each variation in processing conditions, food product or can size, a new set of time-temperature data is required. Formula methods, on the other hand, are based on linking characterized heat penetration parameters, such as the heating rate index (fh) and the heating/cooling lag factors (jch/jcc), to destruction kinetics. The formula methods are somewhat less restrictive than the general method and can accommodate variations in product, container and processing parameters. The following are the more commonly used formula methods for process calculations.

* Corresponding author. Tel.: +1-514-398-7970; fax: +1-514-398-7977. E-mail address: ramaswamy@macdonald.mcgill.ca (H.S. Ramaswamy).

0963-9969/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved.
PII: S0963-9969(00)00132-0

1.1. Ball method

Introduced in 1923, the Ball formula method is the most widely used method in the food industry. The temperature prediction equations are based on the observation that the semi-logarithmic plot of the temperature difference between product and heating medium is a straight line after an initial lag time. Process time and process lethality are calculated using the processing conditions (retort temperature and initial temperature) and tables or graphs of related parameters (fh/U and g); the achieved process lethality and the process time are computed from each other. Development of these tables and graphs was carried out with respect to some limiting assumptions. The most significant assumptions were a constant cooling lag factor (jcc) equal to 1.41 and equal heating and cooling rate indexes (fh = fc). Obviously, for processes deviating from these assumptions, the Ball method will be inaccurate.

1.2. Stumbo method

While revising the Ball method to increase its accuracy, Stumbo and Longley (1966) published a new set of tables accommodating the variability with respect to jcc. These tables were originally based on data from hand-drawn heat penetration curves, which resulted in some inaccuracies in the method. In subsequent works, the parameters of the tables were recalculated using a finite difference solution to predict the time-temperature data of the product (Stumbo, 1973).

1.3. Pham method

This method is an improved version of the Stumbo method (Pham, 1987, 1990) and is based on two ranges of sterilization values: high sterilization values (low g-values) and low sterilization values. The procedure for process time and process lethality calculation in this method is quite similar to the Ball method. This method was reported to be at least as accurate as the Stumbo method (Pham, 1990).
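To make the formula-method idea concrete, the sketch below evaluates the classical Ball formula for the operator's process time, B = fh log10(jch Ih / g), where Ih = TR - Ti is the initial temperature difference and g is the difference between the retort and product temperatures at the end of heating. It is a minimal illustration only: the numerical values are hypothetical, and in practice g would be obtained from fh/U versus g tables (Ball, Stumbo or Pham) for the required lethality.

```python
import math

def ball_process_time(fh, jch, retort_temp, initial_temp, g):
    """Operator's process time from the Ball formula: B = fh * log10(jch * Ih / g)."""
    ih = retort_temp - initial_temp          # initial temperature difference, Ih
    return fh * math.log10(jch * ih / g)

# Hypothetical heat penetration parameters, for illustration only; g would
# normally be read from fh/U:g tables for the target process lethality.
print(ball_process_time(fh=40.0, jch=1.8, retort_temp=121.1, initial_temp=71.1, g=1.5))
```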

1.4. Finite difference models

Finite difference models of heat transfer into packaged food have been successfully applied in optimization and control (Datta, Teixeira & Manson, 1986; Teixeira, Dixon, Zahradnik & Zinsmeister, 1969a,b; Teixeira & Manson, 1982; Teixeira & Tucker, 1997; Tucker, 1991). The main feature of these models is the prediction of the temperature profile based on the governing heat transfer equations of packaged food products. Such models require several input data related to the food product and system, such as the thermal diffusivity of the food product and the heat transfer coefficients of the heating and cooling medium. When these conditions are known, time-temperature data at any specific location of the product can be obtained by solving the appropriate governing equations. In most of these studies, an analytical solution was applied for conduction heat transfer in a finite cylinder with infinite heat transfer coefficients at the container walls and a uniform initial temperature, or a numerical solution was used to solve these equations. These models have largely replaced the need to carry out experiments for routine data-gathering when the boundary conditions are well defined, providing an accurate time-temperature history.

1.5. Artificial neural networks

Artificial neural networks (ANNs) have been the focus of interest in many diverse fields of science and technology. Neural networks are basically computer models which simulate very simple abilities of our brain. Rather than being programmed for use in a particular application, neural network models generate their own rules by learning from provided examples; as they obtain this ability through the stage of learning, they are known as truly adaptive systems. ANN models are thought to be robust to noise and inconsistencies in data. An ANN has the ability of approximating arbitrary continuous functions based on a set of given observations, without any prior knowledge of the nature of the relationships between the set of parameters. Also, variability of multiple parameters in the development of an ANN model is possible. Because of their ability to handle multiple-input and multiple-output systems (Baughman & Liu, 1995), ANN models are more advantageous compared to empirical, statistical or parametric models.

ANNs have been used as a modeling tool in several food processing applications. NeuralWare (1996) provides a wide overview of potential applications of a neural network in classification, prediction, data association and optimization. Recently, ANN models have been demonstrated to perform better than conventional models based on regression. Bochereau, Bourgine and Palagos (1992) developed a neural-network-based method for the prediction of apple juice quality using near infrared spectra, which performed better (R2 = 0.925) than traditional linear regression techniques (R2 = 0.822). Parmer, McClendon, Hoogenboom, Blankenship, Cole and Doner (1997) applied neural network modeling for the estimation of aflatoxin contamination in peanuts; they found the ANN model to better explain the associated experimental variability than the conventional regression models. Ni and Gunasekaran (1998) found the complex task of food quality prediction simplified by the use of ANN models. Neural networks were used to determine the significant variables in harvesting and processing effects on surimi quality by Peters, Morrissey, Sylvia and Bolte (1996). As an application in drying, Sreekanth, Ramaswamy and Sablani (1998) developed an ANN model for the prediction of psychrometric parameters with less than 4% prediction error. Sablani, Ramaswamy and Prasher (1995) applied neural network models to predict optimum thermal processing conditions, i.e. optimal sterilization temperatures (0.5 °C), which resulted in the least quality degradation, and Sablani, Ramaswamy and Prasher (1997) compared ANN models with conventional dimensionless models for modeling fluid-to-particle heat transfer in agitation processing, finding the former to be more accurate and versatile than the latter. Kim and Cho (1997) developed three ANN models for the bread baking process, each predicting quality factors of volume, browning and temperature, which were later useful for determining factors that affect the final product quality in a multi-process operation. The developed ANN models were part of a fuzzy controller simulation of the oven used for the baking process; application of such a controller resulted in a decrease of the heating cost of the oven without any loss of bread quality.

The objective of this study was to develop ANN-based models as potential alternatives to existing thermal process calculation methods. The ANN models were to be trained and tested using data obtained from a computer-based finite difference model under a wide range of conditions appropriate to thermal processing of canned foods, and their accuracy compared against existing methods of process calculations.

2. Materials and methods

2.1. Finite difference model

A finite difference program was applied to obtain a general and appropriate training data set for development of the ANN models. The model was based on a numerical solution of unsteady state heat conduction for an object of cylindrical geometric shape, providing the transient temperature distribution throughout the container. The Crank-Nicolson scheme was used for the first and second order spatial derivatives appearing in the heat flow equations, a backward difference scheme for the first order derivative in the boundary equations, and an implicit method for the time derivatives. Since the transformed equations contain temperature values of the next time steps, it was necessary to employ an iterative technique in the solution procedure. The finite difference program was written in Fortran (Sablani, 1996).

At the beginning of the process, all the interior points of the cylinder were set to the initial temperature of the product, while the temperature at the surface was set at the retort temperature. With a known set of initial conditions, these equations were solved at each time interval, and the temperature distribution at the end of each time interval was used to set the initial conditions for the following time interval. This procedure was continued for a pre-determined process time, during which the temperature profile of the product was computed. The same procedure was applied for cooling of the product, by changing the ambient temperature to the cooling water temperature and continuing the calculation process.

In order to optimize the required time step and space step sizes, the time-temperatures predicted from this model were verified against the data obtained from the analytical solution of heat conduction in a finite cylinder. Several time steps (1, 5, 10 and 20 s) and four grid sets in both the horizontal and vertical axes (5 x 5, 10 x 10, 15 x 15 and 20 x 20) were examined.
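The Fortran program itself is not reproduced in the paper. The following is a minimal Python sketch of the same kind of calculation, written here with a simple explicit scheme rather than the implicit Crank-Nicolson scheme used by the authors, and with the can surface held at the medium temperature (infinite surface heat transfer coefficient), as in the analytical verification case. All dimensions, property values and process times are illustrative assumptions, not values from the study.

```python
import numpy as np

def simulate_can_centre(alpha=1.5e-7, radius=0.04, height=0.11,
                        T0=70.0, T_retort=120.0, T_cool=20.0,
                        heat_time=4500.0, cool_time=2700.0,
                        nr=11, nz=11, dt=2.0):
    """Centre-point time-temperature history for conduction heating of a finite
    cylinder, using explicit finite differences on an axisymmetric r-z grid.
    Surface nodes are held at the medium temperature (heating, then cooling)."""
    dr, dz = radius / (nr - 1), height / (nz - 1)
    r = np.linspace(0.0, radius, nr)
    T = np.full((nr, nz), T0)
    history = []
    steps = int((heat_time + cool_time) / dt)
    for n in range(steps):
        t = (n + 1) * dt
        T_med = T_retort if t <= heat_time else T_cool
        Tn = T.copy()
        for i in range(nr - 1):            # radial nodes except the wall
            for j in range(1, nz - 1):     # axial nodes except the two ends
                d2z = (T[i, j + 1] - 2 * T[i, j] + T[i, j - 1]) / dz ** 2
                if i == 0:
                    # axis of symmetry: (1/r)dT/dr + d2T/dr2 -> 2*d2T/dr2
                    d2r = 4.0 * (T[1, j] - T[0, j]) / dr ** 2
                else:
                    d2r = ((T[i + 1, j] - 2 * T[i, j] + T[i - 1, j]) / dr ** 2
                           + (T[i + 1, j] - T[i - 1, j]) / (2 * r[i] * dr))
                Tn[i, j] = T[i, j] + alpha * dt * (d2r + d2z)
        Tn[-1, :] = T_med                  # cylindrical wall at medium temperature
        Tn[:, 0] = T_med                   # bottom end
        Tn[:, -1] = T_med                  # top end
        T = Tn
        history.append((t, T[0, nz // 2])) # geometric centre of the can
    return history

history = simulate_can_centre()
print(f"centre temperature at the end of heating and cooling: {history[-1][1]:.1f} C")
```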
2.2. Data generation

A wide range of processing conditions, product characteristics and packaging sizes was considered, as detailed in Table 1. The defined range of each parameter covered the common range of processing conditions, and the different can sizes were selected from those used in the industry. The optimized finite difference program was applied to generate temperature profiles based on achieving a selected heating lethality, from which the actual process time (while the medium temperature is set to the heating temperature) and the delivered process lethality (the total lethality calculated at the end of cooling) were computed.

Each time-temperature profile was divided into two semi-logarithmic curves, heating and cooling. In order to locate the beginning of the straight line in each curve, an iterative regression technique was applied, and the process parameters fh, jch, fc and jcc were obtained for each set of conditions. In order to predict the determined process time, process lethality and the respective heating and cooling parameters, the g-value and jcc parameters were also computed. Hence, a data set was obtained consisting of the processing conditions (initial temperature, retort temperature), product characteristic (thermal diffusivity), can size (height and diameter), process time, process lethality and the respective heating and cooling parameters. This data set, with 1215 data records, was adopted for development of the ANN models.

Table 1
Range of parameters used in the finite difference program

Retort temperature (°C)   Initial temperature (°C)   Thermal diffusivity x 10^7 (m2/s)   Heating lethality (min)
110                       70                         1.0                                 5
120                       80                         1.5                                 10
130                       90                         2.0                                 15

In addition, 15 commercial can sizes, each specified by an approximate fill weight (kg) and by radius and half-height (cm), were included.
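As a sketch of the parameter-extraction step described above, the code below fits fh and jch to the straight portion of a semi-logarithmic heating curve and accumulates the process lethality by numerical integration of the lethal rate (Tref = 121.1 °C, z = 10 °C). The synthetic heating curve and the fixed cut-off used to locate the straight-line region are illustrative assumptions; the paper uses an iterative regression for that purpose.

```python
import numpy as np

T_RETORT, T_INITIAL = 120.0, 70.0
FH_TRUE, JCH_TRUE = 35.0, 1.8            # values used to synthesize the curve

# Synthetic centre-temperature heating curve: log10(TR - T) decreases linearly
# with slope -1/fh once the initial lag (characterized by jch) is past.
time_min = np.arange(0.0, 90.0, 1.0)
temp = T_RETORT - JCH_TRUE * (T_RETORT - T_INITIAL) * 10.0 ** (-time_min / FH_TRUE)

# Fit the straight portion of the semi-log heating curve. Discarding the first
# 20% of the curve as the lag period is an assumed cut-off; the paper locates
# the straight line with an iterative regression instead.
start = len(time_min) // 5
y = np.log10(T_RETORT - temp[start:])
slope, intercept = np.polyfit(time_min[start:], y, 1)
fh = -1.0 / slope
jch = 10.0 ** intercept / (T_RETORT - T_INITIAL)
g_end = T_RETORT - temp[-1]              # retort minus product temperature at end of heating

# Accumulated lethality F0 (min) by trapezoidal integration of the lethal rate.
T_REF, Z_VALUE = 121.1, 10.0
lethal_rate = 10.0 ** ((temp - T_REF) / Z_VALUE)
f0 = float(np.sum((lethal_rate[1:] + lethal_rate[:-1]) / 2.0 * np.diff(time_min)))

print(f"fh = {fh:.1f} min, jch = {jch:.2f}, g = {g_end:.2f} C, F0 = {f0:.2f} min")
```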

2.3. Development of ANN models

NeuralWorks Professional II/Plus (NeuralWare Inc., Pittsburgh, PA) was employed for the ANN modeling. A standard back-propagation algorithm with a tangent hyperbolic transfer function and the normalized cumulative delta learning rule was applied as the basic preference of the network.

The performance of an ANN model is influenced greatly by the quantity and quality of the data in the training set (Linko & Zhu, 1992). The training data should have sufficient information to describe the relation between the input and output variables, and a proper format of the variables can increase the learning ability of ANNs by helping the network detect that relation (NeuralWare, 1996). As the input and output variables were changing over a wide range, a mathematical transform function was applied for a better presentation of the input and output to the network and a more uniform data set. This stage of ANN model development is thus regarded as data preprocessing, which can be performed in different ways to improve the model efficiency (Lacroix, Salehi, Yang & Wade, 1997).

For predicting the process lethality, fh/U and jcc were the input variables predicting the corresponding g-value; likewise, for predicting the process time, log g and jcc were applied as the two input variables to predict the corresponding fh/U. Accordingly, log g was predicted from log(fh/U) and jcc, and fh/U, in the form of arctangent(log fh/U), was predicted from log g and jcc. For more convenient recall of the models, the prediction of log g will from now on be denoted as Model A and the prediction of fh/U as Model B. The schematic description in Fig. 1 provides a better presentation of the input variable combinations.

Fig. 1. Arrangement and procedure of developing ANN models.
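As a sketch of how the two data sets described above can be assembled, the snippet below applies the stated transforms (log g and log fh/U, with fh/U additionally passed through an arctangent) to a small set of hypothetical heat-penetration records; the numbers are placeholders, not values from the 1215-record data set.

```python
import numpy as np

# Hypothetical (fh/U, jcc, g) records standing in for the finite-difference data set.
records = np.array([
    # fh/U,  jcc,   g (deg C)
    [0.50,  1.20,  0.15],
    [2.00,  1.40,  1.10],
    [8.00,  1.80,  4.50],
])
fh_U, jcc, g = records[:, 0], records[:, 1], records[:, 2]

# Model A: inputs log(fh/U) and jcc  ->  output log g      (process lethality route)
model_a_inputs = np.column_stack([np.log10(fh_U), jcc])
model_a_target = np.log10(g)

# Model B: inputs log g and jcc  ->  output arctan(log(fh/U))   (process time route)
model_b_inputs = np.column_stack([np.log10(g), jcc])
model_b_target = np.arctan(np.log10(fh_U))

print(model_a_inputs, model_a_target, model_b_inputs, model_b_target, sep="\n")
```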
Several other variables also affect ANN model development, and the performance should be optimized with respect to network predictability for the problem at hand. Among these variables, the number of hidden layers, the number of processing elements (PEs) in the hidden layer, the momentum and the epoch size have the most significant effect on network performance (Baughman & Liu, 1995; NeuralWare, 1996). The momentum represents a fractional value of the previous weight change added to the present weight correction; its value can change between 0 and 1. Higher momentum values speed up slow learning and prevent the network from being trapped in a local minimum; large values of momentum, however, cause large error oscillations (Baughman & Liu, 1995). An epoch is the number of training examples presented between weight updates. This method of updating the weights, which increases the convergence speed, is known as cumulative delta rule learning; it requires more calculation, and if the epoch is too large the advantage of using an overall error function can be lost, so this value should also be optimized.

Keeping the above constraints in mind, a learning curve was developed to select the required number of examples in the training set. The original data (1215 records) were sorted from the lowest to the highest value and, to obtain six equal groups of data, every seventh data row was selected and removed from the original set; these formed the first group (Group A) of 202 data points.
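A minimal sketch of this grouping step is given below: the records are sorted on one key and dealt round-robin into six groups of equal size, so that each group spans the whole range of the sorted variable. The single sort key and the hypothetical 1212-record array are assumptions for illustration, and dealing rows round-robin is only one simple way to realize the stratified grouping the paper describes in terms of repeatedly removing every seventh row.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(size=(1212, 4))       # hypothetical records (e.g. fh/U, jcc, g, F0)

# Sort on one column so that each group spans the whole range of that variable.
order = np.argsort(data[:, 0])
sorted_data = data[order]

# Deal the sorted rows round-robin into six groups (A-F) of equal size.
groups = [sorted_data[k::6] for k in range(6)]
print([grp.shape[0] for grp in groups])  # six groups of 202 records each

# Example train/test split in the spirit of Table 2: three groups (606 records)
# for training and the remaining three (606 records) for testing.
train = np.vstack(groups[:3])
test = np.vstack(groups[3:])
print(train.shape, test.shape)
```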

The same procedure was repeated with the remaining data until six homogeneous groups (A-F), each with 202 data points, were obtained from the original data; each of these groups thus covered the entire range of data in the original set. For network optimization, the original data were separated into a training and a testing data set: different combinations of the groups were used to achieve different sizes of training and testing data sets, and the groups that were not used for training were used as testing data during development of the model. These arrangements are shown in Table 2. The testing data were applied to cross-validate the model during the training stage.

Table 2
Combinations of training and testing data for network optimization

Groups in training set   Groups in testing set   Data pairs in training set   Data pairs in testing set
1 of A-F                 remaining 5             202                          1010
2 of A-F                 remaining 4             404                          808
3 of A-F                 remaining 3             606                          606
4 of A-F                 remaining 2             808                          404
5 of A-F                 remaining 1             1010                         202

Various network architectures and network learning parameters (momentum and epoch size) were examined, as summarized in Table 3.

Table 3
Range of network architecture and learning parameters used in the study

Number of PEs (1 hidden layer)   Number of PEs (2 hidden layers)   Momentum(a)   Epoch
2                                2 and 2                           0.2           4
5                                2 and 5                           0.3           8
10                               5 and 5                           0.4           16
15                               10 and 5                          0.6           20
-                                -                                 0.8           32

(a) The ranges include the default values provided in the software.

To avoid over-training of the models, the NeuralWare software provides an option called "save best". The function of this option is to stop the training stage after a certain number of iterations without any improvement in the learning of the network, to test with the testing data and to save the results of the network in a file. Each time a more recent and better network is found, it replaces the last saved network. This procedure is continued until a pre-selected error of the network is reached or after a set number of cycles. Finally, the "save best" network is retained and the model is ready for verification. The developed models were also tested with all the available data from the original data set to evaluate the consistency of performance.

The following criteria were used for evaluating the performance of the ANN models with respect to the different network configurations and parameters:

Root mean square (RMS) of error = sqrt( sum (Y0 - Y)^2 / n )
Mean absolute error (MAE) = mean of |Y0 - Y|
Mean relative error (MRE, %) = mean of ( |Y0 - Y| / Y0 x 100 )

where Y0 is the desired value, Y is the ANN predicted value and n is the number of records.

2.4. Comparison of ANN models

The predicted ANN values of process time and process lethality were compared with the respective values from the selected formula methods. Performance of the ANN models was demonstrated with respect to the directly predicted g-values or fh/U, as well as the calculated process time and process lethality. All the predicted values, from either the ANN models or the formula methods, were compared against the reference model, which was the data from the finite difference model. The relative error on lethality was computed as follows:

Error = (F0(ref) - F0(methods)) / F0(ref) x 100    (1)

This error parameter was calculated based on lethality; therefore, a positive error of the lethality calculation demonstrates an underestimation, while a negative sign shows an overestimation. While predicting the process time, the error signs for under- and overestimation are reversed.
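These evaluation criteria translate directly into code. The sketch below computes RMS, MAE and MRE for a pair of hypothetical desired/predicted vectors, together with the signed lethality error of Eq. (1); the example numbers are placeholders, not results from the study.

```python
import numpy as np

def rms(y0, y):
    return float(np.sqrt(np.mean((y0 - y) ** 2)))

def mae(y0, y):
    return float(np.mean(np.abs(y0 - y)))

def mre(y0, y):
    return float(np.mean(np.abs(y0 - y) / y0) * 100.0)   # per cent

def lethality_error(f0_ref, f0_method):
    """Eq. (1): positive values mean the method underestimates the lethality."""
    return (f0_ref - f0_method) / f0_ref * 100.0

# Placeholder desired (finite difference) and predicted (ANN or formula) values.
y_ref = np.array([6.2, 10.1, 14.8])
y_hat = np.array([6.0, 10.4, 14.5])
print(rms(y_ref, y_hat), mae(y_ref, y_hat), mre(y_ref, y_hat), lethality_error(10.0, 9.7))
```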

3. Results and discussion

3.1. Optimization of finite difference model

The optimization results of the finite difference model are shown in Fig. 2, based on the number of nodes in the vertical and horizontal axes as well as the time step size. Increasing the number of nodes in each axis increased the computational time, due to the additional calculations to be performed for the centre temperature prediction. Smaller increments in the axes increase the accuracy of the model in temperature prediction; however, after an initial sharp drop from 5 to 10 nodes, increasing the number of nodes further had less effect, with no appreciable reduction in the associated RMS error. Therefore, a 10 by 10 grid was selected as the optimum number of nodes in both directions. The same trend was applied to select the optimum time step used in the computations: since the computation time increases steeply with time step sizes smaller than 5 s, a 5 s time step was used as the optimum value for this parameter.

Fig. 2. Finite difference model optimization: (a) number of nodes in each axis and (b) time step.

3.2. Model development and optimization

For developing the ANN models, the first objective was to choose the correct size of the training set. Figs. 3 and 4 illustrate the effect of the number of examples in the training data set for Models A and B, respectively, as a function of training data set size. The errors were calculated based on recalling the training data set (as a measure of the network's ability to learn the data) as well as on testing the network with testing data not used in training (as a measure of the generalization ability of the network); this ensures the learning and generalization ability of the network simultaneously. In addition, each network model was tested with the entire original data set and the combined results are shown in these graphs.

Fig. 3. Learning curve for ANN model A.
Fig. 4. Learning curve for ANN model B.

In Model A, the errors associated with the training and testing data sets were very close for each training set size, demonstrating the potential ability of the network for generalization. Model B was more dependent on the training set size. Although the source of data for both models was the same, the nature of the variables affected the performance of the network, and the question of how well the network can learn depended on the input variables. For both models, a training set with 606 data (50%) was selected.

Fig. 5 depicts the effect of PEs and the number of hidden layers for Models A and B. In Model A, networks with two hidden layers had a significantly higher performance, while increasing the PEs in a single hidden layer was not significant compared to networks with two hidden layers. A network with 2 and 5 PEs in each hidden layer had the highest performance for Model A and was selected as the optimum configuration.

Fig. 5. Effect of PEs and hidden layers on the performance of models A and B.

It has been generally recognized that assigning the appropriate number of PEs and hidden layer(s) is mostly a trial and error method; however, two hidden layers are sufficient for most problems (Baughman & Liu, 1995; NeuralWare, 1996; Swingler, 1996). Model B had a better prediction ability with two hidden layers, and a network with 2 PEs in each hidden layer was selected as the optimum topology for Model B.

Increasing the momentum had a reverse effect on the prediction ability of the network for Model A; therefore, the minimum value of this variable was selected for Model A. For Model B, however, this parameter did not affect the performance of the network, and consequently the default value was selected. The epoch size was systematically varied between the limits and its effect on the RMS error was evaluated (Table 4). Epoch size did not significantly affect the networks' predicting ability; however, decreasing this parameter slightly decreased the MRE. Therefore, the minimum value of the epoch was selected for both models.

Table 4
Effect of momentum and epoch size on the performance of Models A and B (MRE of the predicted output). Momentum was varied from 0.2 to 0.8 and epoch size from 4 to 32; the MRE of Model B remained essentially constant (about 0.029) over both ranges, while the MRE of Model A increased with momentum and decreased slightly at smaller epoch sizes (roughly 0.03-0.05).
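The original models were built in NeuralWorks Professional II/Plus, which is not publicly scriptable here. As a rough open-source analogue, the sketch below trains a small multilayer perceptron with the topology reported for Model A (two hidden layers of 2 and 5 processing elements, hyperbolic tangent activation) on synthetic data generated from an arbitrary smooth mapping. scikit-learn's stochastic gradient descent with momentum and a small batch size only approximates the normalized cumulative delta rule and epoch size used in the paper, and the synthetic target function is not the Stumbo-table relationship.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for the Model A mapping: inputs log(fh/U) and jcc, output log g.
# The functional form below is arbitrary and NOT the Stumbo-table relationship.
log_fh_U = rng.uniform(-1.0, 1.5, 600)
jcc = rng.uniform(0.4, 2.0, 600)
log_g = 0.8 * log_fh_U + 0.1 * (jcc - 1.0) - 0.2   # assumed smooth relation

X = np.column_stack([log_fh_U, jcc])
y = log_g
X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

# Model A topology from the paper: two hidden layers with 2 and 5 PEs, tanh transfer.
net = MLPRegressor(hidden_layer_sizes=(2, 5), activation="tanh", solver="sgd",
                   momentum=0.2, batch_size=4, learning_rate_init=0.01,
                   max_iter=3000, random_state=0)
net.fit(X_train, y_train)
print("test R2:", r2_score(y_test, net.predict(X_test)))
```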

3.3. Performance of the ANN models

The performance of ANN models A and B is shown in Fig. 6 as plots of the ANN predicted values vs. the reference values. The predicted values for both models were very close to the desired values and were evenly distributed throughout the entire range. The associated error parameters are summarized in Table 5; the R2 values close to unity show the excellent performance of both ANN models in predicting the output variables and the excellent correlation between the ANN predicted and reference values.

Table 5
Error parameters (R2, RMS, MAE and MRE) for the performance and verification of ANN models A and B, given both for the directly predicted outputs (log g for Model A and arctan log fh/U for Model B) and for the derived process time and process lethality. R2 values for both models were between 0.997 and 0.999.

The performance of the ANN models is also shown in Fig. 7, in the form of the process time and process lethality calculated from the ANN-predicted g-value or fh/U vs. the process time or process lethality obtained from the finite difference model. The RMS of errors for process time prediction was higher than for process lethality, since the process time values are larger than the process lethality values; the close mean relative errors of both models, however, show their close performance. Overall, the ANN models showed a good performance for predicting both process time and process lethality.

Fig. 6. Performance of ANN models A and B with respect to predicted output.
Fig. 7. Performance of ANN models A and B in process time and process lethality prediction.

3.4. Comparison of ANN model with existing models

The performance of the ANN models was compared with that of three selected formula methods (Ball, Stumbo and Pham) over the entire range of processing conditions, thermal diffusivity of product, can size and processing time. The general performance of the selected formula methods shows that they were very similar with respect to both process time and process lethality calculations; the Ball method, considering its restrictive assumptions, had the most deviation, while the other methods had a close performance (R2 > 0.99). It should be mentioned that the Stumbo and Pham methods have the highest accuracy of calculations compared to other formula methods (Smith & Tung, 1982; Pham, 1990; Stoforos, Noronha, Hendrickx & Tobback, 1997). The range of errors, as well as the mean and S.D. of the errors, for the formula methods and the ANN models are compared in Figs. 8 and 9 and in Table 6.

In process time calculation, the mean average error of the ANN model was between those of the Stumbo and Pham methods, and the trend observed for the ANN model was closer to the Pham method. A mean average error of 2.7 min indicated the high accuracy of the ANN model in process time prediction over the applied range of this parameter (between 40 and 450 min). In the more common range of process times employed in the industry (up to 120 min), the model performance was even better, with a mean average error of 1.98 min.

With respect to the process lethality prediction, the mean relative error is a more important error parameter and provides a better evaluation. Since the finite difference program was run based on a pre-assigned heating lethality, the reference lethality values were clustered around three fixed values, which is one of the possible reasons for the higher errors associated with the lethality predictions.

Table 6
Range, mean and standard deviation of errors for all the combinations of parameters: minimum, maximum, mean and S.D. of the errors in process time and in process lethality calculations for the Ball, Stumbo, Pham and ANN methods.

Fig. 8. Performance of ANN model in process time calculation in comparison with the other selected formula methods.

Thus, it can be seen that the ANN models are as good as, or even better than, the currently used Stumbo and Pham models, which are considered to be the most accurate. It is true that, with today's computer technology, one could simulate the process conditions using an appropriate computer model and use it rather than trying to use more empirical and simulated models. Such concepts probably would make no significant difference in academic research, where such programs are easily available and people with programming experience can write, develop and execute them with ease and efficiency. However, such facilities are not easily available in the industrial sectors that actually use the results. Formula methods like Ball, Stumbo and Pham, or the one that we are suggesting in this paper, the ANN model based on data from the finite difference computer simulation, can easily be adapted to these situations and can be executed with minimal training.

Fig. 9. Performance of ANN model in process lethality calculation in comparison with the other selected formula methods.

Although the ANN model is developed from data derived from numerical simulations with exact solutions, using the trained model one does not need such data; the general method approach, by contrast, is good only when exact time-temperature data for the given situation are known. The output from a trained model can be downloaded to a basic program and run on a set of data like any other commercial software available for process calculations; hence, it is not necessary for the user to have the ANN program or the finite difference programs, and the trained models can be used automatically to upgrade the training of the model. Although it is useful and can easily come to the rescue for routine calculation of differences in achieved lethality or adjustments in process time, the exploration of ANN in the field of food science and processing is still in its infancy, and its use has so far been only to tackle the problems on the surface. ANNs are, however, slowly but steadily being explored for more and more complex problems. For anyone to explore the greater depth, one has to get to know its benefits; if one were to attempt only the more complex problems without demonstrating the potential on the simpler tasks first, the ANN model would remain more abstract and would become less efficient (and seem more complex) to use. It is considered that the development of ANN-based process calculation programs would be a step forward for achieving progress in the application of process calculation procedures. We have started to explore the use of ANN for variable retort temperature (VRT) processes and to link it with genetic algorithms for process optimization and process control.

4. Conclusions

The possibility of ANN model development as an alternative to existing methods of thermal process calculations was studied.

ANN models were developed for process calculations based on input from a finite difference model, under a wide range of data covering the applicable range of processing conditions and can sizes. The optimized model had two hidden layers, with 2 and 5 PEs in each hidden layer. Model A, with a 2.70 min mean absolute error over the entire range (reduced to 1.98 min up to a process time of 120 min), was comparable with the Pham method in process time prediction, which was very close to Stumbo's method. Also, the ANN Model B predicted the process lethality with 2.74% relative error. The Stumbo and Pham methods are considered the most accurate methods of process calculations, and hence the ANN models demonstrate a good potential for application in thermal process calculations, with advantages of accuracy, simplicity and on-line compatibility.

References

Baughman, D. R., & Liu, Y. A. (1995). Neural networks in bioprocessing and chemical engineering. San Diego, CA: Academic Press.
Bigelow, W. D., Bohart, G. S., Richardson, A. C., & Ball, C. O. (1920). Heat penetration in processing canned foods (Bull. 16-L). Washington, DC: National Canners Association.
Bochereau, L., Bourgine, P., & Palagos, B. (1992). A method for prediction by combining data analysis and neural networks: application to prediction of apple quality using near infrared spectra. Journal of Agricultural Engineering Research, 51, 207-216.
Datta, A. K., Teixeira, A. A., & Manson, J. E. (1986). Computer-based retort control logic for on-line correction of process deviations. Journal of Food Science, 51(2), 480-484.
Kim, S., & Cho, S. I. (1997). Neural network modeling and fuzzy control simulation for bread-baking process. Transactions of the ASAE, 40(3), 671-676.
Lacroix, R., Salehi, F., Yang, X. Z., & Wade, K. M. (1997). Effects of data preprocessing on the performance of artificial neural networks for dairy yield prediction and cow culling classification. Transactions of the ASAE, 40(3), 839-846.
Linko, P., & Zhu, Y. (1992). Neural network modeling for real-time variable estimation and prediction in the control of glucoamylase fermentation. Process Biochemistry, 27, 275-283.
NeuralWare (1996). Neural computing: a technology handbook for Professional II/Plus and NeuralWorks Explorer. Pittsburgh, PA: NeuralWare Inc.
Ni, H., & Gunasekaran, S. (1998). Food quality prediction with neural networks. Food Technology, 52(10), 60-65.
Parmer, R. S., McClendon, R. W., Hoogenboom, G., Blankenship, P. D., Cole, R. J., & Doner, J. W. (1997). Estimation of aflatoxin contamination in preharvest peanuts using neural networks. Transactions of the ASAE, 40(3), 809-813.
Peters, G., Morrissey, M. T., Sylvia, G., & Bolte, J. (1996). Linear regression, neural network and induction analysis to determine harvesting and processing effects on surimi quality. Journal of Food Science, 61(5), 876-880.
Pham, Q. T. (1987). Calculation of thermal process lethality for conduction-heated canned foods. Journal of Food Science, 52(4), 967-974.
Pham, Q. T. (1990). Lethality calculation for thermal processes with different heating and cooling rates. International Journal of Food Science and Technology, 25, 148-156.
Purohit, K. S., & Stumbo, C. R. (1973). Refinement and extension of fh/U:g parameters for process calculation. Journal of Food Science, 38, 726-728.
Sablani, S. S. (1996). Heat transfer studies of liquid/particle mixtures in cans subjected to end-over-end processing. PhD thesis, Department of Food Science and Agricultural Chemistry, Macdonald Campus of McGill University, Canada.
Sablani, S. S., Ramaswamy, H. S., & Prasher, S. O. (1995). A neural network approach for thermal processing applications. Journal of Food Processing and Preservation, 19, 283-301.
Sablani, S. S., Ramaswamy, H. S., & Prasher, S. O. (1997). Neural network modeling of heat transfer to liquid particle mixtures in cans subjected to end-over-end processing. Food Research International, 30(2), 105-116.
Smith, T., & Tung, M. A. (1982). Comparison of formula methods for calculating thermal process lethality. Journal of Food Science, 47, 626-630.
Sreekanth, S., Ramaswamy, H. S., & Sablani, S. S. (1998). Prediction of psychrometric parameters using neural networks. Drying Technology, 16(3-5), 825-837.
Stoforos, N. G., Noronha, J., Hendrickx, M., & Tobback, P. (1997). A critical analysis of mathematical procedures for the evaluation and design of in-container thermal processes for foods. Critical Reviews in Food Science and Nutrition, 37(5), 411-441.
Stumbo, C. R., & Longley, R. E. (1966). New parameters for process calculation. Food Technology, 20(3), 341-345.
Swingler, K. (1996). Applying neural networks: a practical guide. London: Academic Press.
Teixeira, A. A., Dixon, J. R., Zahradnik, J. W., & Zinsmeister, G. E. (1969a). Computer optimization of nutrient retention in the thermal processing of conduction-heated foods. Food Technology, 23, 845-850.
Teixeira, A. A., Dixon, J. R., Zahradnik, J. W., & Zinsmeister, G. E. (1969b). Computer determination of spore survival distributions in thermally processed conduction-heated foods. Food Technology, 23, 352-354.
Teixeira, A. A., & Manson, J. E. (1982). Computer control of batch retort operations with on-line correction of process deviations. Food Technology, 36(4), 85-90.
Teixeira, A. A., & Tucker, G. S. (1997). On-line retort control in thermal sterilization of canned foods. Food Control, 8(1), 13-20.
Tucker, G. S. (1991). Development and use of numerical techniques for improved thermal process calculations and control. Food Control, 2(1), 15-19.