This action might not be possible to undo. Are you sure you want to continue?

BooksAudiobooksComicsSheet Music### Categories

### Categories

### Categories

Editors' Picks Books

Hand-picked favorites from

our editors

our editors

Editors' Picks Audiobooks

Hand-picked favorites from

our editors

our editors

Editors' Picks Comics

Hand-picked favorites from

our editors

our editors

Editors' Picks Sheet Music

Hand-picked favorites from

our editors

our editors

Top Books

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Audiobooks

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Comics

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Sheet Music

What's trending, bestsellers,

award-winners & more

award-winners & more

Welcome to Scribd! Start your free trial and access books, documents and more.Find out more

The literature abounds with the application of optimization methods for estimating model parameters in equation systems. The utility of these methods is frequently demonstrated on pathological examples using simulated data generated from a known model with a random error component and a known statistical distribution. Unfortunately, parameter estimation problems encountered in practice do not have ti advantage. The true model is frequently not known. In fact, one is faced with hs choosing among various candidate models, all of which may be wrong. Moreover, the error structure is generally unknown and must be estimated from the data. Finally, a great deal of mathematical expertise is required to transform the model and select meaningful starting guesses before parameter estimation can be successful. In order to demonstrate the difficulties of parameter estimation in the industrial environment and the limitations of existing methods, a parameter estimation problem formulated by the Dow Chemical Company is presented and solved. This test problem consists of a stiff differentiallalgebraic (DAE) model that describes complex kineticsand requires the estimation of nine parameters from batch reactor data. Here the model was inadequate to describe the data, the error structure was not specified and the starting guesses led to a nontrivial optimization problem. The Dow parameter estimation problem was distributed in 1981to 165 researchers as a followup to the 1980IWCAPD conference. Of those researchers, eleven agreed to apply their methodologies and expertise to this problem. However, only five acceptable solutions were finally submitted. Here we present and compare these results. Each solution was obtained using different strategies. In most cases the form of the model was also changed to accommodate the algorithms used and to ease the solution procedure. Therefore, while this case study does not present a direct numerical comparison of algorithms, it does offer guidelines and insight towards the solution of difficult parameter estimation problems.

**L. T. BIEGLER and J. J. DAMIANO
**

Chemical Engineering Department Carnegie-Mellon University Pittsburgh, PA and

G. E. BLAU

Dow Chemical Company Midland, MI

SCOPE

Parameter estimation arises in fitting models containing several unknown parameters to experimental data through adjustment of these parameters. Model formulation is not a unique process; many different formulations may be used to fit the data and optimize model parameters. Of particular concern are formulations that are sufficiently accurate to represent physical or chemical phenomena and also are amenable to reasonably efficient computer solutions. Here we consider a parameter-estimation problem formulated by the Dow Chemical Company in order that current parameter estimation methods can be compared on an industrial kinetic parameter estimation problem with real data. The model consists of nonlinear differential and algebraic equations and is stiff over a wide range of parameter values. Thus one must choose a reliable model solver before parameter estimation can begin. Several methods for solving stiff ODE and DAE systems are available (see, e.g., Carnahan and Wakes, 1981). Most of those used in this case study are based on Gear’s method. After a model is proposed and solution techniques are chosen, an objective function that determines the goodness of fit must be selected. From maximum likelihood analysis, several alternative

J J Damfano

IF

currently with Standard 011of Ohlo, Cleveland, Ohlo

objective function forms can be chosen, depending on our assumptions of the error structure of the data. These can range from simple least-squares functions to fairly complex nonlinear functions that incorporate general unknown covariance of the measurements and heteroscedastic ermrs (see e.g., Bard, 1974). The final step in parameter estimation is to optimize the model parameters and any unknown statistical parameters in the objective function. The optimization problem formed by the model and the objective function generally requires repeated and expensive solution of the ODE or DAE system. An efficient algorithm should minimize the number of model and function evaluations and still converge easily to the solution. Generally, these algorithms require gradient information and some,approximation to the second derivative matrix. For simple and weighted leastsquares objective functions, the Gauss-Newton and LevenbergMarquardt algorithms provide the second derivativeinformation quite easily. Otherwise, quasi-Newton updating algorithms such as BFGS (Gill et al., 1981)can also be applied. Five researchers have submitted solutions to the parameterestimation problem described below. After the strategies that were used are outlined, the solutions are compared in terms of accuracy and efficiency.

AlChE Journal (Vol. 32, No. 1)

January, 1986

Page 29

the solutions presented here do not offer a direct a m parison among model solving and optimization algorithms. behs cause they relied on different problem formulations. After pmenting the differentiallalgebraic equation (DAE)model and a suggested form for the objective function. Based on the results. six solution strategies are described and presented. homogeneous. regardless of the sophistication of the software.+ Hi In order to devise a model to account for these reactions. Overall concentrations are defined for three components based on neutral and ionic species. PROBLEM DESCRIPTION The parameter estimation problem is based on a kinetic model of an isothermal batch reactor system. it is first necessary to distinguishbetween the overall concentration of a species and the concentration of its neutral form..] + [MBM-I [HA1 = [(HA). 1986 AlChE Journal (Vol. T i leads to a model that is more efficiently handled by the ODE hs solver. Al of these solutions used implicit ODE or DAE solvers. The reacting species have been disguised for proprietary reasons.k. This leads to a smaller and better behaved optimization problem. it is clear that the most important consideration for parameter estimation is formulation of a simple. Except for an unsuccessful solution presented for illustration.] [ B M ] . ons All of the solutions required careful attention by the researchers to several computational difficulties.] + [ABM-I where [ ] denote concentration of the species in gmol/kg. The catalyst QM is initially assumed to be 100% dissociated to Q+ and M . No.3)3=ABM- Rapid Acid-Base Reactions MBMHeKi*MBM- + H' H A ~ K ~ = A + H' HABMeKseABM.[M-J[AB] + k-JABM-] [MBMH] = [(MBMH). four inl vestigators used variationsof Gear's method. The reactions occur in an anhydrous.][If'] [(HABM)NI The anionic species may then be represented by: K.[MBM-l . 2. to study a batch reactor system was solved by five independent mearchers. 1) . The reaction is initiated by adding the catalyst QM to a batch reactor containing the two miscible reactants with reactant BM in excess.k-. Judging from the very wide exposure the problem received. This not only leads to a closer starting point but also avoids any u n n v stiffness problems due to starting p i t far from the optimum.[M-][BM] + kkJABM-I (d) + k-.K3can be defined a follows: s From stoichiometry. = [MBM-] 5 (KI KJMBMH] + [H'I) M- + A- BM-k_Ikl-7MBM- + BM ks=%4BMMaterial balance equations for the three reactants in the slow kinetic reactions yield: M- + A B C k . None of the investigators was able to solve the problem automatically in one go. Eliminate as many unn-ry model parameters as p&ble. 32. 3. yet realistic process model. Finally. If the error structure is not known.[A-][BM] (g) 'IHA1 dt Page 30 January. some physical insight into the model parameters.ions. -drBM1 dt -kJM-][AB] -k.CONCLUSIONS AND SIGNIFICANCE A parameter-estimation problem formulated by the Dow Chemical Co. Instead. The following mechanism is proposed to describe the reaction: Slow Kinetic Reactions - + + [ABM.] + [A-I [HABMI = [(HABM). liquid phase catalyzedby a completely dissociated species. it appears that discovery and improvement of parameter-estimation algorithms remains a very necessary and fruitfularea for research and development. 4 Determine an initial set of parameter estimates through .[A-l[BMl (e) d[ABl = at -k. Eliminate all dependent equations in the proposed model.. the equilibrium constants K. rate expressions can also be written for the total species: d[MBMH1 dt - k J M . ti study illustratesthe importance of model formulation and insight in tackling difficult parameter-estimation problems as well as the limitations of current methods. use an objective function based on maximum likelihood that allows direct application of Gauss-Newton-type methods.[MBM-] = k. Guidelines for this should include the following points for more efficient solution: 1. The desired reaction is given by: HA 2BM AB MBMH where AB is the desired product. By assuming the rapid acid-base reactions are at equilibrium.K2. only Gauss-Newtontype methods and successive quadratic minimization were used for optimization.

/RT) 1) Explicit Runge-Kutta and linear multistep methods for nonstiff systems. a. this relation holds at long times. The data p m n t e d for parameter estimation represent a pretreatment of the actual data. The data sets are given in Appendix A. 1) January.67./RT) exp( . E l . choosing an appropriate objectivefunction from maximum likelihood. The letter labels for the following equations refer to the corresponding kinetic and equilibrium expressions derived above. Each of these can be fitted with adjustable model parameters.l . From the model. a .ci determined by matching the above series with a Taylor series expansion of desired order.~~] the vector.ABM-. The model initial conditions and the initial parameter estimates are given in Appendix A.[M-][AB] - k-JABM-1 (i) An electroneutrality constraint gives the hydrogen ion concentration [ H + ] as: [H'] + [Q']= [ M . ( w 2 . SOLVING THE DAE MODEL [BW = initial [HABMJ . H + . The temperatures were set at 40. and optimizing for the parameters. When ail = 0 for j . assuming an Arrhenius temperature dependence.E . they are insufficient to estimate all of the parameters present in the proposed model and a few assumptionswere made. but only three species HA.1 .. an unmeasured species.bi.E. These three measurements were normalized so that: [HA] + [HABM] + [AB] = initial [ H A ] Normalization adjustments resulted in concentration changes of less than ten percent. counter h = stepsize T order of Runge-Kutta method - are The coefficientsaii. we briefly summarize the algorithms that were used. M B M H . [BM]. kl.MBM-1. Runge-Kutta Methods . the above problem was distributed to its 165 participants in 1981. and 100O C .E J R T ) a2exp( . We classify these in terms of solving the DAE model. First. When ai.A-. time profiles are presented. this relationship is only true if [AB] [MBMH]. and AB were actually measured. and the parameters. k . 0. given by: 0 = [al. B M . Also. thorough analyses of these methods have led to a fairly complete classification according to stability. given by: y = [ H A . represent the preexponential factor and activation energy. 32. In all of the runs. = = k.I must be estimated. y . That is: Numerous methods have been developed for the solution of initial value ordinary differentialequations.was derived from: The nine parameters form the vector.1 with wheren = step. the initial catalyst concentration was 0. Although many data are present.= 0 for j > i. HABM. for the appropriate rate constant. three rate constants.] + [A-] + [ A B M . H A B M . and aij # 0 AlChE Journal (Vol. we assume: - k.+ k. accuracy and performance. As a follow-up to the 1980 FOCAPD conference. Given a differential model of the form: the Runge-Kutta (R-K) techniques of integration use formulas defined by: i. Also. T is the reaction temperature in Kelvins. M . The model can therefore be expressed mathematically as six differentialequations and four algebraic equations.] + [ M B M . = h where R is the gas constant.2[AB] However. k-. K ~ . A B . based on the similarities of the reacting species. As a result of these comparisons. respectively. it was determined from other data that the equilibrium constants did not vary with temperature over the interval 40-100°C.0131 gmollkg. 1 a the qi can be calculated in order and the R-K method is called explicit. 1986 Page 31 . Before describing these solutions and the methodologies behind them. No. Four component concentration vs. Based on these last two assumptions.] (j) The measured values came from actual kinetic data obtained from three isothermal runs in a batch reactor. Eleven of these researchers agreed to tackle this problem and five groups submitted acceptable solutions. 112 k-. the most commonly used ODE IV methods are: kl = kL= aIexp(-E. A fourth species. and 2) Semi-implicit Runge-Kutta and the Gear methods for stiff problems. E . E 2 . ~ 2and the predicted concentrationsform . kz.

method is called semi-implicit. Systems of DifferentiallAlgebraic Equations (DAE) The choice of the objective function used in parameter estimation should incorporate the specific error structure of the experimental data if the best fit of the data is to be achieved.hbnoJ(yn). 0 = vector of adjustable parameters often occur in chemically reacting systems as well as in other fields such as circuit analysis or transient flowsheet simulation. the diagonal elements of the matrix represent the independent variances of the measured variables and off-diagonal terms represent estimates of correlated or dependent error relationships. The experimental error vector. N ( 0 .Chan et al. Petzold advises extreme caution in solving nonlinear DAE systems. the normal distribution. formulas of order Y are obtained with 11 = r and l2 = 0. . then: (16) where F = combined f and g equations y' = derivatives of y Page 32 January. 1986 AlChE Journal (Vol. using linear. From a practical standpoint. Carnahan and Wilkes. defined as the difference. and covariance V ." / ~ ( V det-li2(V) exp[-l/2(z ) . Note that the system is coupled. is given by: I .. exp d asN(v. between the variables in experiment u . a backward difference formula should be used to discretize the differential equations. 1978). termination criteria. Finally. enforcing convergence by reducing the step size leads to an ill-conditionedJacobian because E is singular for DAE systems. and Michelsen (1976) give accounts of semi-implicit Runge-Kutta schemes. z. have experimental errors associated with their values. the measured variables. Hence. are implemented in the popular Gear.Yn-i . and bni. 32.t) for a single experiment in which m variables are measured with a covariance V between these measured variables.solved iteratively. If n experiments are carried out. called backward difference formulas. Numerous methods exploit this strategy (see. e. The discretized ODEShave the form of Eq.Prokopakis and Seider (1981). The combined system of algebraic and differential equations can be written in the general form: EY' = F(Y. While implicit R-K methods have a drawback in that the qi must be. No. eu. the method is termed implicit.. but with no correlation between measured variables in the different experiments. allows the numerical algorithm to track the model components accurately with larger step sizes after the transient or stiff region has passed. To apply the likelihood function assume a relationship of the form: Z". 15has the Jacobian: E - aF aY Linear multistep methods for numerical integration of differential equations have the following form: j-1 j-0 with coefficients.g.V). with variables from the algebraic equations needed to compute the state variables yoand vice versa. which is described later. = Models composed of nonlinear algebraic and stiff ordinary differential equations of the form: + €"I d Y 4 =0 where y = vector of state variables yn = subvector of state variables to be solved from differential equations. i. Gear (1971) stated that for stiff equations. z .. The desirable stability property they have. with mean q . assuming the model is correct Here. in each of which m variables are measured with covariance V. needed in the Newton-Raphson where J = afn/aYn. when aij# 0 the for j > i. and convergence failures when discontinuities in the forcing function are present.. The algebraic equations could then be combined with the discretized equations and the resulting algebraic system could be solved by Newton's method. While the example problem described in this paper does not fall into this problem class. 1981) software packages. semi-implicit methods can be linearized and sohed with only one iteration per step.B. which has been termed stiffly stable. have a Gaussian probability distribution. The form of these experimental errors is given by a covariance matrix.. Gear (1971)proposea a family of formulas that are "nearly" Astable. 1981).~(B) = computed value of comoonent j from model in uth experiment 0 = vector of adjustable model parameters Euj = residual error.t ~ )thus has . Many of her suggestionsfor dealing with the above problems have been incorporated into her code. and LSODE (Byrne. Moreover. A discussion of backward difference formulas can be found in Carnahan and Wilkes (1981). If it is assumed that the measured variables.: yn = Canj~n--r j= 1 + hbnofn (13) Petzold (1982) has shown that solving DAE systems may lead to far more difficulties than solvinga standard ODE model. Linear Multistep Methods e = vector of adjustable model parameters E = matrix partitioned as [I/OIT so that Eqs. a.I (15) The Jacobian of this equation.q)] z (17) y. DASSI. MAXIMUM LIKELIHOOD METHOD These formulas.hb*.e. 1) . 14 and 16 are equivalent For DAE systems the analog of Eq. E ( z . determined by postulating the solution as an interpolating polynomial.for j = i. The likelihood function provides a general formulation for the oojective function by means of which many typesof error relationships can be represented. Petzold describesa number of problems with error estimates. 13: 4 where z. in certain cases. iteration...Can. She points out that DAE systems can have many similaritiesto stiff systems and.d"= 0 j. p ( z ) . Vu). have errors that do not vanish as the step size goes to zero. then: p ( z ) = ( ~ T ) . Episode. nonhomogeneous systems as an example.t~)~V-'(. = measured value for component 1 Y.

No. from Eq. and in other instances they can be removed by simple transformation at the optimum. This is not true in the example considered here and different approaches for missing entries were used in the solutions described below. i. relative error constant throughout experiment u=l where E. the model prediction.. 66). the residual vector. 0 (Note that by max- imizing log L the residuals should be smaller than the actual errors. if we assume that each measured value has Gaussian errors with the same known variance. 1974. t. then all yj's are set to zero and Eq. This leads to the likelihood function: log L = -nm/2[log(27r) + 1 1 n m u=lj-l n / \ The maximum likelihood method seeksthose values of the adjustable model parameters 6 for which the probability of obtaining ' values is maximized. At this maximum. some bias is introduced. iia Finally. It is assumed the measured variables are independent so that the covariancematrix is diagonal It is assumed the error is heteroscedastic and the diagonal ele ment of V. Gauss-Newton and Marquardt methods take advantage of the structure of several functions derived from maximum likelihood. OPTIMIZATION ALGORITHMS FOR PARAMETER ESTIMATION u-lj-1 where n = total number of measurements for a component m = number of measured components yt. If we assume that the errors in each experiment are Gaussian and distributed with the same diagonal covariance matrix but are not heteroscedastic.If one assumes that the proposed model is correct and that the parameters e are not far from the optimal parameters. The important stepsof these algorithms are the calculation of a search direction and determination of the step length to take along this direction. For example. -= A number of gradient-based algorithms have been developed which exploit least-squares structure and significantly reduce the computation necessary to find the optimal parameters. SubstitutingEq. 21 yields: or AlChE Journal (Vol. p. since the parameters 0 were chosen to make the residuals as small as possible. is a power transformation depending on the magnitude of the expected value of zuj. To solve our parameter estimationproblem. the likelihood function is to be maximized for the 0.. o. Analyses of this problem can also be found in Bard (1974) and Stewart and Sorensen (1981). 0. corresponding to component j . 1986 Page 33 . are normally distributed with zero mean and a covariance matrix. and component 1 1s substituted for the expected value.for time t. In particular. The iteration step for the optimization variables. Observation errors at different time points t. Because of this. For some functions the unknown statistical parameters are not present. are uncorrelated The errors at t. 22 using the optimal parameters. Assuming a general unknown covariance matrix leads to an expression smlr to Eq. In this particular problem. 22 into Eq. = yi = yi = variance coefficient for component j 0. yu. Here.and y parameters. 23 simply becomes: log L(0) = mn/2[log(nl27r) -1 1 m n a-1 u-1 Thus. 32.e. 1) January.: where u . This objective function was suggested by Reilly et al.. = residual vector for uth experiment Substitutingthis diagonal covariancematrix into the likelihood function yields: It should be noted that the above derivations apply only to experimental runs with no missing data. then the likelihood function reduces to a simple residual sum of squares which can be minimized to fit the observed data. V. 18as an adequate representation of the measurement errors.yuj. can be substituted into Eq.j = model prediction which is a functin of 0 Note that the knowledge of the error structure of the data is instrumental in the derivation of the final model and objective function equations.' U=l where cuj is the model error. 24 for parameters 8 and estimate v. several participants modified this objective function even further. uui.. absolute error constant throughout experiment 2. (1977).one must maximize Eq.In this case study. can be expressed as: where 6 H - = aiog L Using this relation results in solving explicitly for Wi: g = One can approximate Hessian information by exploiting the form of the objective function. zUj. 24 (Bard. the following error structure for the measured values was assumed in the Dow problem statement. the simple least-squares function is given by: step length Hessian matrix of the objective function or its approximation gradient of objective function u=l yUI.

These methods assume nothing about the structure of the objective function. as the magnitude of the gradient approaches zero at the solution of an unconstrained problem. Equations 25 are called sensitivity equations and the values %/a% are called sensitivity coefficients. do not appear explicitly in the least-squaresor likelihood function. and using the chain rule we have: The choice of p is crucial in order to insure a descent direction. Truncation error is due to the neglected higher-order terms in the Taylor series expansion and is proportional to the perturbation size for the forward difference formula. The computation error is due to errors in computing the function values used in the gradient approximation. Therefore. Other methods for handling nondescent directions after spectral decomposition are given in Bard (1974). forward differences should not be used if the change in perturbed function values is small for a given step size. These methods were only tried in the attempted solution and yielded unfavorable results. 8. Finally. does not have full rank. several quasi-Newton methods are availablefor unconstrained minimization (see Gill et al. yai. the calculation of the gradients of the objective function with respect to the model parameters cannot be calculated analytically if the model predictions. with the quantities aFylaB and d F / h determined by simple differentiation. and perturbation size squared for the central difference formula. leads to the Gauss-Newton metbod: Z J ~ J. In either case. but instead approximate second derivatives from past changes to the gradient vector. vector of model predictions. No. are the model predictions used in the objective function. Two alternative approaches are available to calculate the gradients. Computing the Gradient -&. they may become unstable if the problem is ill-conditioned.. search direction.where yu = = z.. = A. 1) . problems can occur when solving the least squares problem if the matrix. a switch to central differences may be required. Here the H matrix undergoes a spectral decomposition to: Interchanging the order of differentiation yields: H = RAR' where A is a diagonal matrix.RTs = Rrg . We differentiate both sides of Eq. The first alternative is to use either forward or central difference formulas. the choice of the finite-difference perturbation size is important. All of the solu- Page 34 January. s. gradient calculations needed in the optimization algorithm usually make up the most expensive step. The second alternative for gradient calculation is through sensitivity analysis. Here.1 u-1 However. but not too small to lead to roundoff errors.y. Bard (1974) summarizes Marquardt's original algorithm for the selection of p. where the values.. 16 with respect to 8. y.'$ U=l and the Hessian obtained by: with Note that the first term in the Hessian formulation contains the residual vector. The alternative Hessian is proposed as: and the search direction is chosen to satisfy: / n \ n In parameter estimation. Thus. 1981). The Levenberg-Marquardt method proposes a nonnegative addition to the Gauss-NewtonHessian approximation to insure a positive definite Hessian and a descent search direction. 25. the former are more efficient but less accurate. SOLUTIONS OF PARAMETER-ESTIMATION PROBLEM u. otherwise Five research groups provided solutions to the parameter-estimation problem formulated in the first section. Thus. Assuming that the residual is small. Its value must be chosen to minimize the effects of truncation error and computation error. This type of error is proportional to the inverse of the perturbation size for both finite-differenceformulas. Here. 1986 AlChE Journal (Vol. 1979). 32. u. Using this Hessian to find a . J . at time u vector of measured values at time u with the gradient represented by: or g(9) = Here {is a small positive tolerance. Defining a new coordinate transformation: g s . the Hessian can be approximated only by the first derivatives contained in the second term of the Hessian. Since forward differencesrequire half the number of function evaluations needed for central difference approximations. For instance. or v i = 0. e.-' if X > <. one needs to obtain dy/d8. are not solved explicitlyfrom the model. it is important that the finite-differenceinterval be kept small to control the truncation errors. y. Another way of insuring descent directions with Gauss-Newton type methods is through rotational discrimination (Fariss and Law.g I ! leads to each element of s given by: Si = In order to obtain c3yyl8 one must solve the DAE model together with the set of simultaneous differential and algebraic equations given by Eq. Bard (1974) illustrates the use of these equations to calculate the gradient for a nonlinear differential equation model. While these methods ensure positive definiteness of H. since computation errors will overshadow the gradient calculation. the adjustable model parameters.CJ'e n n S.

However. 1) January. however. Switzerland (5) B. The explanation for this transformation will be d i s c d after describing the solutions. Very few numerical integration packages have the built-in capability of solving DAE’s simultaneously. This prevented the Complex algorithm from using an incorrect value of the objective function but required the algorithm to choose more points. the lower bound for the value of the model prediction. some conclusions can be drawn from this study and much can be gained from the problem-solvingtechniques used for this parameter estimation problem. and starting point stated above. Korea (4) R. 3 ODE’s (3) B. convergence problems could be handled by defining a constraint function that was violated whenever the model failed to converge. these modifications preclude a straightforward comparison of the model solution and optimization algorithms. The two different optimization techniques required different model tolerances to handle the limitations of the model solving routine. OF SUBMITTED SOLUTIONS- Na of Components No. Thus. (LeastSquares) 3 8 3 ODEs ‘See Amendix B for initial and final Darameter estimates of each investieator _________ AlChE Journal (Vol. tight relative and absolute error toleranceswere used in order to compute gradients and search directions. Needless to say. Fariss. a transformation of the final parameter vectors for each solution is given in Table 3. W. we refrain from reporting and interpreting final covariance matrices on a consistent basis. they may not be totally objective. If model convergence failure occurred during a forward difference perturbation then the gradient with respect to the perturbed variable was set to zero. 21 from getting too large. a loose error tolerance was used to evaluate more points for a fixed CPU time. Rimensberger & D. convergence problems can occur in solvingthe model for certain choices of the model parameters picked by the optimization scheme. Klaus. Ahn. Indian Orchard. Madison.r. The investigators are listed with their affiliationsin Table 1. Finally. since only three solutions r e g r d on all four measurements. the optimal parameter values and their confidence intervals are listed in Appendix B. University of Wisconsin. Unknown 7. Seoul. since the experimental values of component BM are derived and the other TABLE SUMMARY 1. solved with each set of final parameter values. Two optimization strategies. MA Objective Function Unknown General V. H. To prevent the second term in Eq. It was observed. a solution was attempted by the second author using the model. This code uses the stable Gear backward difference corrector formulas to convert the differential equations to algebraic equations. However. Monsanto Plastics and Resins. However.rt = 1 Unknown Diagonal Vtt. Instead we describe the characteristics of the solutionsand model more qualitatively. against the three data sets. Due to the different nature of the solutions and for the sake of brevity. Unknown Diagonal Vii Unknown yi 4 17 3 ODE’s 4 Algebraic Eqs. Sarup. A summary of results is given in Table 2. For the gradient-based algorithm. Regressed 2 Parameters Optimized 12 Final Model Formulation 6 ODEs 4 Algebraic Eqs. W. for the sake of brevity we have chosen to present only the first (and largest) data set in Figures 2 through 8. Michelsen & J. It should be cautioned that systematic errors may be observed in the fits of this data set due to the need to compensate for opposing errors in the other sets. The algebraic and differential equation variables are then solved using a modified Newton method. More specific details about each solution are available on request from the first author. Danmarks Tekniske Hojskole.l. Instead. California. The original model consists of a DAE system with six ODE’s and four algebraic equations and is stiff for the initial set of parameters. Hence. Because the model convergence problems could not be eliminated completely by tightening the error tolerance. Zurich. Korean Advanced Institute of Science and Technology. different initial parameter estimates were used for the optimization. This heuristic helped to avoid grossly inaccurate search directions as a result of failed gradient calculations. objective function. while Figures 2 through 8 give a qualitative comparison of the five solutions. however. Before beginning. = 0 4 ODE’s Eidgenoessische Technixhe Hochwhule. Villadsen. Unknown Diagonal V. T. Due to the different nature of each solution. With this approach. In three cases. the most straightforward comparison can be made simply by plotting the model. The intention of this exercise was to see how easily the problem could be solved with availablesoftware and no “coaxing”on the researcher’spart. Caracotsios. T. M. Sorensen. that convergence failures occurred less often if tighter tolerances are imposed for each step of the DAE solver. of Investigator (1) M. WI (2) R. 1986 Page 35 . No. A brief account of the computational difficulties encountered is given below. 32. Rippin. a few minor changes were made in the formulation of the objectivefunction. S. with a plot of the model with initial parameter estimates in Figure 1. only these could be compared with a common objectivefunction. it was essential for the model to converge as often as possible. Both algorithms handle parameter bounds easily but do not take advantage of the structure of the objective function. With the MC routine. One of these is the DASSL software package developed at the Sandia National Laboratory in Livermore. yur. 1965) and a quasi-Newton (QN) gradient-based strategy were chosen for parameter adjustment. was set to 1. Denmark Unit Matrix V. Stewart & J. E.tions were obtained by reformulating the model or the parameters.0 x Also. Lyngby. Attempted Solution To provide a measure of the difficulty of this problem. a modified Complex (MC) routine (Box.

TABLE 2. three measurements were normalized. but is replaced by 6 if either yui or yuj is .84 (2) 1.O throughout the optimization runs. 0. Solution of this problem pnxeeded in two stages.35 .42 8. The DAE solver had failure rates of 28 and 12% for MC and QN. Closer inspection of the Hessian showed that parameters p .07 26.1/Tb)Po lo@.0) 55 (115) 637 (290) 459.9 Final objective function value Rot.98 (5) 2. This number was reduced to one whenever a concentration was missing. 561. **Sensitivityequations evaluated once per iteration.199. 25 were used to calculate the gradient of the objective function with respect to all of the model and statistical parameters. .94 47 144 (5) (*) Optimization algorithm Quadratic Min. the heteroscedasticity parameters. First. This generalizes the objective function to: u-1.0 (55. After determining that the HABM and AB measurements were probably not independent. 362.72 561. Convergence with successive quadratic minimization was obtained after four additional iterations and the objective was found to be The first group of investigators also used DASSL to solve the DAE model. sensitivity equationsgiven by Eq. 753.44 9.14 Page 36 January. The successive quadratic minimization algorithm described in Stewart and Sorensen (1981) was applied here.17 Computer system VAX 11/780 IBM 3081 Cyber 174/18 CDC 6500 IBM 3033 DEC-20 ‘Attempted solution (Damiano). Secondly. and general unknown covariance.67 1. 32.88 2. only two of the four “experimental’’ components.85 25. respectively. Eq. the problem was formulated without heteroscedasticity.44 10. No. 23 No.9 ( .41 21. of function evaluations CPU time.(1IT . To avoid the convergenceproblems discussed above.87 25.(X 10-7 E. Discrim.9 (. fixed. of iterations No.52 (4) 2. The results of the optimizations using M C and QN are displayed in Figures 2 and 3. Finally. Also. SUMMARY O~IM~ZATION OF RESULTS Participants (Refer to Table 1) (3) ~ (4) Marq. . This approach is a generalization of the Gauss-Newton method that allows for missing data. respectively.15 K T i transformation was chosen to prevent the parameters hs from varying over large orders of magnitude and to reduce the intercorrelation o the parameters in the rate constants.(1/T .84 17. with the result that it required much more computational effort than MC. *** Not reported due to excessive I /Oat execution.44 9. yi.80 656. The second stage proceeded from this solution with the “heteroscedastic” objective function given above and p . min 20 20** 19 38’ 21.04 2. at most two observations were used (HABM and AB) from each time point t.92 (3) 1.43 10.1.30 Marq.53 24 577 38. 20 was replaced by a full covariance matrix Yi/e with elements: Vuij= ~ .73 18.84 18.courtis normally equal to the parameter wij. the problem converged but the normal equations were ill-conditioned. log k2 = p2 .26 18.84 2.15 1. The model parameters were transformed . A summary of the results is given in Table 2.-lj-I n m u-1 u-li=l where 8{ = eU{/yui%/z. (342) kz (342).) -p4ITb log(&) = -p51Tb log(K3) = -pelT.233 QN .68 18. much tighter model tolerances were chosen for QN. .1/Tb)ps log(kl/k-l) = . After 15 iterations with Marquardt’s method.p 3 / T b .295. 1) .94 2. Several modifications were made to the maximum likelihood objective function. Both algorithms terminated far from the optimum after exhausting too much CPU time.009. H A and HABM. Tb 342. E.84 753. the f elementsof the measurement covariancematrix were added as regression parameters.20 1. linear constraints on the parameters.0) .84 Marq.295. Logarithmic transformations of the kinetic parameters were made for manipulation by the optimization algorithms. missing. ~ ~ y . ~ Y i / z y ~ ~Here .74 18..** 0. Finally.21 2.90 1.(342)K31Kz (1) 1. because of the data dependencies mentioned above.72 17 104 14.199. were used. First Solution for deletion when [HA] was missing.(X lop3) E ~ ( low3) X K1/K3 k-. and p .88 25. could not be estimated simultaneously. as follows: log k1 = p .1/Tb)p.38 18. after some trial runsthe error tolerances were adjusted automatically over the course of the optimization. [AB] was chosen - - TABLE SUMMARY OPTIMAL 3. As mentioned above.27) (MC) Equivalent value.48 19.05 1.89 18. 1986 AlChE Journal (Vol.78 17. OF SOLUTION VECTORS Participants Transformed Parameters k.(1/T . were held constant at 1. the diagonal variance in Eq.

Model with optimal parameter estimates from MC attempted solution. Y . 32. IY 50 160 150 200 250 SO0 350 400 450 500 . AlChE Journal (Vol. No. 1 - d A 1 . 1) January. 65: 550 rh=1 Figure 2.- .0 MOOEL FIT "/ I f a 1 . 1986 Page 37 .

Gradients were calculated using a central difference formula requiring twice as many function evaluations as gradient values. 4 . missing data were handled simply by using partial summationsin the objectivefunction. The revised model consisted of Eqs. according to the transformation: log k = This participant opted for the above transformation because a values tend to vary over orders of magnitude and their interaction with the activation energy leads to an ill-conditioned problem. Also. Eq. More will be said later about dependency in the model parameters.and ye. The solution was found by executingsix passes of the optimization algorithm. Eq. the model parameters were transformed to: The log ko parameters were calculated relative to a base temperature of 67OC.)/dt. by 112 (zUi+ yJ. Finally. Since a diagonal covariance matrix was assumed. 1 .and yl0. hand sides of Eqs. The differential equations were first integrated using the Harwell stiff integrating package. 1 and 6 as d(1og yl)/dtand d(1og y. The rotational discrimination technique of Faris and Law (1979) was used for optimization.19 f 0. In addition. The solution was thus found by careful monitoring and appropriate intervention by the investigator over the course of the convergencehistory. while the final YAB was 0. the objective function. To avoid additional problems with log yl. of y1 to ye. 23 was Page 38 January. 1986 AlChE Journal (Vol. T eliminate these problems. Then for fixed values . y4.No. This left the left . Here.12. Problems were observed in the numerical integration scheme o with ye becoming negative. with predictions for yz. Here the DAE model was transformed by noting that three of the six ODEs are dependent and can be reduced to algebraic equations. and y5. with all parameters varying in the last pass as the optimum is approached. and up that determine it are drastically different.y9.independent of pg as well. respectively. 21 was modified by replacing the model prediction yui. DCOlAD. y3. In each pass a subset of the kinetic and statistical parameten were allowed to vary. The parameter Y H A B M converged to its lower bound of zero.67 T + 273 The third investigator reduced the DAE model to three independent ODEs that provide values for yl.. 1 and 6 were divided by y1 and y. Here V. 1) . and yscalculated by material balances using yl. Second Solution Results from the second solution are shown in Figure 5 . respectively. 1was further modified to force the derivative d(1og yl)/dtto zero smoothly for small values of yl. is about the same value as the other variances but the parameters y. Eqs. 7 as a tear set and subsequently solvingfor yw. Third Solution log k o + ( E I0. Figure 4 shows how this solution fits the experimental data. and 6. The IMSL routine DGEAR was used to integrate the ODE'S and Eq. y3. this investigator also optimized the statistical parameters y and o and noted that the solution is not unique because two linear combinations of the model parameters can be identified. 32. which uses Gear's backward difference formulas. No confidence intervals were reported by this investigator because he felt the assumed error structure was inappropriate for this problem. a tight tolerance was required for the algebraic system to avoid numerical problems with the integration scheme.67558)- T . This is confirmed by his results in Appendix B. the original algebraic equations were solved using y and Eq.

I ? b Figure 4. J AlChE Journal (Vd. 1986 Page 39 . 32. Model wlth optimal parameter estimates from solution (1). 1) January. No.

The Harwell library routine (VBOIAD). theproblem required 104 function evaluationsbefore it converged. based on the Levenberg-Marquardt method.&?= 340 K) were the adjustable kinetic parameters and the parameters log (K. O and (E. the starting point was altered by changing the initial valueof &from lo-” to 10-l5. ti investigator reported hs good results using an unknown diagonal covariance matrix without heteroscedasticity. Some problems were encountered in the integration step with initial values of the parameter estimates. Page 40 January. was used to minimize the least-squaresproblem. The best results reprted by this investigator are shown in Table 2 and Figure 7. rather than the value of the model prediction.changed so that the variance was proportional to the measured value raised to a power. and several strategies for treating numerical difficulties./RT. were set to unity for the optimization runs and the Levenberg-Marquardt routine from the IMSL library was used to optimize the objective function. Moreover. 1986 AlChE Journal (Vol. Instead. Gradients were calculated by finite difference as described in the IMSL manual.0 x 10-l7. after modification to handle the sensitivity equations. DGEAR. However. T i substitution set the last term and the denominator of hs the second term in Eq. It contains a modified Marquardt algorithm with revisions by Fletcher (1971).. and still provide a value for the original objective function. More will be said log (K3/K2) Figure 6. These researchers refrained from using the likelihood function based on Eq. Further. which allowed him to optimize a simpler objectivefunction. The IMSL library subroutine. Also.An optimal solution could not be reported with the original objective function. Sensitivity equations were formulated and solved simultaneously with the model equations to compute the gradients of the residuals with respect to the model parameters. using the three equilibrium relations and assuming almost undissociated species and weak acids.0 x lo-” to 1. the rate constant parameters were reformulated using a transformation similar to that used in the second solution.Evenso. FifthSolution This participant reduced the six ODE’s and four algebraic equations to four ODES by substituting for the equilibrium relationships and dependent state variables. 32. All gradients were calculated by finite difference. Fourth Solution by changing the initial estimate of K2 from 1. they reduced the DAE model to three coupled ODES. was used for this task. a partial summation of this function was used to account for the missing data. No.) (T. A partial summation was used in the objective function to account for the missing data. The objective function takes the general form given by the second term in Eq. Figure 6 displays the solution found by this investigator. The optimization routine was developed by Klaus and Rippin (1979). Model with optimal parameter estimates from solution (3). a gradient projection technique for handling bounds. Log k . Zeroes were substituted for missing data in calculating the objective function. they used a simple least-squares function and excluded the derived BM measurements from the analysis. The model was solved using a semiimplicitthird-order Runge-Kutta method by Michelsen (1976).IK3)and were given starting guesses of zero. stating that the dependencies in the data do not make thischoice suitable. 23 equal to a constant. 24. y. The heteroscedasticityparameters. but these were resolved These participants elected to reduce the system of ODE’s to only threeby writing total mass balances. 20. 1) .

however. Thus the unknown diagonal covariance assumption covariance should be replaced with an unknown general covariance. we note that three species were measured as experimental data and normalized to meet a mass balance constraint. these data dependencies and the model deficiency mentioned above clearly violate the assumed-error structure for which the maximum likelihood function was derived. K . Consequently. This immediately eliminates the equilibrium relations and reduces the model to eight parameters and three ODE’S. A l q while the justification for the heteroscedasticity assumption is weak. one also encounters convergence problems. His solution to the modified model yielded an improved maximum likelihood gmolikg. Thus. AlChE Journal (Vol. 32. A summary of their comments along with a few other points is given below. the complexity of the objective function greatly inhs creases. To avoid this problem. objective function.z . the transformed parameters in Table 3 are very similar. Although the original optimal parameter values vary a great deal for these solutions (seeAppendix B). After solving the regression hs lo problem. SUITABILITY OF THE KINETIC MODEL From the solutions described above. . Thus the errors at long times will be biased by the model and render the error assumptions invalid. No. level off at about 3 . it is readily seen from the kinetic mechanism that [HA]is predicted by the model to vanish at steady state. and K3. First. The first and fourth investigators observed that these measurements were not independent even if only two components were used in the objective function.4 x gmollkg. a number of observations can be made about the kinetic model and how it should be treated for parameter estimation. + 2 5 I 0 5 3 J Figure 7. during the optimization. With their solutions. Clearly the addition of the reverse step helps to remedy the model deficiency. 1) January. ti assumption was a s consistent with the results. Model with optimal parameter estimates from solution (4). although this step is not apparent from prior knowledge of this reaction. The fourth component was derived from a relation on [BW that generally does not hold. Through analysis of the Hessian matrix after spectral decomposition. 1986 Page 41 . The model can also be reduced to deal with a smaller parameter set through physical arguments. Consequently. several investigators submitted very detailed analyses about the suitability of the model. The [HA]measurements. In Table 3 we present values for K1/K3. Second.about this transformation in the following sections. yuj)yi for yujyi in the tors substituted either zuIyj or 1 2 2 .ko-1K3/K2 and the remaining parameters for the five solutions. Finally.. The second investigator attempted to remedy this situation by making the second reaction reversible and introducing new parameters for k . The fifth investigator performed part of this reduction by assuming that the small equilibrium constants would lead to [ H + ] = 0. The first investigators used [max( yuj)]7j in their calculations. severalinvestigatorsnoted dependenciesamong the kinetic parameters. He also optimum and led to final [HA]of about found the modified model much easier to solve. y1 is forced to zero regardless of the actual error structure. Results for this solution are presented in Figure 8. as was done by the first investigator. the second and third investiga/(. Kz. T i becomes especially serious when [HA] goes to zero and the objective function becomes unbounded. the second investigator noted a “two-dimensional dependency” among the parameters k-. This not only leads to nonunique optimal solutions but also yields an H matrix that becomes singular.

complete conversion of H A to HABM occurs before any A B is formed. To obtain lower. Kirkby. First. 23. - = f = = = = F g = = h H = coefficientsin ODE solver partitioned matrix in DAE system activation energies experimental error vector for experiment u righthand side of ODE system righthand side of DAE system algebraic equations. The only explanation for this was offered by the fifth investigators who noted that. 32.. = 1 was selected. This suggests that an interactive approach and a great deal of experiencemay still be required to solve difficult parameter-estimation problems. Although it was not necessary for obtaining a solution. Investigators one and two used multiple restarts to converge from the original starting point.Figure 8. the parameters should be relatively uncorrelated and scaled so that they do not vary over many orders of magnitude. For the attempted solution these led to frequent convergence failures and poor performance. All but the third and fourth investigators used log transformations of the equilibrium constants and kinetic parameters centered about a mean base temperature. numerous difficulties were encountered especially for stiff sets of parameter values. Here. the last three investigators used different starting guesses in solvingthis problem. while the second kept the statistical parameters constant until he was close to the solution. 1) . a starting guess of K. e. for partial support of this work. b. helpful comments. and hence the convergenceof the problem. Second. Most investigators reduced the model to a system of ODE’S. This is easily seen in Figure 1.IK.This led to fewer convergence problems and more efficient solutions. also gradient vector step size for ODE solver Hessian matrix Page 42 January. Finally. Marks of the Dow Chemical Company who assisted in the problem formulation. DASSL was used to solve the DAE system. administered by the American Chemical Society. NOTATION a.. Agin. The use of different starting points suggests a more general approach that appears in all of the solutions. This helps to improve the conditioning. = 1 and K J K . more reasonable peak values of HABM. ACKNOWLEDGMENT The authors wish to express their gratitude to G. Special thanks go to the five participants whose submitted solutions. their preliminary runs dealt with simpler optimization problems which may have given better progress than Eq. and M. with the initial guesses in the Dow problem statement. Moreover. Model with optimal parameter estimates from solution (5)...No. c.. For the original model. from the above solutions it appears that efficient and userfriendly software is not yet available that automatically handles all of the difficulties encountered... L. a number of problems can also be avoided by transforming the model. we need to consider transformation of kinetic parameters. b. GUIDELINES FOR PARAMETER ESTIMATION The following conclusions serve as guidelines for tackling dynamic parameter-estimation problems. Here. the first investigators began with a likelihood function without heteroscedasticity. and interest made this paper possible. Finally. acknowledgment is made to the Donors of the Petroleum Research Fund. E E. 1986 AlChE Journal (Vol. Also.

18 (l). Can. J. 594 \ (1976). 1. Fletcher... 17.Engineering Foundation. Rippin ‘A New Flexible and Easy-to-UseGeneralPurpose Regression Program. and W. Stat. T.0 1. “A Modified Marquardt Subroutine for Nonlinear Least Squares. and Chem. W. England (1971). G . M. M. 133. 3.. Sci.”IEEE Tmns.al..131 (1981).. Seider. 22.0 x 1013 x 104 x 1013 x 104 x 11 05 x 104 x 10-17 x lo-” x 10-17 The initial model conditions in addition to those given in the data sets are: Y5 ye = 0.”AAEREReport. “Some Software for Solving Ordinary Differential Equations. 32. and J. W. l? M.. and seoision received Ian. Box.” Comp. and D.1985. Murray. 2.0 1.Kz + [K. I? E.22. and J. Fariss. Practical OptirnizaCion. Efficient General-Purpose Method for the Integra“An tion of Stiff Ordinary Differential Equations.No. R. 0. “Guidelinesfor the Optimal Design of Experimentr to oes” Estimate Parameters in First-Order Kinetic M d l . “Bayesian Estimation of Common Parameters from Multiresponse Data with Missing Observations. 6799.to Improve Correlation of Experimental Data.. Engineering Foundation.0 1.0 2.. 105 (1979). N. Comput. New York .0 2. Eng. 8 (l). Kz y] yz 73 y 4 = = = = = = = = = = 2. (1974). and L.. 23. D. 1.. Law.0 4. Soremen. Manusrript received June5. No. L. Lapidus. (1977). Stewart.” Comp. Eng. Gear.89 (Jan. “Numerical Solution of Diffenmtial Equations-An Overview.iteration counter equilibrium constants rate constants maximum likelihood function number of measured variables number of experiments transformed parameter vector function values in ODE solver eigenvector matrix for rotational discrimination search directionfor 0 temperature time at which experiment u was taken covariance matrix model prediction measured variables Gill.5} = Y7 = 0 - 0 AlChE Journal (Vol. R..3 2.0 1.2 + 4K2y1(0)]0. G. Chan. “A New Method of Constrained Optimizationand a Comparison with Other Methods”ComputerJ..95 (1979). “AnEfficientComputationalTechnique for Generalized Application of Maximum Likelihd. Byme. W. Carnahan. E.” Ind.0. Eng. W (1981). Foundations of Computer-Aided Chemical Process Design. 42 (1965). 1) January.. petzold. H m e l l .0 = y7 Ys Y9 YlO 1/21 . Reilly.”Technometrics. Academic Press. Michelsen. W. Wright. R. “AdaptiveSemi-ImplicitRunge-Kutta tf Method for Solution of S i f Ordinary Differential Equations. 1971). et.. on Circuit Theory. 1. 1986 Page 43 . and V. H. New York. (1978). “Differential/AlgebraicEquations Are Not ODES. 3 (3). Nonlinear Pammeter Estimation.. “SimultaneousNumerical Solution of Differential-Algebraic Equations. D..0 2. E_.. B..0 1.403 (1981). “Solution of Stiff Differential s Equations and the U e of Embedding Techniques” I rL EC Fund. H. Chem.” AIChE l. Wilkes. 367 (1982).0 1. L. Prokopakis.225 (1981)... Fund.. C.0131 . Bimbaum. preexponential factor in rate constants heteroscedasticity parameters step length residual vector for experiment u tolerance for rotational discrimination parameter vector mean of measured variables eigenvalues of H Levenberg-Marquardt parameter transformed search direction variance coefficient for heteroscedasticity LITERATURE CITED APPENDIX A EXPERIMENTALDATA AND INITIAL CONDITIONS The initial parameter estimates are: ( Y ~ = 3 El (Y2 Ez Bard.614.1084. and M. Eng. Klaus.. I.. New York. Academic Pres. K.” Foundations of Computw-Aided Process Design. London (1981). C h . and C h a . Y. 55. Y A.”Siam 1.0 1. H . J.

7515 5.0081 0. vs.00 3.4068 1.2065 0.0104 0.42 124.0167 7.0931 0.6916 0.0225 0.75 580.0894 0.75 1.25 292.2026 8.9193 5.33 12.0000 0. 32.0556 1.0042 0.8818 5.0032 5.3325 7.5638 0.25 147.1231 6.50 6.2262 8.50 S.1941 1.75 5.50 53. T AT Time h TABLE RUN 2 CONCENTRATIONTIME A2.0660 1.0022 0.7021 5.1068 0.83 197.6408 0.0017 - Concentration (gmollkg) BM HABM 0.4859 0. vs.0362 1.8878 5.7222 5.0024 - 0.0895 0.25 316.00 11.00 8.50 1.5558 5.0039 8.5202 0.00 9.75 364.9985 5.9541 5.0267 0.7875 5.5673 1.9326 0.33 102.7641 5.6123 6.4760 0.0029 0.0171 0.9097 1.4604 0.6363 0.0059 1.08 171.1674 0.08 0.8440 7.0195 1.0234 0.5271 0.8931 0.9835 0.5646 0.0012 0.50 12.33 HA BM HABM AB 7.6596 1.1371 0.2309 6.3065 8.0086 0.75 4. DATA 67OC (cont'd) at AB Timeh HA Concentration (gmol/kg) EM HABM AB HA 1.8075 0.5656 0.7066 1.5288 0.0957 4.0485 1.5434 0.0047 0.9530 1.9751 0.50 3.0115 0.0650 0.4475 0.5354 0.5892 1.5305 0.5382 0.0084 5.7016 0.9186 5.7088 0.25 274.0049 0.8896 0.6837 0.5580 0.17 1.0155 0.00 1.83 30.6400 1.5939 5.6208 5.3200 8.7680 5.2158 7.5608 1.75 13.2899 6.8514 0.75 28.2437 8.00 5.5303 0.5295 0.3134 7.6467 0.1287 5.5617 0.5441 1.0145 0.5905 5.7833 5.75 3.1781 8.50 21.08 0.0528 0.5294 0.7112 0.0752 0.0440 1.25 553.33 27.1018 1.0365 1.oo00 0.9826 7.0260 1.0017 0.7944 5.2949 0.4075 1.5953 0.7212 5.0183 0.1935 0.50 4.0763 0.08 2.1036 0.25 842.4862 5.0026 0.25 124.6672 0.5261 0.8064 0.1220 6.5138 0.7915 0.2730 8.5133 1.0881 1.5465 O.5246 0.67 83.7775 5.9744 1.0026 0.6680 5.5516 5.2074 0.6143 1.0574 1.7065 0.75 8.0200 0.75 460.2929 1.0141 1.4796 5.2954 8.8957 0.9793 0.0412 0.33 228.0598 0.6305 1.1004 1.7870 5.33 293.5336 0.7468 0.6494 0.5276 0.0444 1.83 23.0050 0.5631 5.0036 0.50 16.9237 0.1174 0.25 340.75 29.5397 0.0082 0.5785 0.6960 1. T ~ E AT Concentration (gmollkg) Time h 0.00 0.75 507.0042 - 0. No.0199 1.0518 0.0059 0.0459 1.0027 0.9402 1.0636 5.25 106.5320 0.0096 0.5315 0.3364 6.6402 0.0381 1.7147 0.0015 0.7644 5.0110 0.25 196.7396 0.6593 5.5736 5.00 4.33 270.25 442.5877 0.6722 5.9956 1.0326 1.4857 5.7774 5.0970 1.0298 0.9425 0.9556 0.6918 0.7912 5.3861 7.7706 5.0946 1.0049 0.25 52.7735 5.3849 6.0028 0.75 412.0391 0.83 148.5173 0.7086 0.8037 5.0923 0.0403 1.0244 0.0310 1.0192 0.00 0.9720 0.1265 8.0046 - 0.0312 0.0952 1.5602 0.6977 7.0085 0.58 1.0166 1.5234 0.1370 1.0088 0.0597 TABLE RUN2 CONCENTRATION MDATA 67OC A2.5268 0.6216 0.0268 0.1745 0. VS.0003 1.8821 0.0059 0.7495 0.1012 1.75 46.05 21.6485 0.2522 0.4268 0.83 51.0401 1.33 5.2308 1.5686 0.5587 0.7628 0.75 483.8048 5.4792 5.8086 5.8643 6.TABLE RUN1 CONCENTRATION w DATA 40OC Al.6871 0.0015 0.7063 0.4799 5.75 240.0026 0.25 673.8389 0.7869 0.7519 6.5139 5.7596 5.6294 0.6397 0.0332 1.5069 0. 1986 AlChE Journal (Vol.0945 - 0.0024 0.0551 1.6351 5.4790 5.6690 0.0186 0.1793 0.0963 1.0118 0.25 219.0322 0.8171 0. 1) .1007 1.50 9.2277 8.4843 5.6047 0.7874 5. TIME AT Concentration (gmollkg) Time h HA BM HABM AB 0.7745 0.9155 0.0228 0.0077 0.50 2.2229 0.0305 1.5472 5.50 11.50 13.7704 5.8285 5.33 0.0396 1.9332 0.0oO6 - 93.0616 1.0470 0.0095 0.33 Page 44 January.0060 0.6497 1.7659 0.9464 0.0180 0.3763 0.50 14.6176 5.1495 6.5302 0.7680 5.5627 0.25 386.0022 0.75 8.5773 0.0626 0.0135 0.4858 5.0692 1.7268 0.58 2.00 0.5991 5.6131 5.6452 0.33 3.8556 5.6584 7.0340 1.50 5.75 651.0083 0.5542 0.8492 0.7263 0.0030 8.0669 TABLE RUN3 CONCENTFIATION DAIX lOOOC A3.42 0.00 10.08 1.25 172.25 76.00 2.0320 0.6927 0.6826 1. vs.5617 0.5572 0.00 7.5316 0.3546 8.50 10.0229 1.1336 0.0445 0.0074 0.

95)*10-' (5.666.0 2.393 20.2972 0.7875 f 44.822 f 9.0 x 1.32)010-~ (-2. Esnmm Parameter ff1 .0 1.0 x 104 1.0 x 10-17 1.8745 x 1.00036)*104 * 0. AND ESTIMATES Initial Estimate Final Estimate Pll PlZ PI3 P14 * t * Om* 0. x 104 x 10-17 x 10-11 x 10-18 AlChE Journal (Vd. CARACOTSIOS.997 2.627 2.0 2.0 f 0.0.0.627 2.313 f (1. &mm Initial Estimate Final Estimate ffZ E Z ff-1 Parameter Pl Pz P3 P 4 P5 Pe P 7 P H P9 PO l 1. TABLE B7.1938 1.200'* 0.000 6. INVESTIGATOR (3).5 To TABLE INVESTIGATOR B2. No.8165 (1. DAMUNO.7884 x 10-18 0.00515)*101* * 0. (2).APPENDIX B: INITIAL AND FINAL PARAMETER ESTIMATES WITH 95% CONFIDENCE INTERVALS TABLE B4.00266)*10-18 * 0.1938 1. '95% confidence intervals not reported.0 x 104 1.0.0 2.0 2.0943)*104 0.057 0.0558 50 10 E-1 Kl K Z K3 2.104 -39.0 x 1013 104 1013 104 1015 104 10-17 10-17 10-17 (1. Parameter In l q El ln k ? E Z In b-.0525 2.8911 x 2.8221 x 10'3 x x x x 14 0 1011 104 1015 E-1 lnK.08 f 0.8397 & (2.0 x 10-11 1.39.0181 -2.392 f 19.00076)*104 0.6215 x 1012 lo4 1012 104 1W TABLE B6.0 2.00186)*1012 0.5150 -38.78 f 17. E-I 1n(K1/K3) 1n(K3/KZ) INVESTIGATOR (5).104 30.0 x 10-17 1.837. VILLADSEN.000** 9.10-3 0.0 1.0110 x 1.0 x 10-11 1. Ez l m- Initial Estimate Final Estimate 30.06)*10-4 TABLE B5.8961 1.876 x 10-14 1.0132)*10-17 0.104 10.0 x 10-17 '95% confidence intervals not estimated.0 x WZ 03 w4 '95% confidence intervals not estimated.1901 (1. _____ INVESTIGATOR (l).4829 k * 0.0 x 2.210 f 553.070.0 0.960 x 1.104 35.3708 x 1.393.340K Initial Estimate Final Estimate El ffZ Ez ff-1 2.065 0.1352 0.3 x x x x x 1013 104 1013 104 1015 104 10-17 10-11 10-17 1.8476 x 5.0 x LO x 1.72 f 2.0 4.3 x x x x x 1013 104 1013 104 10'5 7. Parameter ffl INVESTIGATOR (4).533 f (1.164 1000 0.9075 x 1. MICHELSEN.0 x 10-17 1.0 x 2.0 x 1. ESTIMATES Final Estimate ~~ Initial Estimate El TABLE B1. AND SORENSEN.594 f (2.779 -39.022 20.8003 x 2. **95% confidenceintervals arenotestimated. ESTIMATES ON Initial Estimate Final Estimate Parameter Parameter lm l El lna.7486 7.1852 1.000 1.0 13.1796 2.0 4.2052 1.033 222.837 f 1.022 20.0046 x 3.3969 x 6.06393 4.493 f 211 9. 32.388 . ESTIMATES MC Parameter ffl Initial Estimate Final Estimate E-1 Kl K Z K3 Y1 YZ Y3 Y4 2.0404 (8.7958 5. DAMIANO.575 x 10-16 4.9016 7. 1) January.0 10.2271 -0.0 2. lnK.0 1.8412 (2.000 0 0 0.190 f 0.230 f 579 (1.070.82 25.0 4.0 x 1. RIPPIN.173 2.02473 El ff2 Ez ff-I 2.1658 f 12. 5.2282 x 1.0 x 4. TABLE B3. lnK3 f 0.821 2.0955 (1.166 f 5.019 1.8835 f 27.0615 0.0 10. AHN.144 -34.0 x 2.0473)*104 f 5.1944 1.0296 x 1.0 2.2263 f (2.00897)*1016 * 0.143 0.264 0.8255 k (1.7965 f 1.39 f 1.144 28.393.457 f 233 -3.00024)*104 0.391 0. 1986 Page 45 .03105 0. STEWART.3845 x 1013 104 1013 14 0 1015 104 10-17 10-11 IO-IW W1 E-I Kl K Z K 3 2.3 x x x x x 1013 104 1013 104 1015 1.0 0 0.0 2.0196)*10-19 20.0 x 1. THIS STUDY.9 0.01 f 18.700 f 0.3570f 0.6046 x 104 2.885 f 8. SARUP.0254).3 'Not specified.3 x 2. FARISS.0 13.ESTIMATES ~ ~ ~~~ THIS STUDY.8149 7.0 8.667 -33.792.118 1.0 2.850 f 0.

Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

We've moved you to where you read on your other device.

Get the full title to continue

Get the full title to continue listening from where you left off, or restart the preview.

scribd