



PII: S0016-0032(97)00004-5

J. Franklin Inst. Vol. 335B, No. 2, pp. 259-279, 1998
© 1997 The Franklin Institute
Published by Elsevier Science Ltd
Printed in Great Britain
0016-0032/98 $19.00+0.00

Design of Experiments


J. A. Jacquez*

Departments of Biostatistics and Physiology, The University of Michigan, Ann Arbor, MI, U.S.A.
(Received 10 July 1996; accepted 3 January 1997)

In this article the author reviews and presents the theory involved in the design of
experiments for parameter estimation, a process of importance for pharmacokinetics, the study of
intermediary metabolism and other sections of biological research. The design of experiments
involves both formal and informal aspects. The formal theories of identifiability of parameters and
model distinguishability, and the generation of optimal sampling schedules, play major roles. After
presenting these theories, both the formal and informal aspects are studied in the examination of a
simple but realistic example. © 1997 The Franklin Institute. Published by Elsevier Science Ltd


The term ‘design of experiments’ covers a large number of activities. Some are
strongly dependent on experience in a particular field and are so informal as to be
labeled ‘intuitive’. Others depend on formal mathematical developments such as in
optimization theory. Furthermore, the term has distinctly different meanings in statistics and in systems theory.
The goal of this paper is first, to provide an overview that distinguishes between the
different uses of the term ‘design of experiments’, and then to concentrate on the design
of experiments in the context of systems theory, where the theories of identifiability,
optimal sampling theory and model distinguishability play major roles. In the main,
the paper is concerned with the design of input-output experiments in which the
number of samples, though possibly large, is more usually limited.
The design of experiments



In a general sense, the design of experiments involves all stages in the choice of which
experiments to use to test hypotheses. That includes the choice of experimental subjects,
operations to be carried out and measurements to be made, as well as the choice of
measuring instruments. All of that is strongly dependent on the state of knowledge in
the field and on the technology available to make measurements. Many experiments

*To whom all correspondence should be addressed at: 490 Huntington Drive, Ann Arbor, 48104, U.S.A. Tel: (313) 663-4783.

J. A. Jacquez

are now feasible that could not be carried out even 10 years ago. As done in the day-to-day operation of a laboratory, much of this is intuitive; it comes out of laboratory
meetings in which progress and problems in a project are discussed, usually at least
It is important to realize that the major early activities in the development of any
field consist of gathering data and ordering and classifying it in an attempt to define
the operational entities which can serve as conceptual units in forming a meaningful
picture of the structure of that part of the universe. It is only when that activity is well
underway, so that some knowledge and hypotheses about the structure of the field have
accumulated, that one can really design experiments.
The design of experiments is the operational side of the scientific method which is
based on a few principles. One principle, often labeled Popperian (1),
is that one cannot
prove hypotheses, one can only disprove hypotheses. However, that idea goes further
back and was well stated by Fisher (2) in his book The Design of Experiments. Thus
the recommended strategy for scientific investigation is to design experiments with a
view to disproving a hypothesis at hand. Less well known is the method of Strong
Inference (3) due to Platt. If one can formulate a complete set of alternative hypotheses
about an issue, then one can design experiments to systematically disprove one after
another of the hypotheses until only one unfalsified hypothesis remains. That one
must be true.
The role of models and modeling
Models First, let us distinguish between two different types of models because what

a physiologist or biochemist calls a model is quite different from what a statistician
calls a model. The physiological literature now distinguishes between two extremes in
a spectrum of models. One extreme has been called ‘models of systems’, the other
‘models of data’ (4). A model of a system is one in which we know the equations that
describe the action of the basic physical laws that drive the system. For that reason, I
prefer the term ‘models of process’ and will use that term. In contrast, a model of data
is one that fits the data without reference to the basic processes that generate the data.
However, it is possible to have a model of data that is generated by a model of process.
Also, it is possible to have a model of a system which is a model of process for part of
the system and a model of data for other parts of the system.
Exploring with models For most problems there is some information on the structure
and function of the system as well as some data in the literature. Given that, and the
availability of simulation software, one can generate models that incorporate the
available information and run them to see how they respond to different experiments
designed to test hypotheses that are current in the field. Simulation allows one to
explore how good candidate models are in explaining known data and how well they
perform in testing hypotheses. That sort of exploratory modeling can play an important
role in the planning process.
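This kind of exploratory simulation need not be elaborate. The sketch below is a minimal, hypothetical example (all rate constants are invented): it simulates two candidate models of a drug-washout experiment and locates the sampling time at which their predictions diverge most, which is where an experiment would best discriminate between the hypotheses.

```python
# Exploratory simulation sketch: two candidate hypotheses for drug washout
# from a single pool. All rate constants are hypothetical. We look for the
# sampling time at which the candidate models disagree most, since samples
# there are the most informative for discriminating the hypotheses.

def simulate(elimination, q0=10.0, dt=0.01, t_end=8.0):
    """Euler integration of dq/dt = -elimination(q); returns (t, q) pairs."""
    q = q0
    curve = []
    for i in range(int(t_end / dt) + 1):
        curve.append((round(i * dt, 4), q))
        q += dt * (-elimination(q))
    return curve

def first_order(q, k=0.5):
    # Hypothesis 1: first-order elimination, dq/dt = -k*q
    return k * q

def saturable(q, vmax=3.0, km=4.0):
    # Hypothesis 2: saturable (Michaelis-Menten type) elimination
    return vmax * q / (km + q)

curve1 = simulate(first_order)
curve2 = simulate(saturable)

# Time of maximal disagreement between the two candidate models
max_diff, t_best = max(
    (abs(q1 - q2), t) for (t, q1), (_, q2) in zip(curve1, curve2))
```

Running models against each other in this way, before going to the bench, is exactly the kind of cheap exploration the text recommends.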
I believe that efficiency in the experimental sciences involves judiciously combining
modeling and actual experimentation. Laboratory experiments are expensive in time
and resources. Although one may have to run a few experiments to obtain preliminary
estimates of parameters and to check techniques, it is generally inefficient to rush to
carry out many experiments. It is far more efficient to formally list hypotheses and the

models of possible experiments and then to run the experiments on the models of the system. Then one can check identifiability, model distinguishability, and generate optimal sampling designs so as to do fewer but more efficient experiments.

Design of experiments - pass one
First we have to distinguish between the statistical design of experiments and design of experiments in systems work. The field of statistics has developed a body of theory, often called design of experiments, which is quite different from the subject matter of this paper. It owes much to the early work of Fisher (2). It arose first in the analysis of experiments in agriculture. It is concerned with models of data from experiments in which one compares the effects of two or more treatments. The emphasis is on formal methods of randomization in allocating treatments so as to optimize between-treatment comparisons. Major issues are to eliminate bias by proper randomization, choice of proper controls, stratification, balance in treatment comparisons and the generation of designs such as factorial designs. Statistical design of experiments is the basis for extensive developments on the analysis of variance. It is a field that has been well worked and is treated in many standard texts in statistics, as well as some specialist texts (2, 5). This paper is not concerned with the statistical design of experiments but with design for parameter estimation of models of process.

It is important to recognize that we do experiments on systems in the real world and then interpret the data by analysis of models of the experiments done on models of the systems. To do that, we can think in terms of two stages in the modeling process.
1. A model of the system represents the current hypotheses of the structure, rate laws and values of the parameters of the system.
2. A model of the experiment is a model of the experiment done on the model of the system.
We are concerned with models of experiments. It should be noted that in engineering the terminology is different; there, what we call the model of an experiment is called the system.

The next few sections are devoted to the presentation of the theory of a number of techniques that are components of the design of experiments. After an introduction to basic systems theory, identifiability and methods of checking identifiability, estimability and optimal sampling schedules and model distinguishability are presented, and then these are used in an example as an illustration.

Basic systems theory
In the biological sciences we try to obtain models of process of the system of interest in order to analyze the outcomes of experiments. Let x be the vector of state variables of the model. The inputs in the experiment are often described as the product of a matrix B and a vector of possible inputs, u. The inputs are linear combinations of the components of the vector u, i.e. Bu. For given initial conditions and input to the model, the time course of change in the vector of state variables is usually given by a set of

differential equations:

dx/dt = F(x, θ, Bu, t),   x(0) = x0   (1)

where θ is a vector of the basic kinetic parameters and x0 gives the initial conditions. The observations are usually specified by giving the vector function of the state variables that describes what is measured:

y = G(x, θ, Bu, t) = G(t, θ) = G(t, φ)   (2)

We call y the observation function or the response function. Notice that G is written in two ways: G(t, θ) is the observation function as a function of time and the basic parameters; G(t, φ) is the observation function as a function of time and the parameters φ, called the observational parameters. The observational parameters are functions of the basic parameters that are uniquely determined by the observation function. The actual observations, zj at time tj, are samples of the response function at different times with added experimental errors of measurement:

zj = y(tj, φ) + ej,   j = 1, ..., n   (3)

where ej is the vector of measurement errors at sample time tj. In engineering it is common to refer to input-output experiments, the response function (observation function) being the output. To fully specify an experiment, we also have to give the inputs, Bu, and the observations.

Compartmental models shall be used frequently in this review and in the development of the theory that follows. If the model is a compartment model (6, 7) with constant transfer coefficients, the equation corresponding to Eq (1) is:

dq/dt = Kq + Bu,   q(0) = q0   (4)

In Eq (4), q is the vector of compartment sizes and K is the matrix of transfer coefficients. The components of K are the basic kinetic or structural parameters of the model. If the observations are linear combinations of the compartments, the observation function is given by Eq (5), in which C is the observation matrix:

y = Cq   (5)

Basic parameters could also be introduced by the experimental design by way of the initial conditions.

Identifiability

An introductory example
A simple example from enzyme kinetics illustrates the basic features of the identifiability problem (6, 8). Consider a one-substrate, one-product enzyme reaction, as shown in Fig. 1.

[Fig. 1. One-substrate, one-product enzyme reaction: S + E (k1, k2) ES (k3, k4) E + P.]
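To make Eqs (4) and (5) concrete, here is a minimal numerical sketch. The two-compartment model and its transfer coefficients are hypothetical, and the bolus input is folded into the initial condition q(0), so that dq/dt = Kq between samples.

```python
# Numerical sketch of Eqs (4)-(5): a two-compartment model dq/dt = K q
# after a unit bolus into compartment 1 (input folded into q(0)), observing
# y = C q. Transfer coefficients are hypothetical. RK4 on plain lists.

k01, k12, k21 = 0.3, 0.2, 0.5            # hypothetical transfer coefficients
K = [[-(k01 + k21), k12],
     [k21, -k12]]
C = [1.0, 0.0]                           # observe compartment 1 only

def f(q):
    return [K[0][0] * q[0] + K[0][1] * q[1],
            K[1][0] * q[0] + K[1][1] * q[1]]

def response(t_end=5.0, dt=0.001, q0=(1.0, 0.0)):
    """Sample the response function y(t) = C.q(t) at every step."""
    q = list(q0)
    ys = []
    for _ in range(int(t_end / dt) + 1):
        ys.append(C[0] * q[0] + C[1] * q[1])
        a = f(q)
        b = f([q[i] + 0.5 * dt * a[i] for i in range(2)])
        c = f([q[i] + 0.5 * dt * b[i] for i in range(2)])
        d = f([q[i] + dt * c[i] for i in range(2)])
        q = [q[i] + dt * (a[i] + 2 * b[i] + 2 * c[i] + d[i]) / 6
             for i in range(2)]
    return ys

y = response()
```

The list y is the (noiseless) response function of Eq (2); the actual data of Eq (3) would be samples of it with measurement error added.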

Suppose one measures the initial velocity of the formation of product, P, at a series of substrate concentrations, S, or the initial rate of formation of substrate at a series of concentrations of product. If the rate of formation of the intermediate complex, ES, is rapid in relation to the rate of formation of product and substrate, it is well known that the initial velocities in the forwards and backwards directions show saturation kinetics in the substrate and product concentrations respectively. Equation (6) gives the Michaelis-Menten equations for the forward and backward initial velocities:

vf = VMf [S] / (Kmf + [S]),   vb = VMb [P] / (Kmb + [P])   (6)

The parameters VMf, Kmf, VMb and Kmb are functions of the basic kinetic parameters k1, ..., k4, and of the total enzyme concentration, E0, as given by:

φ1 = VMf = k3 E0,   φ2 = Kmf = (k2 + k3)/k1,   φ3 = VMb = k2 E0,   φ4 = Kmb = (k2 + k3)/k4   (7)

From the forwards velocity experiment, one can estimate φ1 = VMf and φ2 = Kmf; from the backwards velocity experiment, one can estimate φ3 = VMb and φ4 = Kmb. Suppose we do only the forwards velocity experiment and estimate φ1 and φ2. Then if we know E0, k3 is uniquely determined by the equation for φ1. However, no matter how accurately we determine φ1 and φ2, k1 and k2 cannot be determined, and k4 does not even appear in the equations for φ1 and φ2: that parameter can be changed without affecting the observations. Such a parameter is insensible in the experiment and hence is called an insensible parameter. On the other hand, if we do only the backwards velocity experiment and estimate φ3 and φ4, then if we know E0, k2 is uniquely determined by the equation for φ3. However, no matter how accurately we determine φ3 and φ4, k3 and k4 cannot be determined, and k1 does not even appear in the equations for φ3 and φ4. It is clear that only if one knows E0, and one does both of the above experiments, can one obtain estimates of all four basic kinetic parameters.

If a basic parameter does influence the observations in an experiment, it is sensible by that experiment. As can be seen from the example, a sensible parameter may or may not be uniquely determined (identifiable) by the experiment. In the above example, there were sensible parameters that were identifiable and others that were not.

Classification of parameters
It is important to distinguish between the basic parameters and the parameters that are determinable by an experiment. The basic parameters, θi, can be basic kinetic parameters of the system model or parameters introduced by the experimental design. The parameters determinable by an experiment are called observational parameters and are denoted by the symbol φi; in summary, the observational parameters are functions of the basic kinetic parameters.
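Returning to the enzyme example: the bookkeeping in Eq (7) can be checked mechanically. The sketch below takes hypothetical values of k1, ..., k4 and E0, forms the four observational parameters, and then inverts Eq (7), which is only possible because both experiments and E0 are available.

```python
# Recover k1..k4 from the observational parameters of BOTH initial-velocity
# experiments plus the total enzyme concentration E0 (Eq 7). The "true"
# values below are hypothetical.

k1, k2, k3, k4, E0 = 2.0, 0.7, 1.3, 0.4, 5.0

# Observational parameters, Eq (7)
phi1 = k3 * E0                 # VMf
phi2 = (k2 + k3) / k1          # Kmf
phi3 = k2 * E0                 # VMb
phi4 = (k2 + k3) / k4          # Kmb

# Inversion: needs E0 and all four phi's
k3_hat = phi1 / E0             # from the forward experiment
k2_hat = phi3 / E0             # from the backward experiment
k1_hat = (k2_hat + k3_hat) / phi2
k4_hat = (k2_hat + k3_hat) / phi4
```

Dropping either experiment, or the value of E0, leaves the system of Eq (7) underdetermined, which is the nonidentifiability discussed above.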

For a given experiment, the parameters can be classified as follows.
1. Basic parameters: the basic parameters are the system invariants (kinetic parameters of the system) plus possibly some parameters introduced by the experimental design. For a given experiment, they may be:
(a) sensible, i.e. influence the observations in the experiment, or insensible, i.e. do not influence the observations;
(b) identifiable or nonidentifiable.
2. Observational parameters: the observational parameters are determined by the experimental design and are functions of a basic parameter set.

Identifiability (a priori identifiability): definitions
The identifiability we have discussed so far is concerned with the question of uniqueness of solutions for the basic parameters from the observation function of a given experiment. In some of the literature that is also called a priori identifiability, to distinguish it from a posteriori identifiability. Here the term identifiability is used for a priori identifiability; a posteriori identifiability is included under the term estimability, which is defined later. The various types of (a priori) identifiability that have been defined in the literature are defined below in (a)-(f).

(a) Local identifiability. If the observation function for an experiment determines a finite number of values for a parameter, the parameter is locally identifiable. Local identifiability includes cases of symmetry in models in which two or more parameters play equivalent roles, so their values can be interchanged. Additional information may be needed to decide which one of the values is appropriate for the physiological system you are working on.

(b) Global identifiability. If the observation function determines exactly one solution value for a parameter in the entire parameter space, that parameter is globally identifiable for that experiment. The term unique identifiability is equivalent to global identifiability. Thus, global identifiability is a subcategory of local identifiability.

(c) Structural identifiability. A property of a parameter is structural if it holds almost everywhere in parameter space, i.e. it does not depend on the values of the parameters. The qualifier 'structural' applied to a property means the property is generic, in the almost everywhere sense (9). The qualification 'almost everywhere' means that the property might not hold on a special subset of measure zero. Thus a parameter could be globally identifiable almost everywhere but only locally identifiable for a few special values.

(d) Model identifiability. If, for an experiment model, all of the parameters of the model are globally identifiable, the experiment model is globally identifiable. If all of the parameters are identifiable but at least one is not globally identifiable, the model is only locally identifiable.

(e) Conditional identifiability. In some cases a parameter is not identifiable, but setting the values of one or more other parameters makes it identifiable. By 'setting a parameter' we mean that we assign a value to it and then treat it as known, i.e. remove it from the parameter set. In that case, the parameter is identifiable conditioned on the parameters that are preset.

(f) Interval identifiability and quasi-identifiability. DiStefano (10) used the term interval identifiability to describe the restriction of a parameter to a subspace by the constraints of a problem. The values of nonidentifiable parameters often are constrained to fall in intervals. If the interval is small enough so that a parameter is 'identifiable for practical purposes', DiStefano calls that quasi-identifiability.

Notice that (a priori) identifiability is concerned only with whether or not the observation function, and therefore the observational parameters, uniquely define the basic parameters. It has nothing to do with actual samples or sampling errors. In contrast, a posteriori identifiability is concerned with the estimability of parameters for particular samples. For that reason I call it estimability and will deal with it in more detail under estimability and optimal sampling design.

Methods of checking identifiability
For fairly simple problems, one can often determine identifiability by inspection of the observation function, but as soon as the models become more complex that is no longer possible. A number of methods available for checking identifiability are summarized in this section. The methods differ for linear and nonlinear systems. Before considering that, let us make clear the distinction between linear and nonlinear systems and linear and nonlinear parameters. For a linear system, the rates of change of the state variables are given by linear differential equations. Such systems have the superposition or input linearity property, which means that the response to a sum of two inputs equals the sum of the responses to the individual inputs. In contrast, the rates of change of the state variables of nonlinear systems are given by nonlinear differential equations, and superposition does not hold.

When applied to the parameters of a system, the terms linear and nonlinear have entirely different meanings; they then refer to the way the parameters appear in the solutions for the state variables or the observation functions. Even for linear systems, many of the parameters appear nonlinearly in the solutions, and therefore the observational parameters are nonlinear parameters. Suppose x is a state variable and the solution of the differential equation is of the form

x = A1 e^(λ1 t) + A2 e^(λ2 t)   (9)

A1 and A2 appear linearly and are linear parameters, whereas λ1 and λ2 are nonlinear parameters.

Methods for linear systems with constant coefficients
Topological properties. For compartmental systems, some simple topological properties of the connection diagram should be checked first. They provide necessary but not sufficient conditions for identifiability. For more details see the books by Carson et al. (11), Godfrey (12), Jacquez (6) and Walter (9).
1. Input and output reachability. There must be a path from some experimental input to each of the compartments of the model, and there must be a path from each compartment to some observation site.
2. Condition on number of parameters. The number of unknown parameters must not exceed a number which depends on the topology of the system.
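The reachability conditions are mechanical graph checks. A sketch, on a hypothetical three-compartment connection diagram with input into compartment 1 and observation of compartment 3:

```python
# Mechanical check of the reachability conditions on a connection diagram.
# The three-compartment diagram below is hypothetical: input into
# compartment 1, observation of compartment 3.

def reachable(adj, start_set):
    """All nodes reachable from start_set in a directed graph."""
    seen = set(start_set)
    frontier = list(start_set)
    while frontier:
        node = frontier.pop()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

compartments = {1, 2, 3}
adj = {1: [2], 2: [1, 3], 3: []}        # arrows: material flow i -> j
reverse_adj = {1: [2], 2: [1], 3: [2]}  # same arrows, reversed

# 1. every compartment reachable from some input
input_ok = reachable(adj, {1}) == compartments
# 2. some observation site reachable from every compartment
output_ok = reachable(reverse_adj, {3}) == compartments

# A diagram that fails: no arrow ever enters compartment 3
adj_bad = {1: [2], 2: [1], 3: []}
input_bad_ok = reachable(adj_bad, {1}) == compartments
```

Passing these checks does not establish identifiability; failing them settles the question immediately, which is why they are worth running first.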

The Laplace transform or transfer function method. This method is simple in theory and is the most widely used, although it becomes quite cumbersome with large models. Take Laplace transforms of the system differential equations and solve the resulting algebraic equations for the transforms of the state variables. Then write the Laplace transform for the observation function. That will be of the form of a ratio of polynomials in the transform variable s:

Y(s) = N(s; φ) / D(s; φ)   (10)

where the coefficients, φi, of the numerator and denominator polynomials are the observational parameters, which are functions of the basic parameters. For checking parameter identifiability, that gives a set of nonlinear algebraic equations in the basic parameters. The hard part is to determine which of the basic parameters are uniquely determined by this set of nonlinear equations. A program called PRIDE is now available which uses the transfer function approach plus topological properties to express the coefficients of the transfer function in terms of the cycles and paths connecting the inputs and outputs of an experiment, and uses that to test whether the parameters are globally or locally identifiable (13).

Two further methods are available: the similarity transformation method and the modal matrix method.

The similarity transformation method. Consider a compartmental system for which the coefficient matrix K has been subjected to a similarity transformation to give a system with a coefficient matrix P⁻¹KP, where P is nonsingular. Impose on P⁻¹KP all the structural constraints on K, and require that the response function of the system with matrix P⁻¹KP be the same as that of the system with matrix K. If the only P that satisfies those requirements is the identity matrix, all parameters are globally identifiable. If a P ≠ I satisfies the requirements, one can work out which parameters are identifiable and which are not.

The modal matrix method. The matrix whose columns are the eigenvectors is called the modal matrix. Recall that under a similarity transformation the eigenvalues do not change. In this approach, one looks at the response function to see if the eigenvalues and the components of the modal matrix are identifiable. This method is used less often than the previous two.

Methods for nonlinear systems
Although there is a large literature on identifiability for linear systems with constant coefficients, less has been done on nonlinear systems. First, note that if a linear model is identifiable with some input in an experiment, it is identifiable from impulsive inputs into the same compartments. That allows one to use impulsive inputs in checking identifiability even if the actual input in the experiment is not an impulse. Whereas for linear systems one can substitute impulsive inputs for the experimental inputs for the analysis of identifiability, for nonlinear systems one must analyze the input-output experiment for the actual inputs used. That is a drawback. On the other hand, experience shows that frequently the introduction of nonlinearities makes a formerly nonidentifiable model identifiable for a given experiment. For nonlinear systems, three methods have received most attention.
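An aside on the linear case: the responses of linear models are sums of exponentials, and the interchange symmetry noted earlier under local identifiability is easy to exhibit for them numerically. In this hypothetical check, swapping the two (coefficient, exponent) pairs of Eq (9) leaves the observation function unchanged at every sampling time, so the observation function determines the pairs only up to interchange.

```python
import math

# Interchange symmetry: in y(t) = A1*exp(-l1*t) + A2*exp(-l2*t) the pairs
# (A1, l1) and (A2, l2) play equivalent roles, so the observation function
# determines them only up to a swap: a finite number (two) of solutions,
# i.e. local but not global identifiability. Values are hypothetical.

def y(t, A1, l1, A2, l2):
    return A1 * math.exp(-l1 * t) + A2 * math.exp(-l2 * t)

times = [0.1 * i for i in range(50)]
theta = (2.0, 0.3, 5.0, 1.7)
theta_swapped = (5.0, 1.7, 2.0, 0.3)

max_gap = max(abs(y(t, *theta) - y(t, *theta_swapped)) for t in times)
```

Since max_gap is zero, no experiment with this observation function can distinguish the two parameter vectors; outside information must break the tie.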

Taylor series
A method (14) used widely depends on expanding the observation function in a Taylor's series around t = 0+. The coefficients of the expansion are functions of the basic parameters and are the observational parameters:

φi = fi(θ1, ..., θp),   i = 1, 2, ...

Although there may be an infinite number of coefficients, only a finite number are independent; as one adds coefficients from terms of higher and higher order, eventually one reaches coefficients that are no longer independent of the preceding ones. The method works for linear and nonlinear systems, compartmental or noncompartmental. Furthermore, for linear systems it gives structural local identifiability. For problems of low dimensionality it is easy to generate the φi explicitly as functions of the θi and check identifiability on the functional relations. For problems of even moderate magnitude, however, the algebraic work involved in finding the φi and solving the equations may become limiting.

Similarity transformation
The method of similarity transformations has been extended to nonlinear systems (15), but so far there has been little experience with the method.

Local identifiability at a point
It is natural to develop the theory of identifiability in terms of two levels of parameters, the basic parameters and the observational parameters, which are identifiable functions of the basic parameters. There are many similarities with the corresponding theory for the estimation of parameters; since we shall need the latter shortly for the discussion of estimability and optimal sampling design, let us develop the two in parallel. An important finding is that if one has initial estimates of the basic parameters, one can determine local identifiability numerically at the initial estimates directly, without having to generate the observational parameters as explicit functions of the basic parameters.

We develop the basic theory for checking identifiability at a point, assuming we have fairly good initial estimates of the parameters, which we shall, for the moment, treat as though they were the correct values for the parameters. Let x be the vector of state variables. Recall that the model describing the dynamics of the experiment is

dx/dt = F(x, θ, Bu, t),   x(0) = x0   (11)

where θ is a vector of basic parameters and x0 gives the initial conditions. The observation functions (response functions) are:

y = G(x, θ, Bu, t) = G(t, θ) = G(t, φ)   (12)

The actual observations at point j are

zij = Gi(tj, θ) + eij   (13)

where eij is the error of measurement of Gi(tj, θ); assume it is zero mean with variance σij². To keep things simple, we assume that there is only one observation function and develop the least squares theory. Let there be p basic parameters and assume we have estimates, θ⁰. For the parameters set at these estimates, calculate a set of values of the observation function

at n points in time, for n > p. For small deviations in the parameters, linearize the observation function in the parameters θk around the known values:

yj = Gj⁰ + Σk (∂Gj/∂θk)⁰ Δθk + ej   (14)

The superscript ⁰ means the term is to be evaluated at the known values (the estimates) of the parameters. Notice that the ej are not measurement errors; they are truncation errors in the expansion, and ej → 0 in order (Δθ)² as Δθ → 0. Since the ej play no role in the theory, we drop them. Notice also that yj is the value of the response function, whereas zj is the measured observation. The two sums of squares, for identifiability and for estimation of parameters respectively, are:

SI = Σj [yj − Gj⁰ − Σk (∂Gj/∂θk)⁰ Δθk]²   (15a)

SE = Σj (1/σj²) [zj − Gj⁰ − Σk (∂Gj/∂θk)⁰ Δθk]²   (15b)

Next, take derivatives of the sums of squares with respect to the Δθk and set them to zero to obtain the normal equations:

gᵀg Δθ = 0   (16a)

for identifiability, and

gᵀΣ⁻¹g Δθ = gᵀΣ⁻¹δ = gᵀΣ⁻¹(z − G⁰)   (16b)

for estimation of parameters. Here g is the sensitivity matrix

g = [∂Gj/∂θk],   j = 1, ..., n;  k = 1, ..., p   (17)

and Σ⁻¹ is the diagonal matrix of the inverses of the variances of the measurement errors:

Σ⁻¹ = diag(1/σ1², ..., 1/σn²)   (18)
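The check behind Eq (16a) can be run numerically without ever deriving the φi: build g by finite differences at the initial estimates and examine det(gᵀg). Below is a sketch on a deliberately overparameterized hypothetical model, y = a·b·e^(−kt), in which a and b enter only through their product and so cannot both be identifiable.

```python
import math

# Numerical local identifiability check at a point (Eq 16a): form the
# sensitivity matrix g by central differences at the initial estimates and
# examine det(g^T g). In y = a*b*exp(-k*t), a and b enter only through the
# product a*b, so the 3-parameter version cannot be locally identifiable.
# All parameter values are hypothetical.

def G(theta, t):
    a, b, k = theta
    return a * b * math.exp(-k * t)

def gram_det3(theta, times, h=1e-6):
    """det(g^T g) for the finite-difference sensitivity matrix g (p = 3)."""
    p = len(theta)
    g = []
    for t in times:
        row = []
        for j in range(p):
            up = list(theta); up[j] += h
            dn = list(theta); dn[j] -= h
            row.append((G(up, t) - G(dn, t)) / (2 * h))
        g.append(row)
    M = [[sum(r[i] * r[j] for r in g) for j in range(p)] for i in range(p)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

times = [0.5, 1.0, 2.0, 4.0]
det3 = gram_det3([2.0, 1.5, 0.8], times)   # (a, b, k): rank deficient

# Reparameterize with c = a*b: the Gram determinant is now well away from
# zero, so (c, k) is locally identifiable (exact sensitivities for brevity).
def gram_det2(c, k):
    rows = [(math.exp(-k * t), -t * c * math.exp(-k * t)) for t in times]
    m00 = sum(r[0] * r[0] for r in rows)
    m01 = sum(r[0] * r[1] for r in rows)
    m11 = sum(r[1] * r[1] for r in rows)
    return m00 * m11 - m01 * m01

det2 = gram_det2(3.0, 0.8)
```

The contrast between the two determinants is exactly the dichotomy stated next: a vanishing det(gᵀg) signals local nonidentifiability at the point tested.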

From this development, one can see the following.
1. If det(gᵀg) = 0, the model is not locally identifiable (some parameters may be identifiable but not all); otherwise, the model is locally identifiable. Note that for an identifiability check we only need to calculate the values of y at a series of points at the initial estimates for the θi; we don't need to do the experiment. This method has been programmed in a series of programs called IDENT (16).
2. Then

(det(gᵀg) = 0) ⇒ (det(gᵀΣ⁻¹g) = 0)   (19)

so obviously one cannot estimate parameters that are not identifiable; we don't want to commit resources to an experiment if the parameters we want to estimate are not identifiable!

Estimability
In this paper, the term estimability is used as a general term to cover the various issues involved in evaluating the quality of estimates of parameters, beyond the question of (a priori) identifiability (16, 17). It includes a posteriori identifiability, estimation of the variances of the estimates, and analysis of the impact of correlations between the estimates. There are qualitative and quantitative aspects to estimability. From the qualitative viewpoint, it should be obvious that one must take samples at at least as many points as there are parameters; otherwise, one cannot obtain estimates even though the parameters are identifiable. That is usually referred to as a posteriori identifiability. From a quantitative viewpoint, we would like estimates with small variances and with no correlations between the estimates of the parameters. Unfortunately, parameter estimates are almost always correlated, and correlations between estimates of the parameters degrade their value, even when the variances of the estimates are relatively small. One can increase the sample size, but increasing the number of samples is not enough; optimal placement of the samples turns out to be more important. That is covered in the next section.

Optimal sampling schedules
Assume we have a specific model for an input-output experiment, i.e. we have a model of a system on which we do a specific input-output experiment. For that, we are given:
1. a prior estimate, θ*, of the p-vector of parameters θ;
2. the variances of the measurement errors;
3. N ≥ p samples, measurements of the observation function, to be taken in a sampling interval [0, T].
The problem is to pick the times of the N samples to optimize the estimation of θ*. To optimize in this case means to minimize some measure of the variances and covariances of the estimates of θ*. Ideally one would like to obtain a diagonal covariance matrix, i.e. with the correlations between the estimates of the parameters zero, and with small diagonal entries.
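The covariance structure being discussed can be computed directly for a toy case. For a hypothetical one-exponential model y = A·e^(−kt) with a uniform sampling design, invert the 2x2 information matrix gᵀΣ⁻¹g and read off the correlation between the estimates of A and k; it comes out strongly positive, illustrating why correlations, not just variances, matter.

```python
import math

# Covariance of the estimates for a toy one-exponential model
# y = A*exp(-k*t): invert the 2x2 information matrix g^T Sigma^-1 g and
# read off the correlation between the estimates of A and k. The prior
# estimates, error variance and uniform design are all hypothetical.

A, k, sigma2 = 10.0, 1.0, 0.04

def sens(t):
    e = math.exp(-k * t)
    return (e, -A * t * e)          # (dy/dA, dy/dk)

times = [0.25 * i for i in range(1, 9)]       # uniform design on (0, 2]
rows = [sens(t) for t in times]
m00 = sum(r[0] * r[0] for r in rows) / sigma2
m01 = sum(r[0] * r[1] for r in rows) / sigma2
m11 = sum(r[1] * r[1] for r in rows) / sigma2

det = m00 * m11 - m01 * m01
cov_AA, cov_kk = m11 / det, m00 / det         # diagonal of the inverse
cov_Ak = -m01 / det
corr = cov_Ak / math.sqrt(cov_AA * cov_kk)
```

For this design corr exceeds 0.5 by a wide margin; choosing the sampling times to shrink such off-diagonal terms is the subject of the design criteria below.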
Terminology and some background

We start with definitions of important terms. Here the term estimability is used as a general term to cover the various issues involved in evaluating the quality of estimates of parameters. Throughout, we have a model of a system on which we do a specific input-output experiment, i.e. measurements of the observation function.

1. Sampling design. Any choice of points ti in [0, T], i in {1, ..., N}, is a sampling design.
2. Points of support. The points ti are the points of support of the design.

From the qualitative viewpoint, it should be obvious that one must take samples at at least as many points as there are parameters; but increasing the number of samples is not enough, the parameters must also be identifiable. A method for checking identifiability numerically has been programmed in a series of programs called IDENT (16).

Two complications have not been explicitly introduced in order to keep the derivations simple. First, prior information on the system may appear in additional equations of constraint, with or without the explicit appearance of some parameters. If the equations of constraint are in terms of parameters only, they are used to reduce the parameter set (some parameters may be zero, for example). If the equations of constraint depend on state variables, they are part of the model equations and are solved with the dynamical equations. Second, parameters that have to be estimated may be introduced in the observations and not be present in the system model (see Jacquez and Perry (16)). Since the theory for estimation is for estimation of parameters that appear in the observations, they are included and can be handled without any basic change in the theory, although with some increase in complexity of the presentation.

Theory of parameter estimation

The theory for parameter estimation has already been presented in Eqs (11)-(19); Equation (16b) is the equation for estimation of deviations around the initial estimate. Let us assume the parameters are identifiable. Important points to note are as follows.

1. Optimal sampling schedules for nonlinear parameters depend on the values of the parameters. Thus we need to know the values of the parameters to obtain an optimal sampling schedule. Obviously, if we really knew the values of the parameters there would be no point in doing the experiment to estimate them! Hence the need for a prior estimate, which can then be used in a sequential estimation scheme: use the prior estimate to obtain an optimal sampling schedule, then do the experiment to obtain a better estimate, and repeat the process (18).

2. It has been shown that for p parameters one needs at most p(p+1)/2 + 1 distinct points of support in a sampling design (22). However, experience with compartmental models shows that optimal designs usually require only p points of support (23). Thus, if one takes N = kp samples, where k is an integer, the optimal design places k samples at each of the points of support.

3. Fortunately, optimal sampling schedules are rarely sharply optimal; a design with points not far from the optimal design is not far from optimal. It is also worth noting and emphasizing that the points of support are usually far from uniformly spaced; geometric spacings often come closer to optimal designs. Finally, it should be stressed that an optimal design holds for the given model of an input-output experiment; if the model is misspecified, it is likely that the design will not be optimal.

For introductory reviews see Refs (19-21); for more detailed presentations of the theory of optimal sampling design see Fedorov (22), Landaw (23) and Walter and Pronzato (24).
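The geometric spacing mentioned above is easy to generate: one simply keeps a constant ratio between successive sample times. A minimal sketch (the endpoints and sample count are illustrative assumptions, not values from the text):

```python
import numpy as np

def geometric_schedule(t_first, t_last, n):
    """n sample times with a constant ratio between successive times."""
    r = (t_last / t_first) ** (1.0 / (n - 1))  # common ratio
    return t_first * r ** np.arange(n)

# e.g. five samples between t = 0.1 and t = 10 (illustrative values)
times = geometric_schedule(0.1, 10.0, 5)
```

Unlike a uniform partition, this concentrates samples at early times, where the response of a decaying system changes fastest.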

Optimal sampling designs

It is obvious that the entries in (gᵀΣ⁻¹g) depend on the times at which samples are taken, i.e. on the sampling design. gᵀΣ⁻¹g is the Fisher information matrix, I. One can see intuitively why (gᵀΣ⁻¹g) is so important: it is the sensitivity matrix weighted by the inverse error variances, and clearly one wants samples placed so as to increase the sensitivity of the estimates to changes in the parameters. If the determinant of gᵀΣ⁻¹g is nonzero, (gᵀΣ⁻¹g)⁻¹ is proportional to the covariance matrix of the estimates of θ, and to obtain good estimates of θ we want the covariance matrix to be small, i.e. with small diagonal entries; increasing I decreases the covariance matrix. If the errors of measurement are not large, one expects (z - g(θ0)) to be small and so Δθ will be small, provided that the determinant of (gᵀΣ⁻¹g) is not small, because then small changes in z have very small effects on the estimates of θ. If the determinant of (gᵀΣ⁻¹g) = 0, the model is unidentifiable and Eqs (16b) have no solution.

The problem then is to choose some objective function of I = gᵀΣ⁻¹g and find the sampling design that optimizes this objective function. The criterion used most widely is to maximize the determinant of I, but other design criteria have also been used; the major ones are given below.

D-optimal designs. (gᵀΣ⁻¹g)⁻¹ is an estimate of the covariance matrix of the estimates of θ. Its determinant is proportional to the volumes of the confidence ellipsoids around the minimum of the sum of squares S in parameter space. Thus minimizing the determinant of (gᵀΣ⁻¹g)⁻¹, or equivalently maximizing the determinant of gᵀΣ⁻¹g, minimizes the volumes of these confidence ellipsoids; that is what a D-optimal design does. D-optimal designs are the most widely used. An important property of D-optimal designs is that they are independent of the units (scales) chosen for the parameters.

A-optimal designs. These minimize the sum of the diagonal elements (the trace) of (gᵀΣ⁻¹g)⁻¹, which is equivalent to minimizing the average variance of the estimates of the parameters. A-optimal designs are not independent of the scales used for the parameters.

E-optimal designs. These minimize the maximum eigenvalue of (gᵀΣ⁻¹g)⁻¹, which is equivalent to minimizing the length of the longest principal axis of the confidence ellipsoid in parameter space.

C-optimal designs. These minimize the trace of C(gᵀΣ⁻¹g)⁻¹, where C is a diagonal matrix whose entries are the inverse squares of the values of the parameters at the minimum of S. The result is to minimize the average squared coefficient of variation. C-optimal designs are also independent of the scales of the parameters.

Since D-optimal designs are the most widely used, it is worth examining their properties in more detail.
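As a small illustration of the D-optimality criterion, the sketch below computes gᵀΣ⁻¹g for a one-parameter monoexponential observation function y(t) = e^(-kt) and compares two candidate designs by the determinant. The model and the numbers are illustrative assumptions, not taken from the text; for this model the sensitivity |dy/dk| = t e^(-kt) peaks at t = 1/k, so a design sampling near 1/k carries more information than one sampling far out on the tail.

```python
import numpy as np

def fisher_information(times, k, sigma2=1.0):
    """Fisher information g^T Sigma^{-1} g for y(t; k) = exp(-k*t),
    assuming independent measurement errors of variance sigma2."""
    g = (-times * np.exp(-k * times)).reshape(-1, 1)  # sensitivities dy/dk
    Sigma_inv = np.eye(len(times)) / sigma2
    return g.T @ Sigma_inv @ g

k = 1.0
# D-optimality compares det(g^T Sigma^{-1} g) between candidate designs
I_good = fisher_information(np.array([1.0, 1.0]), k)  # two samples at t = 1/k
I_poor = fisher_information(np.array([5.0, 5.0]), k)  # two samples on the tail
```

With one parameter the information matrix is 1 x 1, so the determinant is just the summed squared weighted sensitivity; the same comparison carries over unchanged to the multiparameter case.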

The matrix of second derivatives of the sum of squares S with respect to the parameters is called the Hessian. One can show that the principal axes of the confidence ellipsoids around the minimum of S are along the eigenvectors of the Hessian and are intersected by the sum-of-squares surface at distances proportional to 1/√γi, where γi is the ith eigenvalue of the Hessian (6). Thus it is possible for the confidence ellipsoids to be long and narrow in some directions; that implies high correlations between estimates of some of the basic parameters. For some purposes it might be better to give up some of the volume of the minimum ellipsoids in exchange for ellipsoids that are closer to spheres, so as to decrease the correlations between basic parameters. E-optimal designs tend to do that: allowing for differences between the lengths of the principal axes of the ellipsoids but aligning the principal axes with the parameter axes reduces the off-diagonal terms of the Hessian, i.e. the correlations between the estimates of the parameters. Another approach, the M-optimality of Nathanson and Saidel (25), minimizes the angle between the ellipsoid axes and the reference axes in the space of the basic parameters.

Numerical methods

Although there is an extensive and often complicated body of theory on optimal sampling designs (22, 23), we do not pursue that here but look at a simpler problem that is much closer to actual practice in the design of experiments. With the power of modern computers, a systematic search of the interval [0, T] provides a simple and direct approach to generating optimal sampling designs. The method is most obvious for one observation function. On the interval [0, T], place a lattice of N + 1 > 3p points; there is no problem with using 100-300 or more points. Because the points of support of optimal designs are usually far from uniformly spaced, it is better to choose an initial partition that divides [0, T] in a geometrically increasing spacing rather than to divide the interval into N equal subintervals. For an initial choice of p points of support, spread the p points over the lattice and calculate the determinant of gᵀΣ⁻¹g. Then, starting with the first point of support, do the following. Place the ith point of support on each lattice point between the (i-1)th and the (i+1)th points of support, successively, and calculate gᵀΣ⁻¹g for each of the designs so obtained; keep the one with the largest determinant. Do this for all p points of support. After sweeping through all points of support on the lattice, repeat the process. That method converges fairly rapidly.

Two programs are available that calculate D-optimal sampling designs for multiple input and multiple output experiments, OSSMIMO (26) and OPTIM; the latter is an extension of the IDENT programs (16). Both use variants of the numerical method just described and both handle multiple inputs and multiple outputs. OSSMIMO is written in FORTRAN whereas OPTIM is written in C.

Model distinguishability

A theoretical problem related to identifiability is that of constructing all models that have different structures from a given or candidate model but have the same input-output response over some class of admissible inputs for a given input-output experiment (27-30). The idea is to find all models that could not be distinguished by the experiment under consideration. In an area such as pharmacokinetics or physiological systems modeling, however, there are usually only a few competing models.
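The lattice sweep just described can be sketched as follows for a biexponential observation function with unit error variances (Σ = I). This is only an illustrative implementation under those assumptions; the function names and parameter values are made up here and this is not the OSSMIMO or OPTIM code.

```python
import numpy as np

def sensitivities(t, p):
    """dy/dp for y(t) = A1*exp(l1*t) + A2*exp(l2*t), p = (A1, l1, A2, l2)."""
    A1, l1, A2, l2 = p
    return np.array([np.exp(l1*t), A1*t*np.exp(l1*t),
                     np.exp(l2*t), A2*t*np.exp(l2*t)])

def det_info(times, p):
    g = np.array([sensitivities(t, p) for t in times])  # N x p sensitivity matrix
    return np.linalg.det(g.T @ g)                       # Sigma = I for simplicity

def d_optimal(lattice, p, n_support, sweeps=20):
    """Move each support point over the lattice between its neighbours,
    keeping the position with the largest determinant; repeat the sweep."""
    n = len(lattice)
    idx = np.linspace(0, n - 1, n_support).astype(int)  # initial spread
    for _ in range(sweeps):
        for i in range(n_support):
            lo = idx[i - 1] + 1 if i > 0 else 0
            hi = idx[i + 1] if i < n_support - 1 else n
            idx[i] = max(range(lo, hi),
                         key=lambda j: det_info(
                             lattice[np.sort(np.r_[idx[:i], j, idx[i+1:]])], p))
    return lattice[idx]

p = (1.0, -0.5, 1.0, -3.0)                  # illustrative parameter estimates
T = 10.0
lattice = 0.05 * (T / 0.05) ** (np.arange(121) / 120)  # geometric lattice on (0, T]
design = d_optimal(lattice, p, n_support=4)
```

Each move can only increase (or keep) the determinant, which is why the sweep converges; with a geometric lattice the early support points can land at the short times the fast exponential requires.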

These are the models that are plausible in terms of what is known of the anatomical structure and of the basic biochemical and physiological mechanisms at work in the system; often there are only two main models. Thus an important issue for the experimenter is: given that there are two or three competing models and a few experiments that can be done, can one or more of the feasible experiments distinguish between the models, and if more than one can, which is the best? The basic idea is to compare the input-output responses of the models for a particular experiment, and to do that for all feasible experiments. That is an iterative process whose basic unit of iteration is to compare the input-output responses of two models for a particular experiment. Finally, pick the experiment that gives the greatest difference between the input-output responses of the models.

An example

To illustrate the many aspects of the design of experiments that have been covered, a simple but realistic example will now be given (31). Many metabolites and drugs equilibrate slowly enough between blood and the interstitial fluid in the organs that the amount in the blood plasma acts as one compartment and the amount in the interstitial spaces acts as another. Suppose we have such a material but we do not know whether or not it enters cells and is metabolized there. There are two possible compartmental models for this system; they are shown in Fig. 2(a) and (b). In both, node 1 is the plasma compartment and node 2 is the interstitial compartment, and the transfer coefficients are constants. In both models k01 is the coefficient for excretion by way of the urine; in Fig. 2(b), k02 is the coefficient for uptake by cells and metabolic conversion to some other material. We want to choose the model in Fig. 2(a) if k02 is zero or if it is so small that the rate of removal by this pathway is not detectable within the errors of measurement. On the other hand, if entry into cells is significant, we want to choose the system model in Fig. 2(b).

Fig. 2. Two possible compartmental models.

The system models

For Fig. 2(a), the equations for the system model are

dq1/dt = -(k01 + k21)q1 + k12q2, (20)

dq2/dt = k21q1 - k12q2, (21)

where qi is the total amount in compartment i.
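Equations (20)-(21) are linear with constant coefficients, so for any initial condition the solution is q(t) = V exp(Λt) V⁻¹ q(0), from the eigendecomposition of the system matrix. A minimal simulation sketch for a unit bolus; the rate constants are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative rate constants (assumed values)
k01, k12, k21 = 0.5, 0.3, 0.7

# Eqs (20)-(21) in matrix form: dq/dt = A q
A = np.array([[-(k01 + k21), k12],
              [k21,          -k12]])

def q(t, q0=(1.0, 0.0)):
    """q(t) = V exp(Lambda t) V^{-1} q0 via the eigendecomposition of A."""
    lam, V = np.linalg.eig(A)
    return np.real(V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ np.asarray(q0))
```

Because material leaves only through k01, the total amount q1 + q2 decays monotonically from the initial unit dose.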

and q2. (21). Models of process and models of data The equation sets {Eqs (20). Two experiment models. i. which is experiment 1 on the system model in Fig. are Eqs (20))(21) plus Eqs (24)-(25). (24) (25)) and {Eqs (22). Figure 3 shows the experiment models. I/. we find that for both experiment models the observation function is of the same form . The equations for the experiment models are Eqs (20)-(21) and Eqs (22)-(23) plus equations for the inputs and the observations. The equations for the experiment model in Fig.e.(t) = q. The observation function is the concentration y.e. is the volume of distribution in the plasma. (23). Notice that here is an example of a basic parameter.. and measuring the concentration in compartment 1. solve these equations to obtain The experiment models Now we define an experiment. the observation function. If we had initial conditions on q. Jacquez (a) W Fig. 3. which is experiment 1 on the system model in Fig. i. we could the time courses of q.9. an IV injection of a bolus at t = 0. 2(a). Thus the equations for the experiment model in Fig. i. 4.(O)= 1. A. 2(b) are Eqs (22)-(23) plus Eqs (24)-(25).e. (22) (23) -v%2+~12h. and q2. (24) in compartment I 1 (25) where V. that is introduced by the experimental design. If we solve the equations. (24) (25)) are models of process because they describe the basic processes going on in experiment 1. 3(b). The heavy arrows going into compartments 1 represent the inputs and the heavy arrows coming out of compartments 1 represent the observations (outputs in engineering terminology). experiment 1. 3(a). 42 = k2. The unit impulsive inputs are given by the initial conditions they give. which consists of putting a unit impulse into compartment 1 at t = 0.274 J. 41= -(kx+~2h+k2.

y1(t) = A1e^(λ1t) + A2e^(λ2t). (26)

If we try to just fit the data with an equation of the form of Eq (26), we have a model of data; in this case the model of data is derived from models of process. If we did not know the models of process and just looked at the data, over a limited range it looks as if the data could be fitted with a polynomial in t. That too would be a model of data, but one not derived from a model of process.

Identifiability

Let us use the Laplace transform method. First, for the experiment model in Fig. 3(a), take Laplace transforms of Eqs (20)-(21) and solve for Q1, the transform of q1. That gives

Q1 = (s + k12)/(s^2 + (k01 + k12 + k21)s + k01k12), (27)

so the Laplace transform of the observation function is

Y1 = (s/V1 + k12/V1)/(s^2 + (k01 + k12 + k21)s + k01k12). (28)

That gives four observational parameters,

φ1 = 1/V1, φ2 = k12/V1, φ3 = k01 + k12 + k21, φ4 = k01k12. (29)

The set of Eq (29) has unique solutions for V1, k01, k12 and k21 in terms of φ1, ..., φ4, so all of the basic parameters are globally (uniquely) identifiable.

If we do the same for the experiment model in Fig. 3(b), we obtain

Q1 = (s + k02 + k12)/(s^2 + (k01 + k02 + k12 + k21)s + k01k02 + k01k12 + k02k21), (30)

so the Laplace transform of the observation function becomes

Y1 = (s/V1 + (k02 + k12)/V1)/(s^2 + (k01 + k02 + k12 + k21)s + k01k02 + k01k12 + k02k21). (31)

That again gives four observational parameters,

φ1 = 1/V1, φ2 = (k02 + k12)/V1, φ3 = k01 + k02 + k12 + k21, φ4 = k01k02 + k01k12 + k02k21, (32)

but these are functions of five basic parameters. Now only V1 is uniquely identifiable, and none of the four basic kinetic parameters is identifiable.
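The uniqueness of the solution of Eq (29) can be checked with a computer algebra system. A sketch using SymPy (the use of SymPy here is an assumption of this illustration, not a tool named in the text): solving the four observational-parameter equations for the four basic parameters of the Fig. 3(a) model should return exactly one solution, which is what global identifiability means.

```python
import sympy as sp

V1, k01, k12, k21 = sp.symbols('V1 k01 k12 k21', positive=True)
phi1, phi2, phi3, phi4 = sp.symbols('phi1 phi2 phi3 phi4', positive=True)

# Observational parameters of Eq (29), experiment model of Fig. 3(a)
eqs = [sp.Eq(phi1, 1/V1),
       sp.Eq(phi2, k12/V1),
       sp.Eq(phi3, k01 + k12 + k21),
       sp.Eq(phi4, k01*k12)]

sol = sp.solve(eqs, [V1, k01, k12, k21], dict=True)
# A single solution dict means the model is globally (uniquely) identifiable
```

Running the same check on Eq (32) would pose four equations in five unknowns, so no unique solution exists, in agreement with the unidentifiability found above.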

Which model?

It is important to realize that up to this point there has been no reason to do any experiments in the laboratory: analyses of identifiability and model distinguishability are done on the experiment models.

Model distinguishability

For experiment 1, the observation function is a double exponential decay, Eq (26), for both system models in Fig. 2(a) and 2(b), so there is no way that experiment 1 can distinguish between the two possible system models. Moreover, for the model in Fig. 2(b) the basic kinetic parameters cannot be estimated because they are not identifiable by experiment 1.

Can one modify the experiment to make all of the kinetic parameters of the system model in Fig. 2(b) identifiable? Practically, one cannot sample compartment 2 or the outflow from it. However, if the outflow from compartment 1 is all by way of the urinary output, one can collect the urine and measure the amount excreted as a function of time. In modeling terms, that means we add a compartment, compartment 3, that collects the outflow from compartment 1. The new experiment, experiment 2, is then the same as experiment 1 with the addition of measurement of the amount in compartment 3; Fig. 4 is the diagram for this new experiment.

Fig. 4. Experiment models as in Fig. 3 but with an additional compartment 3.

That means the system model has another equation, the equation for q3,

dq3/dt = k01q1, (33)

and another observation function has to be added to the equations for the experiment model,

y2 = q3. (34)

If the impulsive input into compartment 1 at t = 0 is 1 unit of material, the Laplace transform of y2 is

Y2 = Q3 = k01Q1/s. (35)

Thus we have another observational parameter to add to Eqs (29) and (32). With that change in the experimental design all of the kinetic parameters of the system model in Fig. 2(b) become uniquely identifiable. Furthermore, experiment 2 can distinguish between the two system models: y2(t) approaches 1 in the limit for the system model in Fig. 2(a), but the limit must be less than 1 for the system model in Fig. 2(b). The conclusion then is that if, within the error of measurement of y2(t), the limit of y2(t) is 1, the system model in Fig. 2(a) is correct; if the limit is clearly less than 1, the system model in Fig. 2(b) is correct. So we choose to do experiment 2.
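The distinguishing limit follows from Eqs (30) and (35) by the final value theorem: y2(∞) = lim s·Y2(s) as s → 0, which gives k01(k02 + k12)/(k01k02 + k01k12 + k02k21). A small sketch with illustrative rate constants (assumed values, not from the text):

```python
def y2_limit(k01, k12, k21, k02):
    """y2(inf) = lim_{s->0} s*Y2(s) = k01*Q1(s=0) for a unit bolus,
    using Q1 from Eq (30); equals 1 when k02 = 0, i.e. the model of Fig. 2(a)."""
    return k01 * (k02 + k12) / (k01*k02 + k01*k12 + k02*k21)

# Illustrative rate constants (assumed values)
frac_a = y2_limit(0.5, 0.3, 0.7, 0.0)  # model 2(a): all material excreted
frac_b = y2_limit(0.5, 0.3, 0.7, 0.4)  # model 2(b): some material metabolized
```

The limit is the fraction of the dose eventually recovered in the urine, so any detectable shortfall from 1 is evidence for the cellular uptake pathway.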

Optimal sampling schedule and estimability

Qualitative considerations of estimability tell us we need to take at least four samples to be able to estimate the four parameters of the experiment model in Fig. 4(a). To decrease the estimation errors, however, one needs to take more. Notice that the concentration in compartment 1, y1(t), follows a double exponential decay, and y2(t) is also a double exponential with the same exponential terms as in y1(t) but with different coefficients. If the measurement error is, say, 2%, then after some time, when the curve has fallen to a low level and is fairly flat, there is little point in taking more samples. Considerations of experimental technique, i.e. how many samples can be handled without degrading the sampling technique, also set an upper limit.

Other information needed is the variance of the measurement error, i.e. the variances of the ei of Eq (13). Unfortunately, the variances of the measurement errors are usually not known beforehand. Previous experience with the experimental technique might provide estimates; if not, replicates plus the residuals from the fits in the preliminary experiment(s) can be used to estimate the variance. Preliminary experiments may also have to be done in order to decide which is the better model, to determine how far out in time to take samples in an optimal experiment design, i.e. to estimate T, and to obtain preliminary estimates of the parameters. Recall that model distinguishability depends on taking y2 out far enough so one can estimate whether or not it approaches 1 in the limit, so one would choose T large enough to decide whether or not y2(t) is approaching 1 as t increases.

Assume that the experiments have been done and it turns out that, within the error of measurement of y2(t), the system model in Fig. 2(a) is correct, and that we have an estimate of T. If the model in Fig. 2(a) is correct, all parameters are identifiable from experiment 1, so why not fall back to the easier experiment, that of the experiment model in Fig. 3(a)? There is good reason not to do that! Experiment 2 gives independent estimates of both k01 and V1, and we would find that the correlations between the estimates of the parameters would be much less for experiment 2. The better decision is to continue to do experiment 2.

With the necessary preliminary information available, we use OPTIM or OSSMIMO to come up with an optimal sampling design, which will be optimal for the preliminary estimates of the parameters. Suppose we decide we can easily handle 16 samples over the period T. The calculated design will almost always place four samples, as near as makes no difference, at each of the four points of support of the design. Technically that is usually not a practical design because of the difficulty of taking four independent samples simultaneously. However, optimal designs are rarely sharply optimal, and a sampling design close to an optimal one is close to optimal. So the answer is to group four sample points around each optimal point of support, leaving enough time between successive samples so as not to degrade the technique by hasty work. That has an additional advantage. Suppose later work shows that there are really two peripheral compartments, one of which was too small and/or equilibrated so slowly with the plasma compartment that it was not picked up in earlier work. Then the extra points in the design, not all on the points of support of the optimal design based on the experiment model in Fig. 4(a), help to obtain preliminary estimates for the enlarged model.

Finally, the experiment is run with the optimal design and the parameters are estimated.
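The grouping of four samples around each of four support points can be sketched as follows; the support times and the ±5% spread are illustrative assumptions, not values from the text:

```python
import numpy as np

def clustered_design(support, n_per_point, spread=0.05):
    """Place n_per_point samples at small fractional offsets (+/- spread)
    around each support point, since simultaneous replicates are impractical."""
    offsets = np.linspace(-spread, spread, n_per_point)
    return np.sort(np.concatenate([t * (1.0 + offsets) for t in support]))

# Hypothetical points of support of a D-optimal design (assumed values)
support = [0.1, 0.5, 2.0, 8.0]
design = clustered_design(support, 4)
```

A fractional (rather than absolute) spread keeps the early, closely spaced support points from overlapping while still leaving workable gaps between successive samples.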

If the estimates are somewhat different from the preliminary estimates and/or if they have unacceptably large variances, repeat the process. In updating the estimates in this sequential process, take into account all estimates obtained, using the variances of the estimates as weights; a Bayesian approach to updating could be used.

Discussion and conclusion

Many problems in pharmacokinetics and in the study of metabolism require that one estimate the parameters of the kinetic processes involved. The design of experiments to do that involves formally modeling the system under investigation and the experiments that one proposes to do on the system. Identifiability of parameters, model distinguishability and the generation of optimal sampling schedules are the keystones of this approach. It should be stressed that checking parameter identifiability and model distinguishability does not require that one do any experiments; those checks are done on the experiment models. However, the generation of optimal sampling schedules requires some preliminary estimates of the parameters, for which preliminary experiments may have to be done. After that, generation of optimal sampling schedules and running the experiment are done iteratively to obtain the parameter estimates.

The gains from this approach to experimentation should be clear. It conserves the resources required for experimentation; experiments are costly! Far fewer experiments need to be done because the experiments that are done are more efficient. Furthermore, it minimizes the use of animals in experiments, and that is becoming an ever stronger consideration as the opposition mounts to the use of animals in research.

References

(1) Popper, K., The Logic of Scientific Discovery. Harper and Row, New York, 1968.
(2) Fisher, R. A., The Design of Experiments, 5th edn. Hafner Publishing, New York, 1949.
(3) Platt, J. R., "Strong inference". Science, Vol. 146, 1964, pp. 347-353.
(4) DiStefano, J. J., III and Landaw, E. M., "Multiexponential, multicompartmental, and noncompartmental modeling. Methodological limitations and physiological interpretations". Am. J. Physiol., Vol. 246, 1984, pp. R651-R664.
(5) Gill, J. L., Design and Analysis of Experiments, Vols 1-3. The Iowa State University Press, Ames, 1978.
(6) Jacquez, J. A., Compartmental Analysis in Biology and Medicine, 3rd edn. BioMedware, Ann Arbor, MI, 1996.
(7) Jacquez, J. A. and Simon, C. P., "Qualitative theory of compartmental systems". SIAM Rev., Vol. 35, 1993, pp. 43-79.
(8) Jacquez, J. A., "Identifiability: the first step in parameter estimation". Fed. Proc., Vol. 46, 1987, pp. 2477-2480.
(9) Walter, E. (ed.), Identifiability of Parametric Models. Pergamon, Oxford, 1987.
(10) DiStefano, J. J., III, "Complete parameter bounds and quasiidentifiability conditions for a class of unidentifiable linear systems". Math. Biosciences, Vol. 65, 1983, pp. 51-68.
(11) Carson, E. R., Cobelli, C. and Finkelstein, L., The Mathematical Modeling of Metabolic and Endocrine Systems. John Wiley & Sons, New York, 1983.
(12) Godfrey, K., Compartmental Models and Their Application. Academic Press, New York, 1983.

(13) Saccomani, M. P., Audoly, S., D'Angio, L. and Cobelli, C., "PRIDE: a program to test a priori global identifiability of linear compartmental models". In Proc. SYSID 94, 10th IFAC Symposium on System Identification, Vol. 3, ed. M. Blanke and T. Soderstrom. Danish Automation Society, Copenhagen, 1994, pp. 25-30.
(14) Pohjanpalo, H., "System identifiability based on the power-series expansion of the solution". Math. Biosciences, Vol. 41, 1978, pp. 21-33.
(15) Vajda, S., Godfrey, K. and Rabitz, H., "Similarity transformation approach to identifiability analysis of nonlinear compartmental models". Math. Biosciences, Vol. 93, 1989, pp. 217-248.
(16) Jacquez, J. A. and Perry, T., "Parameter estimation: local identifiability of parameters". Am. J. Physiol., Vol. 258, 1990, pp. E727-E736.
(17) Jacquez, J. A. and Greif, P., "Numerical parameter identifiability and estimability: integrating identifiability, estimability, and optimal sampling design". Math. Biosciences, Vol. 77, 1985, pp. 201-227.
(18) DiStefano, J. J., III, "Optimized blood sampling protocols and sequential design of kinetic experiments". Am. J. Physiol., Vol. 240, 1981, pp. R259-R265.
(19) D'Argenio, D. Z., "Optimal sampling times for pharmacokinetic experiments". J. Pharmacokin. Biopharm., Vol. 9, 1981, pp. 739-756.
(20) DiStefano, J. J., III, "Design and optimization of tracer experiments in physiology and medicine". Fed. Proc., Vol. 39, 1980, pp. 84-90.
(21) Landaw, E. M., "Optimal design for individual parameter estimation in pharmacokinetics". In Variability in Drug Therapy: Description, Estimation, and Control, ed. M. Rowland et al. Raven Press, New York, 1985, pp. 187-200.
(22) Fedorov, V. V., Theory of Optimal Experiments. Academic Press, New York, 1972.
(23) Landaw, E. M., Optimal experimental design for biologic compartmental systems with applications to pharmacokinetics. Ph.D. Thesis, UCLA, 1980.
(24) Walter, E. and Pronzato, L., "Qualitative and quantitative experiment design for phenomenological models: a survey". Automatica, Vol. 26, 1990, pp. 195-213.
(25) Nathanson, M. H. and Saidel, G. M., "Multiple-objective criteria for optimal experimental design: application to ferrokinetics". Am. J. Physiol., Vol. 248, 1985, pp. R378-R386.
(26) Cobelli, C., Ruggeri, A., DiStefano, J. J., III and Landaw, E. M., "Optimal design of multioutput sampling schedules: software and applications to endocrine-metabolic and pharmacokinetic models". IEEE Trans. Biomed. Engng, Vol. BME-32, 1985, pp. 249-256.
(27) Chapman, M. J. and Godfrey, K. R., "A methodology for compartmental model indistinguishability". Math. Biosciences, Vol. 96, 1989, pp. 141-164.
(28) Chapman, M. J., Godfrey, K. R. and Vajda, S., "Indistinguishability for a class of nonlinear compartmental models". Math. Biosciences, Vol. 119, 1994, pp. 77-95.
(29) Raksanyi, A., LeCourtier, Y., Walter, E. and Venot, A., "Identifiability and distinguishability testing via computer algebra". Math. Biosciences, Vol. 77, 1985, pp. 245-266.
(30) Zhang, L.-Q., Collins, J. C. and King, P. H., "Indistinguishability and identifiability analysis of linear compartmental models". Math. Biosciences, Vol. 103, 1991, pp. 77-95.
(31) Linares, O. A., Jacquez, J. A., Zech, L. A., Sattier, et al., "Norepinephrine metabolism in humans". J. Clin. Invest., Vol. 80, 1987, pp. 1332-1341.