Shuhua Hu
Center for Research in Scientific Computation
North Carolina State University
Raleigh, NC
March 15, 2007
Background
Model
statistical model: X = h(t; q) + ε
h: mathematical model such as an ODE model, PDE model, algebraic model, etc.
ε: random variable with some probability distribution such as the normal distribution.
X is a random variable.
Under the assumption of ε being i.i.d. N(0, σ²), we have the
probability distribution model: g(x|θ) = (1/(√(2π)σ)) exp(−(x − h(t; q))²/(2σ²)), where θ = (q, σ).
g: probability density function of x depending on the parameter θ.
θ includes the mathematical model parameter q and the statistical model parameter σ.
Risk
Modeling error (in terms of uncertainty assumption)
Specified an inappropriate parametric probability distribution for the data at hand.
Estimation error
‖θ̂ − θ₀‖² = ‖θ₀ − θₖ‖² (bias) + ‖θ̂ − θₖ‖² (variance)
θ₀: parameter vector for the full reality model.
θₖ: the projection of θ₀ onto the parameter space Θₖ of the approximating model.
The (suitably scaled) variance term is asymptotically χ²ₖ distributed, where E(χ²ₖ) = k.
Principle of Parsimony (with same data set)
Akaike Information Criterion
Kullback-Leibler Information
Information lost when an approximating model is used to approximate the full reality.
Continuous Case
I(f, g(·|θ)) = ∫ f(x) log( f(x)/g(x|θ) ) dx
             = ∫ f(x) log(f(x)) dx − ∫ f(x) log(g(x|θ)) dx,
where the second term is the relative KL information.
f: full reality or truth in terms of a probability distribution.
g: approximating model in terms of a probability distribution.
θ: parameter vector in the approximating model g.
Remark
I(f, g) ≥ 0, with I(f, g) = 0 if and only if f = g almost everywhere.
I(f, g) ≠ I(g, f), which implies KL information is not a true distance.
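These two properties can be checked numerically when both densities are normal, where the KL information has a well-known closed form; the parameter values below are purely illustrative:

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    # Closed-form I(f, g) when f = N(mu1, s1^2) and g = N(mu2, s2^2)
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# I(f, g) = 0 if and only if f = g
print(kl_normal(0.0, 1.0, 0.0, 1.0))  # 0.0
# I(f, g) != I(g, f): KL information is not a true distance
print(kl_normal(0.0, 1.0, 1.0, 2.0))  # ~0.443
print(kl_normal(1.0, 2.0, 0.0, 1.0))  # ~1.307
```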
Akaike Information Criterion (1973)
Motivation
The truth f is unknown.
The parameter θ in g must be estimated from the empirical data y.
Data y is generated from f(x), i.e., a realization of the random variable X.
θ̂(y): estimator of θ. It is a random variable, and so is I(f, g(·|θ̂(y))).
We therefore consider the expected KL information
E_y[I(f, g(·|θ̂(y)))] = ∫ f(x) log(f(x)) dx − ∫ f(y) [ ∫ f(x) log(g(x|θ̂(y))) dx ] dy,
where the second term is E_y E_x[log(g(x|θ̂(y)))].
G: collection of admissible models (in terms of probability density functions).
θ̂ is the maximum likelihood estimate based on model g and data y.
y is the random sample from the density function f(x).
Model Selection Criterion
Maximizing over g ∈ G:  E_y E_x[log(g(x|θ̂(y)))]
Key Result
An approximately unbiased estimate of E_y E_x[log(g(x|θ̂(y)))] is  log(L(θ̂|y)) − k.
L: likelihood function.
θ̂: maximum likelihood estimate of θ.
k: number of estimated parameters (including the variance).
Remark
Good model: a model that is close to f in the sense of having a small KL value.
Maximum Likelihood Case
AIC = −2 log(L(θ̂|y)) + 2k,
where −2 log(L(θ̂|y)) is the bias (lack-of-fit) term and 2k is the variance (penalty) term.
Calculate the AIC value for each model with the same data set; the best model is the one with the minimum AIC value.
The value of AIC depends on the data y, which leads to model selection uncertainty.
Least-Squares Case
Assumption: i.i.d. normally distributed errors
AIC = n log(RSS/n) + 2k
RSS is the residual sum of squares of the fitted model.
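The least-squares form can be applied directly once RSS, n, and k are known for each candidate model fitted to the same data set. A minimal sketch, with hypothetical RSS and k values:

```python
import math

def aic_ls(rss, n, k):
    # AIC for a least-squares fit with i.i.d. normal errors;
    # k counts all estimated parameters, including the error variance.
    return n * math.log(rss / n) + 2 * k

# Hypothetical fits of two models to the same n = 50 data points
n = 50
aic_line = aic_ls(rss=12.0, n=n, k=3)   # slope, intercept, variance
aic_cubic = aic_ls(rss=11.5, n=n, k=5)  # four coefficients, variance
best = min(("line", aic_line), ("cubic", aic_cubic), key=lambda t: t[1])
print(best[0])  # the cubic's small RSS gain does not offset its larger penalty here
```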
Takeuchi's Information Criterion (1976)
Useful in cases where the model is not particularly close to the truth.
Model Selection Criterion
Maximizing over g ∈ G:  E_y E_x[log(g(x|θ̂(y)))]
Key Result
An approximately unbiased estimator of E_y E_x[log(g(x|θ̂(y)))] is  log(L(θ̂|y)) − tr(J(θ₀)I(θ₀)⁻¹), where
J(θ₀) = E_f[ (∂ log(g(x|θ))/∂θ)(∂ log(g(x|θ))/∂θ)ᵀ ] evaluated at θ = θ₀,
I(θ₀) = −E_f[ ∂² log(g(x|θ))/∂θᵢ∂θⱼ ] evaluated at θ = θ₀.
Remark
If g ≡ f, then I(θ₀) = J(θ₀); hence tr(J(θ₀)I(θ₀)⁻¹) = k.
If g is close to f, then tr(J(θ₀)I(θ₀)⁻¹) ≈ k.
TIC
TIC = −2 log(L(θ̂|y)) + 2 tr(Ĵ(θ̂)[Î(θ̂)]⁻¹),
where Î(θ̂) and Ĵ(θ̂) are given by
Î(θ̂) = −∂² log(g(x|θ))/∂θ² evaluated at θ = θ̂   (estimate of I(θ₀)),
Ĵ(θ̂) = Σ_{i=1}^n (∂ log(g(xᵢ|θ))/∂θ)(∂ log(g(xᵢ|θ))/∂θ)ᵀ evaluated at θ = θ̂   (estimate of J(θ₀)).
Remark
Attractive in theory.
Rarely used in practice because a very large sample size is needed to obtain good estimates of both I(θ₀) and J(θ₀).
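To make the trace term concrete, the sketch below estimates tr(Ĵ(θ̂)Î(θ̂)⁻¹) for a univariate normal model N(μ, σ²), using its analytic score and Hessian; when the data really are normal, the trace should come out near k = 2. This is an illustrative sketch, not a general-purpose implementation:

```python
import math, random

def tic_trace_normal(xs):
    # Estimate tr(J I^-1) for a normal model fitted by maximum likelihood.
    n = len(xs)
    mu = sum(xs) / n
    s = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)  # MLE of sigma
    # Accumulate the outer-product (J) and negative-Hessian (I) estimates
    J = [[0.0, 0.0], [0.0, 0.0]]
    I = [[0.0, 0.0], [0.0, 0.0]]
    for x in xs:
        z = x - mu
        score = [z / s**2, -1.0 / s + z**2 / s**3]
        hess = [[-1.0 / s**2, -2.0 * z / s**3],
                [-2.0 * z / s**3, 1.0 / s**2 - 3.0 * z**2 / s**4]]
        for a in range(2):
            for b in range(2):
                J[a][b] += score[a] * score[b]
                I[a][b] -= hess[a][b]
    # Invert the 2x2 matrix I and take tr(J I^-1)
    det = I[0][0] * I[1][1] - I[0][1] * I[1][0]
    Iinv = [[I[1][1] / det, -I[0][1] / det], [-I[1][0] / det, I[0][0] / det]]
    return sum(J[a][b] * Iinv[b][a] for a in range(2) for b in range(2))

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(5000)]
print(tic_trace_normal(xs))  # close to k = 2 when the model matches the truth
```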
A Small-Sample AIC
Use in the case where the sample size is small relative to the number of parameters.
Rule of thumb: n/k < 40.
Univariate Case
Assumption: i.i.d normal error distribution with the truth contained in the model set.
AICc = AIC + 2k(k + 1)/(n − k − 1),
where the added term is a bias correction.
Remark
The bias-correction term varies by type of model (e.g., normal, exponential, Poisson).
In practice, AICc is generally suitable unless the underlying probability distribution is extremely nonnormal, especially in terms of being strongly skewed.
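A minimal sketch of the univariate correction, using a hypothetical AIC value:

```python
def aicc(aic, n, k):
    # Small-sample corrected AIC (univariate, i.i.d. normal errors)
    return aic + 2 * k * (k + 1) / (n - k - 1)

# With n = 30 and k = 5 (n/k < 40), the correction is noticeable:
print(aicc(100.0, n=30, k=5))  # 100 + 60/24 = 102.5
# As n grows, AICc converges back to AIC:
print(aicc(100.0, n=3000, k=5))
```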
Multivariate Case
Assumption: each row of ε is i.i.d. N(0, Σ).
AICc = AIC + 2k̃(k̃ + 1 + p)/(n − k̃ − 1 − p)
Applying to the multivariate case:
Y = TB + ε, where Y ∈ R^(n×p), T ∈ R^(n×k), B ∈ R^(k×p).
p: total number of components.
n: number of independent multivariate observations, each with p nonindependent components.
k̃: total number of unknown parameters, with k̃ = kp + p(p + 1)/2.
Remark
Bedrick and Tsai claimed in [1] that this result can be extended to the multivariate nonlinear regression model.
AIC Differences, Likelihood of a Model, Akaike Weights
AIC differences
Information loss when the fitted model is used rather than the best approximating model:
Δᵢ = AICᵢ − AIC_min
AIC_min: the AIC value of the best model in the set.
Likelihood of a Model
Useful in making inference concerning the relative strength of evidence for each of the
models in the set
L(gᵢ|y) ∝ exp(−Δᵢ/2), where ∝ means "is proportional to".
Akaike Weights
Weight of evidence in favor of model i being the best approximating model in the set
wᵢ = exp(−Δᵢ/2) / Σ_{r=1}^R exp(−Δᵣ/2)
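A minimal sketch of the Δᵢ and weight computations, for three hypothetical AIC values from models fitted to the same data set:

```python
import math

def akaike_weights(aics):
    # Akaike weights w_i from a list of AIC values for R candidate models
    amin = min(aics)
    deltas = [a - amin for a in aics]
    raw = [math.exp(-d / 2) for d in deltas]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical models; the weights sum to 1 and favor the min-AIC model
w = akaike_weights([204.2, 206.7, 213.0])
print([round(x, 3) for x in w])
```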
Confidence Set for the KL Best Model
Three Heuristic Approaches (see [4])
Based on the Akaike weights wᵢ
To obtain a 95% confidence set on the actual KL best model, sum the Akaike weights from largest to smallest until the sum just reaches 0.95; the corresponding subset of models is the confidence set on the KL best model.
Based on the AIC differences Δᵢ
0 ≤ Δᵢ ≤ 2: substantial support,
4 ≤ Δᵢ ≤ 7: considerably less support,
Δᵢ > 10: essentially no support.
Remark
Particularly useful for nested models; may break down when the model set is large.
The guideline values may be somewhat larger for nonnested models.
Motivated by likelihood-based inference
The confidence set of models is all models for which the ratio L(gᵢ|y)/L(g_min|y) > α, where α might be chosen as 1/8.
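The weight-summation approach above can be sketched directly; the weights below are hypothetical:

```python
def confidence_set(weights, level=0.95):
    # Indices of models in the heuristic confidence set for the KL best model,
    # obtained by summing Akaike weights from largest to smallest.
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    chosen, total = [], 0.0
    for i in order:
        chosen.append(i)
        total += weights[i]
        if total >= level:
            break
    return chosen

print(confidence_set([0.77, 0.22, 0.01]))  # [0, 1]
```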
Multimodel Inference
Unconditional Variance Estimator
var̂(θ̄) = [ Σ_{i=1}^R wᵢ √( var̂(θ̂ᵢ|gᵢ) + (θ̂ᵢ − θ̄)² ) ]²
θ: a parameter in common to all R models.
θ̂ᵢ: the estimate of θ based on model gᵢ.
θ̄: the model-averaged estimate, θ̄ = Σ_{i=1}^R wᵢ θ̂ᵢ.
Remark
"Unconditional" means not conditional on any particular model, but still conditional on the full set of models considered.
If θ is a parameter in common to only a subset of the R models, then the wᵢ must be recalculated based on just these models (thus the new weights must satisfy Σ wᵢ = 1).
Use the unconditional variance unless the selected model is strongly supported (for example, w_min > 0.9).
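A sketch of model averaging with the unconditional standard error; the per-model estimates, standard errors, and weights below are hypothetical:

```python
import math

def model_average(estimates, ses, weights):
    # Model-averaged estimate and unconditional standard error for a
    # parameter common to all models, with weights summing to 1.
    theta_bar = sum(w * t for w, t in zip(weights, estimates))
    se_u = sum(w * math.sqrt(se**2 + (t - theta_bar) ** 2)
               for w, t, se in zip(weights, estimates, ses))
    return theta_bar, se_u

# Hypothetical estimates of the same parameter under three models
theta_bar, se_u = model_average([1.10, 1.25, 0.90], [0.10, 0.12, 0.15],
                                [0.77, 0.22, 0.01])
print(round(theta_bar, 3), round(se_u, 3))
```

The unconditional standard error exceeds any single model's conditional standard error whenever the estimates disagree across models, which is exactly the model selection uncertainty the estimator is meant to capture.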
Summary of Akaike Information Criteria
Advantages
Valid for both nested and nonnested models.
Compare models with different error distributions.
Avoid multiple testing issues.
Selected Model
The model with minimum AIC value.
Specific to the given data set.
Pitfalls in Using Akaike Information Criteria
Cannot be used to compare models fitted to different data sets.
For example, if a nonlinear regression model g₁ is fitted to a data set with n = 140 observations, one cannot validly compare it with model g₂ when 7 outliers have been deleted, leaving only n = 133.
Should use the same response variables for all the candidate models.
For example, if there were interest in the normal and lognormal model forms, the models would have to be expressed, respectively, as
g₁(x|θ) = (1/(√(2π)σ)) exp(−(x − μ)²/(2σ²)),   g₂(x|θ) = (1/(x√(2π)σ)) exp(−(log(x) − μ)²/(2σ²)),
instead of
g₁(x|θ) = (1/(√(2π)σ)) exp(−(x − μ)²/(2σ²)),   g₂(log(x)|θ) = (1/(√(2π)σ)) exp(−(log(x) − μ)²/(2σ²)).
Do not mix null hypothesis testing with information criteria.
An information criterion is not a test, so avoid the use of "significant" and "not significant", or "rejected" and "not rejected", in reporting results.
Do not use AIC to rank models in the set and then test whether the best model is significantly better than the second-best model.
Should retain all the components of each likelihood when comparing different probability distributions.
References
[1] E.J. Bedrick and C.L. Tsai, Model Selection for Multivariate Regression in Small Samples, Biometrics, 50 (1994), 226-231.
[2] H. Bozdogan, Model Selection and Akaike's Information Criterion (AIC): The General Theory and Its Analytical Extensions, Psychometrika, 52 (1987), 345-370.
[3] H. Bozdogan, Akaike's Information Criterion and Recent Developments in Information Complexity, Journal of Mathematical Psychology, 44 (2000), 62-91.
[4] K.P. Burnham and D.R. Anderson, Model Selection and Inference: A Practical Information-Theoretic Approach, New York: Springer-Verlag, 1998.
[5] K.P. Burnham and D.R. Anderson, Multimodel Inference: Understanding AIC and BIC in Model Selection, Sociological Methods and Research, 33 (2004), 261-304.
[6] C.M. Hurvich and C.L. Tsai, Regression and Time Series Model Selection in Small Samples, Biometrika, 76 (1989).