Rajen D. Shah
r.shah@statslab.cam.ac.uk
Course webpage:
http://www.statslab.cam.ac.uk/~rds37/modern_stat_methods.html
In this course we will study a selection of important modern statistical methods. This
selection is heavily biased towards my own interests, but I hope it will nevertheless give you
a flavour of some of the most important recent methodological developments in statistics.
Over the last 25 years, the sorts of datasets that statisticians have been challenged to study have changed greatly. Where in the past we were used to datasets with many observations and a few carefully chosen variables, we now often see datasets where the number of variables can run into the thousands and greatly exceed the number of observations. For example, with microarray data we typically have gene expression values measured for several thousand genes, but only for a few hundred tissue samples. Classical statistical methods are often simply not applicable in these high-dimensional situations.
The course is divided into 4 chapters (of unequal size). The first chapter is on ridge
regression (an important generalisation of ordinary least squares) and the kernel trick (one
of the most important ideas in machine learning).
Chapter 2 is on the Lasso and its extensions. The Lasso has been at the centre of much of the development that has occurred in high-dimensional statistics, and will allow us to perform regression in the seemingly hopeless situation where the number of parameters we are trying to estimate is larger than the number of observations.
In chapter 3 we will introduce graphical modelling. Where the previous two chapters
consider methods for relating a particular response to a large collection of (explanatory)
variables, graphical modelling will give us a way of understanding relationships between
the variables themselves.
Statistics is not only about developing methods that can predict well in the presence
of noise, but also about assessing the uncertainty in our predictions and estimates. In
the final chapter we will tackle the problem of how to handle performing thousands of
hypothesis tests at the same time.
Before we begin the course proper, we will briefly review two key classical statistical
methods: ordinary least squares and maximum likelihood estimation. This will help to set
the scene and provide a warmup for the modern methods to come later.
Classical statistics
Ordinary least squares
Imagine data are available in the form of observations $(Y_i, x_i) \in \mathbb{R} \times \mathbb{R}^p$, $i = 1, \ldots, n$, and the aim is to infer a simple regression function relating the average value of a response, $Y_i$, and a collection of predictors or variables, $x_i$. This is an example of regression analysis,
one of the most important tasks in statistics.
A linear model for the data assumes that it is generated according to
\[ Y = X\beta^0 + \varepsilon, \tag{0.0.1} \]
where $Y \in \mathbb{R}^n$ is the vector of responses; $X \in \mathbb{R}^{n \times p}$ is the predictor matrix (or design matrix) with $i$th row $x_i^T$; $\varepsilon \in \mathbb{R}^n$ represents random error; and $\beta^0 \in \mathbb{R}^p$ is the unknown vector of coefficients.
Provided $p \le n$, a sensible way to estimate $\beta$ is by ordinary least squares (OLS). This yields an estimator $\hat{\beta}^{OLS}$ with
\[ \hat{\beta}^{OLS} := \arg\min_{\beta \in \mathbb{R}^p} \|Y - X\beta\|_2^2 = (X^TX)^{-1}X^TY. \tag{0.0.2} \]
The Gauss–Markov theorem states that OLS is the best linear unbiased estimator in our setting: for any other unbiased estimator $\tilde{\beta}$ that is linear in $Y$ (so $\tilde{\beta} = AY$ for some fixed matrix $A$), we have that
\[ \mathrm{Var}_{\beta^0, \sigma^2}(\tilde{\beta}) - \mathrm{Var}_{\beta^0, \sigma^2}(\hat{\beta}^{OLS}) \]
is positive semi-definite.
A very useful quantity in the context of maximum likelihood estimation is the Fisher information matrix with $jk$th ($1 \le j, k \le p$) entry
\[ i_{jk}(\theta) := -\mathbb{E}_\theta \frac{\partial^2}{\partial\theta_j\,\partial\theta_k}\,\ell(\theta). \]
It can be thought of as a measure of how hard it is to estimate $\theta$ when it is the true parameter value. The Cramér–Rao lower bound states that if $\tilde{\theta}$ is an unbiased estimator of $\theta$, then under regularity conditions,
\[ \mathrm{Var}_\theta(\tilde{\theta}) - i^{-1}(\theta) \]
is positive semi-definite.
A remarkable fact about maximum likelihood estimators (MLEs) is that (under quite general conditions) they are asymptotically normally distributed, asymptotically unbiased and asymptotically achieve the Cramér–Rao lower bound.
Assume that the Fisher information matrix when there are $n$ observations, $i^{(n)}(\theta)$ (where we have made the dependence on $n$ explicit), satisfies $i^{(n)}(\theta)/n \to I(\theta)$ for some positive definite matrix $I(\theta)$. Then, denoting the maximum likelihood estimator of $\theta$ when there are $n$ observations by $\hat{\theta}^{(n)}$, under regularity conditions, as the number of observations $n \to \infty$ we have
\[ \sqrt{n}(\hat{\theta}^{(n)} - \theta) \xrightarrow{d} N_d(0, I^{-1}(\theta)). \]
Returning to our linear model, if we assume in addition that $\varepsilon_i \sim N(0, \sigma^2)$, then the log-likelihood for $(\beta, \sigma^2)$ is
\[ \ell(\beta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - x_i^T\beta)^2. \]
We see that the maximum likelihood estimate of $\beta$ and $\hat{\beta}^{OLS}$ coincide. It is easy to check that
\[ i(\beta, \sigma^2) = \begin{pmatrix} X^TX/\sigma^2 & 0 \\ 0 & n/(2\sigma^4) \end{pmatrix}. \]
The general theory for MLEs would suggest that approximately $\sqrt{n}(\hat{\beta} - \beta^0) \sim N_p(0, \sigma^2 n(X^TX)^{-1})$; in fact it is straightforward to show that this distributional result is exact.
Chapter 1
Ridge regression and the kernel trick
Let us revisit the linear model with
\[ Y_i = x_i^T\beta^0 + \varepsilon_i. \]
For unbiased estimators of $\beta^0$, their variance gives a way of comparing their quality in terms of squared error loss. For a potentially biased estimator $\tilde{\beta}$, the relevant quantity is
\[ \mathbb{E}_{\beta^0,\sigma^2}\{(\tilde{\beta} - \beta^0)(\tilde{\beta} - \beta^0)^T\} = \mathbb{E}[\{\tilde{\beta} - \mathbb{E}(\tilde{\beta}) + \mathbb{E}(\tilde{\beta}) - \beta^0\}\{\tilde{\beta} - \mathbb{E}(\tilde{\beta}) + \mathbb{E}(\tilde{\beta}) - \beta^0\}^T] \]
\[ = \mathrm{Var}(\tilde{\beta}) + \{\mathbb{E}(\tilde{\beta}) - \beta^0\}\{\mathbb{E}(\tilde{\beta}) - \beta^0\}^T, \]
a sum of variance and squared bias terms. A crucial part of the optimality arguments for OLS and MLEs was unbiasedness. Do there exist biased methods whose variance is reduced compared to OLS such that their overall prediction error is lower? The emphatic answer is yes, and in fact the use of biased estimators is essential in dealing with the 'large $p$, small $n$'-type settings that are becoming increasingly prevalent in modern data analysis. In the first two chapters we'll explore an extremely important method for variance reduction based on penalisation, which can in many situations produce estimators that greatly outperform OLS and MLEs.
1.1
Ridge regression
One way to reduce the variance of OLS is to shrink the estimated coefficients towards 0.
Ridge regression does this by solving the following optimisation problem:
\[ (\hat{\mu}_\lambda^R, \hat{\beta}_\lambda^R) = \arg\min_{(\mu,\beta) \in \mathbb{R} \times \mathbb{R}^p} \{\|Y - \mu\mathbf{1} - X\beta\|_2^2 + \lambda\|\beta\|_2^2\}. \]
Here $\mathbf{1}$ is an $n$-vector of 1s. We see that the usual OLS objective is penalised by an additional term proportional to $\|\beta\|_2^2$. The parameter $\lambda \ge 0$, which controls the severity of the penalty and therefore the degree of the shrinkage towards 0, is known as a regularisation parameter or tuning parameter. We have explicitly included an intercept term which is not penalised. The reason for this is that if it were omitted, the estimator would then not be location equivariant. However, the estimator is not invariant under scale transformations of the columns of $X$, so it is standard practice to centre each column of $X$ (hence making them orthogonal to the intercept term) and then scale them to have $\ell_2$-norm $\sqrt{n}$.
It is straightforward to show that, after this standardisation of $X$, $\hat{\mu}_\lambda^R = \bar{Y} := \sum_{i=1}^n Y_i/n$, so we may assume that $\sum_{i=1}^n Y_i = 0$ by replacing $Y_i$ by $Y_i - \bar{Y}$; then we can remove $\mu$ from our objective function. In this case
\[ \hat{\beta}_\lambda^R = (X^TX + \lambda I)^{-1}X^TY. \]
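Computationally, this is a single linear solve. The following is a minimal numpy sketch of the whole procedure (the function name and the exact standardisation code are my own, following the convention described above):

```python
import numpy as np

def ridge(X, Y, lam):
    """Ridge estimate (X^T X + lam*I)^{-1} X^T Y after standardisation:
    columns of X centred and scaled to have l2-norm sqrt(n), Y centred.
    Returns coefficients on the standardised scale."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Xs = Xc * np.sqrt(n) / np.linalg.norm(Xc, axis=0)
    Yc = Y - Y.mean()
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ Yc)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.normal(size=50)
beta_ols = ridge(X, Y, 0.0)   # lambda = 0 recovers OLS on the standardised X
beta_pen = ridge(X, Y, 5.0)   # lambda > 0 shrinks the coefficients towards 0
```

Increasing `lam` strictly reduces the $\ell_2$-norm of the estimate, consistent with the penalised objective.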
In this form, we can see how the addition of the $\lambda I$ term helps to stabilise the estimator. Note that when $X$ does not have full column rank (such as in high-dimensional situations), we can still compute this estimator. On the other hand, when $X$ does have full column rank, we have the following theorem.
Theorem 1. For $\lambda > 0$ sufficiently small (depending on $\beta^0$ and $\sigma^2$),
\[ \mathbb{E}(\hat{\beta}^{OLS} - \beta^0)(\hat{\beta}^{OLS} - \beta^0)^T - \mathbb{E}(\hat{\beta}_\lambda^R - \beta^0)(\hat{\beta}_\lambda^R - \beta^0)^T \]
is positive definite.
Proof. First we compute the bias of $\hat{\beta}_\lambda^R$. We drop the subscript $\lambda$ and superscript $R$ for convenience.
\[ \mathbb{E}(\hat{\beta}) - \beta^0 = (X^TX + \lambda I)^{-1}X^TX\beta^0 - \beta^0 = (X^TX + \lambda I)^{-1}(X^TX + \lambda I - \lambda I)\beta^0 - \beta^0 = -\lambda(X^TX + \lambda I)^{-1}\beta^0. \]
The theorem says that $\hat{\beta}_\lambda^R$ beats OLS provided $\lambda$ is chosen appropriately. To be able to use ridge regression effectively, we need a way of selecting a good $\lambda$; we will come to this very shortly. What the theorem doesn't really tell us is in what situations we expect ridge regression to perform well. To understand that, we will turn to one of the key matrix decompositions used in statistics: the singular value decomposition (SVD).
1.1.1
The singular value decomposition
Form the (thin) SVD $X = UDV^T$, where $U \in \mathbb{R}^{n \times p}$ and $V \in \mathbb{R}^{p \times p}$ have orthonormal columns and $D \in \mathbb{R}^{p \times p}$ is diagonal with $D_{11} \ge D_{22} \ge \cdots \ge D_{pp} \ge 0$. The fitted values from ridge regression are then
\[ X\hat{\beta}_\lambda^R = X(X^TX + \lambda I)^{-1}X^TY = \sum_{j=1}^p U_j\,\frac{D_{jj}^2}{D_{jj}^2 + \lambda}\,U_j^TY, \]
where we have used the notation (that we shall use throughout the course) that $U_j$ is the $j$th column of $U$. For comparison, the fitted values from OLS are
\[ X\hat{\beta}^{OLS} = X(X^TX)^{-1}X^TY = UU^TY. \]
Both OLS and ridge regression compute the coordinates of $Y$ with respect to the columns of $U$. Ridge regression then shrinks these coordinates by the factors $D_{jj}^2/(D_{jj}^2 + \lambda)$; if $D_{jj}$ is small, the amount of shrinkage will be larger.
To interpret this further, note that the SVD is intimately connected with Principal Component Analysis (PCA). Consider $v \in \mathbb{R}^p$ with $\|v\|_2 = 1$. Since the columns of $X$ have had their means subtracted, the sample variance of $Xv \in \mathbb{R}^n$ is
\[ \frac{1}{n}v^TX^TXv = \frac{1}{n}v^TVD^2V^Tv. \]
Writing $a = V^Tv$, so $\|a\|_2 = 1$, we have
\[ \frac{1}{n}v^TVD^2V^Tv = \frac{1}{n}a^TD^2a = \frac{1}{n}\sum_j a_j^2 D_{jj}^2 \le \frac{1}{n}D_{11}^2\sum_j a_j^2 = \frac{1}{n}D_{11}^2. \]
As $\|XV_1\|_2^2/n = D_{11}^2/n$, $V_1$ determines the linear combination of the columns of $X$ which has the largest sample variance, when the coefficients of the linear combination are constrained to have $\ell_2$-norm 1. $XV_1 = D_{11}U_1$ is known as the first principal component of $X$. Subsequent principal components have maximum variance $D_{jj}^2/n$, subject to being orthogonal to all earlier ones.
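The shrinkage representation of the ridge fitted values can be verified numerically; a small sketch (variable names my own):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 30, 4, 2.0
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)        # centred, as assumed throughout
Y = rng.normal(size=n)

# ridge fitted values via the normal equations
fitted = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

# the same fitted values via the thin SVD X = U D V^T
U, d, Vt = np.linalg.svd(X, full_matrices=False)
shrink = d**2 / (d**2 + lam)  # shrinkage factors D_jj^2/(D_jj^2 + lambda)
fitted_svd = U @ (shrink * (U.T @ Y))
```

The shrinkage factors are all strictly below 1 and are smallest for the smallest singular values.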
Returning to ridge regression, we see that it shrinks $Y$ most in the directions of the smaller principal components of $X$. Thus it will work well when most of the signal is in the large principal components of $X$. We now turn to the problem of choosing $\lambda$.
1.2
v-fold cross-validation
Cross-validation is a general technique for selecting a good regression method from among several competing regression methods. We illustrate the principle with ridge regression, where we have a family of regression methods given by different $\lambda$ values.
So far, we have considered the matrix of predictors $X$ as fixed and non-random. However, in many cases it makes sense to think of it as random. Let us assume that our data are i.i.d. pairs $(x_i, Y_i)$, $i = 1, \ldots, n$. Then ideally, we might want to pick a value of $\lambda$ such that
\[ \frac{1}{n}\mathbb{E}(\|Y^\star - x^{\star T}\hat{\beta}_\lambda^R(X, Y)\|_2^2 \mid X, Y) \tag{1.2.1} \]
is minimised. Here $(x^\star, Y^\star) \in \mathbb{R}^p \times \mathbb{R}$ is independent of $(X, Y)$ and has the same distribution as $(x_1, Y_1)$, and we have made the dependence of $\hat{\beta}_\lambda^R$ on the training data $(X, Y)$ explicit. This choice of $\lambda$ is such that, conditional on the original training data, it minimises the expected prediction error on a new observation drawn from the same distribution as the training data.
A less ambitious goal is to find a value of $\lambda$ to minimise the expected prediction error,
\[ \frac{1}{n}\mathbb{E}\{\mathbb{E}(\|Y^\star - x^{\star T}\hat{\beta}_\lambda^R(X, Y)\|_2^2 \mid X, Y)\}, \tag{1.2.2} \]
where compared with (1.2.1), we have taken a further expectation over the training set. We still have no way of computing (1.2.2) directly, but we can attempt to estimate it.
The idea of $v$-fold cross-validation is to split the data into $v$ groups or folds of roughly equal size: $(X^{(1)}, Y^{(1)}), \ldots, (X^{(v)}, Y^{(v)})$. Let $(X^{(-k)}, Y^{(-k)})$ be all the data except that in the $k$th fold. For each $\lambda$ on a grid of values, we compute $\hat{\beta}_\lambda^R(X^{(-k)}, Y^{(-k)})$: the ridge regression estimate based on all the data except the $k$th fold. Writing $\kappa(i)$ for the fold to which $(x_i, Y_i)$ belongs, we choose the value of $\lambda$ that minimises
\[ CV(\lambda) = \frac{1}{n}\sum_{i=1}^n \{Y_i - x_i^T\hat{\beta}_\lambda^R(X^{(-\kappa(i))}, Y^{(-\kappa(i))})\}^2. \tag{1.2.3} \]
Writing $\lambda_{CV}$ for the minimiser, our final estimate of $\beta^0$ can then be $\hat{\beta}_{\lambda_{CV}}^R(X, Y)$.
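A minimal sketch of this scheme (fold construction, names and the grid are my own choices; a serious implementation would reuse factorisations across the λ grid):

```python
import numpy as np

def cv_ridge(X, Y, lambdas, v=5, seed=0):
    """v-fold cross-validation score for ridge regression.

    A minimal sketch: X is assumed already centred/scaled and Y centred,
    so no intercept is fitted; folds come from a random shuffle."""
    n, p = X.shape
    perm = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(perm, v)
    cv = np.zeros(len(lambdas))
    for fold in folds:
        train = np.setdiff1d(perm, fold)
        Xt, Yt = X[train], Y[train]
        for i, lam in enumerate(lambdas):
            beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ Yt)
            cv[i] += np.sum((Y[fold] - X[fold] @ beta) ** 2)
    cv /= n
    return lambdas[int(np.argmin(cv))], cv

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 10))
X = X - X.mean(axis=0)
Y = X @ rng.normal(size=10) + rng.normal(size=40)
Y = Y - Y.mean()
lam_cv, scores = cv_ridge(X, Y, lambdas=[0.01, 0.1, 1.0, 10.0])
```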
1.3
The kernel trick
Computing $\hat{\beta}_\lambda^R$ directly involves inverting the $p \times p$ matrix $X^TX + \lambda I$. Using the identity $(X^TX + \lambda I)^{-1}X^T = X^T(XX^T + \lambda I)^{-1}$, we can also write
\[ \hat{\beta}_\lambda^R = X^T(XX^T + \lambda I)^{-1}Y, \tag{1.3.1} \]
so that the fitted values are
\[ X\hat{\beta}_\lambda^R = K(K + \lambda I)^{-1}Y, \quad \text{where } K = XX^T. \tag{1.3.2} \]
Computing the fitted values this way requires $O(n^2p)$ operations to form $K$ and $O(n^3)$ to solve the linear system, an improvement when $p \gg n$.
Now suppose that we believe the signal depends quadratically on the predictors:
\[ Y_i = x_i^T\beta + \sum_{k,l} x_{ik}x_{il}\theta_{kl} + \varepsilon_i. \]
We can still use ridge regression provided we work with an enlarged set of predictors
\[ x_{i1}, \ldots, x_{ip},\; x_{i1}x_{i1}, \ldots, x_{i1}x_{ip},\; x_{i2}x_{i1}, \ldots, x_{i2}x_{ip},\; \ldots,\; x_{ip}x_{ip}. \]
This will give us $O(p^2)$ predictors. Our new approach to computing fitted values would therefore have complexity $O(n^2p^2 + n^3)$, which could be rather costly if $p$ is large.
However, rather than first creating all the additional predictors and then computing the new $K$ matrix, we can attempt to compute $K$ directly. To this end, consider
\[ (1 + x_i^Tx_j)^2 = \Big(1 + \sum_k x_{ik}x_{jk}\Big)^2 = 1 + 2\sum_k x_{ik}x_{jk} + \sum_{k,l} x_{ik}x_{il}x_{jk}x_{jl}. \]
This is precisely the inner product of the vector
\[ (1, \sqrt{2}x_{i1}, \ldots, \sqrt{2}x_{ip}, x_{i1}x_{i1}, \ldots, x_{i1}x_{ip}, x_{i2}x_{i1}, \ldots, x_{i2}x_{ip}, \ldots, x_{ip}x_{ip})^T \tag{1.3.3} \]
and the corresponding vector with $i$ replaced by $j$. Thus if we set
\[ K_{ij} = (1 + x_i^Tx_j)^2 \tag{1.3.4} \]
and plug this into the formula for the fitted values, it is exactly as if we had performed ridge regression on an enlarged set of variables given by (1.3.3). Now computing $K$ using (1.3.4) would require only $O(p)$ operations per entry, so $O(n^2p)$ operations in total. It thus seems we have improved things by a factor of $p$ using our new approach.
This computational shortcut is not without its shortcomings. Notice we've had to use a slightly peculiar scaling of the main effects, and the interactions of the form $x_{ik}x_{il}$, $k \neq l$, appear twice in (1.3.3). Nevertheless it is a useful trick, and more importantly for us it serves to illustrate some general points.
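The identity behind (1.3.3) and (1.3.4) is easy to check directly; a small sketch with an explicit feature-map function `phi` (the name is my own):

```python
import numpy as np

def phi(x):
    """Feature map of (1.3.3): (1, sqrt(2)x_1,...,sqrt(2)x_p, x_1x_1,...,x_px_p)."""
    return np.concatenate(([1.0], np.sqrt(2) * x, np.outer(x, x).ravel()))

rng = np.random.default_rng(3)
xi, xj = rng.normal(size=4), rng.normal(size=4)
lhs = (1 + xi @ xj) ** 2   # O(p) kernel evaluation, as in (1.3.4)
rhs = phi(xi) @ phi(xj)    # O(p^2) inner product of explicit features
```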
Since ridge regression only depends on inner products between observations, rather than fitting non-linear models by first mapping the original data $x_i \in \mathbb{R}^p$ to $\phi(x_i) \in \mathbb{R}^d$ (say) using some feature map $\phi$ (which could, for example, introduce quadratic effects), we can instead try to directly compute $k(x_i, x_j) = \langle\phi(x_i), \phi(x_j)\rangle$.
In fact, instead of thinking in terms of feature maps, we can try to think about an appropriate measure of similarity $k(x_i, x_j)$ between observations. Modelling in this fashion is sometimes much easier.
We will now formalise and extend what we have learnt with this example.
1.4
Kernels
We have seen how a model with quadratic effects can be fitted very efficiently by replacing
the inner product matrix (known as the Gram matrix ) XX T in (1.3.2) with the matrix
in (1.3.4). It is then natural to ask what other nonlinear models can be fitted efficiently
using this sort of approach.
We won't answer this question directly; instead we will try to understand the sorts of similarity measures $k$ that can be represented as inner products between transformations of the original data.
That is, we will study the similarity measures $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ from the input space $\mathcal{X}$ to $\mathbb{R}$ for which there exists a feature map $\phi : \mathcal{X} \to \mathcal{H}$, where $\mathcal{H}$ is some (real) inner product space, with
\[ k(x, x') = \langle\phi(x), \phi(x')\rangle. \tag{1.4.1} \]
Recall that an inner product space is a real vector space $\mathcal{H}$ endowed with a map $\langle\cdot,\cdot\rangle : \mathcal{H} \times \mathcal{H} \to \mathbb{R}$ that obeys the following properties:
(i) Symmetry: $\langle u, v\rangle = \langle v, u\rangle$.
(ii) Linearity: for $a, b \in \mathbb{R}$, $\langle au + bw, v\rangle = a\langle u, v\rangle + b\langle w, v\rangle$.
(iii) Positive-definiteness: $\langle u, u\rangle \ge 0$ with equality if and only if $u = 0$.
Definition 1. A positive definite kernel, or more simply a kernel (for brevity), is a symmetric map $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ for which, for all $n \in \mathbb{N}$ and all $x_1, \ldots, x_n \in \mathcal{X}$, the matrix $K$ with entries
\[ K_{ij} = k(x_i, x_j) \]
is positive semi-definite.
A kernel is a little like an inner product, but need not be bilinear in general. However, a form of the Cauchy–Schwarz inequality does hold for kernels.
Proposition 2.
\[ k(x, x')^2 \le k(x, x)k(x', x'). \]
Proof. The matrix
\[ \begin{pmatrix} k(x, x) & k(x, x') \\ k(x', x) & k(x', x') \end{pmatrix} \]
is positive semi-definite by the definition of a kernel; in particular its determinant is non-negative, which gives the result.
Note that any $k$ of the form (1.4.1) is a kernel, since
\[ \sum_{i,j} \alpha_i\alpha_j k(x_i, x_j) = \Big\langle \sum_i \alpha_i\phi(x_i),\ \sum_j \alpha_j\phi(x_j) \Big\rangle \ge 0. \]
Showing that every kernel admits a representation of the form (1.4.1) is slightly more
involved, and we delay this until after we have studied some examples.
1.4.1
Examples
Gaussian kernel. Perhaps the most important kernel in machine learning is the Gaussian kernel $k(x, x') = \exp(-\|x - x'\|_2^2/(2\sigma^2))$. We can write $k = k_1k_2$, where $k_1(x, x') = \exp(-\|x\|_2^2/(2\sigma^2))\exp(-\|x'\|_2^2/(2\sigma^2))$ and
\[ k_2(x, x') = \exp(x^Tx'/\sigma^2) = \sum_{r=0}^\infty \frac{(x^Tx'/\sigma^2)^r}{r!}; \]
using (i) of Proposition 4 shows that $k_2$ is a kernel. Finally, observing that $k = k_1k_2$ and using (ii) shows that the Gaussian kernel is indeed a kernel.
Jaccard similarity kernel. Take $\mathcal{X}$ to be the set of all subsets of $\{1, \ldots, p\}$. For $x, x' \in \mathcal{X}$ with $x \cup x' \neq \emptyset$ define
\[ k(x, x') = \frac{|x \cap x'|}{|x \cup x'|}, \]
and if $x \cup x' = \emptyset$ then set $k(x, x') = 1$. Showing that this is a kernel is left to the example sheet.
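While the proof is left to the example sheet, positive semi-definiteness can at least be checked empirically on a small input space (an illustration, not a proof; code my own):

```python
import itertools
import numpy as np

def jaccard(x, y):
    """Jaccard similarity kernel on subsets of {1,...,p}; x, y are sets."""
    if not (x | y):                 # both empty: k(x, y) = 1 by convention
        return 1.0
    return len(x & y) / len(x | y)

# empirical check of positive semi-definiteness over all subsets of {1,2,3}
subsets = [frozenset(c) for r in range(4)
           for c in itertools.combinations(range(1, 4), r)]
K = np.array([[jaccard(a, b) for b in subsets] for a in subsets])
min_eig = np.linalg.eigvalsh(K).min()
```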
1.4.2
Feature maps
Theorem 5. For every kernel $k$ there exists a feature map $\phi$ taking values in some inner product space $\mathcal{H}$ such that
\[ k(x, x') = \langle\phi(x), \phi(x')\rangle. \tag{1.4.2} \]
Proof. We will take $\mathcal{H}$ to be the vector space of functions of the form
\[ f(\cdot) = \sum_{i=1}^n \alpha_i k(\cdot, x_i), \tag{1.4.3} \]
where $n \in \mathbb{N}$, $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ and $x_1, \ldots, x_n \in \mathcal{X}$, and define
\[ \phi(x) = k(\cdot, x). \tag{1.4.4} \]
For $f$ as in (1.4.3) and
\[ g(\cdot) = \sum_{j=1}^m \beta_j k(\cdot, x_j'), \tag{1.4.5} \]
we define the inner product
\[ \langle f, g\rangle = \sum_{i=1}^n\sum_{j=1}^m \alpha_i\beta_j k(x_i, x_j'). \tag{1.4.6} \]
We need to check this is well-defined, as the representations of $f$ and $g$ in (1.4.3) and (1.4.5) need not be unique. To this end, note that
\[ \sum_{i=1}^n\sum_{j=1}^m \alpha_i\beta_j k(x_i, x_j') = \sum_{i=1}^n \alpha_i g(x_i) = \sum_{j=1}^m \beta_j f(x_j'). \tag{1.4.7} \]
The first equality shows that the inner product does not depend on the particular expansion of $g$, whilst the second equality shows that it also does not depend on the expansion of $f$. Thus the inner product is well-defined.
First we check that with $\phi$ defined as in (1.4.4) we do have relationship (1.4.2). Observe that
\[ \langle k(\cdot, x), f\rangle = \sum_{i=1}^n \alpha_i k(x_i, x) = f(x), \tag{1.4.8} \]
so in particular we have
\[ \langle\phi(x), \phi(x')\rangle = \langle k(\cdot, x), k(\cdot, x')\rangle = k(x, x'). \]
It remains to show that it is indeed an inner product. It is clearly symmetric and (1.4.7) shows linearity. We now need to show positive-definiteness. First note that
\[ \langle f, f\rangle = \sum_{i,j} \alpha_i k(x_i, x_j)\alpha_j \ge 0 \tag{1.4.9} \]
as $k$ is a kernel. Next we would like to show that
\[ f(x)^2 = \langle k(\cdot, x), f\rangle^2 \le k(x, x)\langle f, f\rangle, \tag{1.4.10} \]
which would show that if $\langle f, f\rangle = 0$ then necessarily $f = 0$: the final property we need to show that $\langle\cdot,\cdot\rangle$ is an inner product. However, in order to use the traditional Cauchy–Schwarz inequality we would need to first know we are dealing with an inner product, which is precisely what we are trying to show!
Although we haven't shown that $\langle\cdot,\cdot\rangle$ is an inner product, we do have enough information to show that it is itself a kernel. We may then appeal to Proposition 2 to obtain (1.4.10). With this in mind, we argue as follows. Given functions $f_1, \ldots, f_m$ and coefficients $\gamma_1, \ldots, \gamma_m \in \mathbb{R}$, we have
\[ \sum_{i,j} \gamma_i\langle f_i, f_j\rangle\gamma_j = \Big\langle \sum_i \gamma_i f_i,\ \sum_j \gamma_j f_j \Big\rangle \ge 0 \]
by (1.4.9), so $\langle\cdot,\cdot\rangle$ is indeed a kernel.
Chapter 2
The Lasso and beyond
2.1
Model selection
In many modern datasets, there are reasons to believe there are many more variables present than are necessary to explain the response. Let $S$ be the set $S = \{k : \beta_k^0 \neq 0\}$ and suppose $s := |S| \ll p$.
The mean squared prediction error (MSPE) of OLS is
\[ \frac{1}{n}\mathbb{E}\|X\beta^0 - X\hat{\beta}^{OLS}\|_2^2 = \frac{1}{n}\mathbb{E}\{(\beta^0 - \hat{\beta}^{OLS})^TX^TX(\beta^0 - \hat{\beta}^{OLS})\} \]
\[ = \frac{1}{n}\mathbb{E}[\mathrm{tr}\{(\beta^0 - \hat{\beta}^{OLS})(\beta^0 - \hat{\beta}^{OLS})^TX^TX\}] \]
\[ = \frac{1}{n}\mathrm{tr}[\mathbb{E}\{(\beta^0 - \hat{\beta}^{OLS})(\beta^0 - \hat{\beta}^{OLS})^T\}X^TX] \]
\[ = \frac{1}{n}\mathrm{tr}(\mathrm{Var}(\hat{\beta}^{OLS})X^TX) = \sigma^2\frac{p}{n}. \]
If we could identify $S$ and then fit a linear model using just these variables, we'd obtain an MSPE of $\sigma^2 s/n$, which could be substantially smaller than $\sigma^2 p/n$. Furthermore, it can be shown that parameter estimates from the reduced model are more accurate. The smaller model would also be easier to interpret.
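The $\sigma^2 p/n$ formula can be checked by simulation; a small sketch (the constants are my own choices):

```python
import numpy as np

# Monte Carlo check of the identity MSPE = sigma^2 * p / n derived above
rng = np.random.default_rng(5)
n, p, sigma = 50, 10, 1.0
X = rng.normal(size=(n, p))
beta0 = rng.normal(size=p)
reps = 2000
mspe = 0.0
for _ in range(reps):
    Y = X @ beta0 + sigma * rng.normal(size=n)
    beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
    mspe += np.sum((X @ (beta0 - beta_hat)) ** 2) / n
mspe /= reps
# mspe should be close to sigma^2 * p / n = 0.2
```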
We now briefly review some classical model selection strategies.
Best subset regression
A natural approach to finding $S$ is to consider all $2^p$ possible regression procedures, each involving regressing the response on a different subset $X_M$ of the explanatory variables, where $M$ is a subset of $\{1, \ldots, p\}$. We can then pick the best regression procedure using cross-validation (say). For general design matrices, this involves an exhaustive search over all subsets, so this is not really feasible for $p > 50$.
Forward selection
This can be seen as a greedy way of performing best subsets regression. Given a target
model size m (the tuning parameter), this works as follows.
1. Start by fitting an intercept only model.
2. Add to the current model the predictor variable that reduces the residual sum of
squares the most.
3. Continue step 2 until m predictor variables have been selected.
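The three steps above can be sketched as follows (a naive implementation that refits each candidate model from scratch; names my own):

```python
import numpy as np

def forward_selection(X, Y, m):
    """Greedy forward selection of m predictors, with an intercept fitted
    throughout.  At each stage, the variable giving the largest reduction
    in residual sum of squares is added to the current model."""
    n, p = X.shape
    selected = []
    for _ in range(m):
        best_k, best_rss = None, np.inf
        for k in range(p):
            if k in selected:
                continue
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in selected + [k]])
            rss = np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
            if rss < best_rss:
                best_k, best_rss = k, rss
        selected.append(best_k)
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))
Y = 3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=100)
sel = forward_selection(X, Y, 2)
```

With a strong two-variable signal and little noise, the greedy procedure recovers the two active predictors here.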
2.2
The Lasso
The Least absolute shrinkage and selection operator (Lasso) estimates $\beta^0$ by $\hat{\beta}_\lambda^L$, where $(\hat{\mu}_\lambda^L, \hat{\beta}_\lambda^L)$ minimise
\[ \frac{1}{2n}\|Y - \mu\mathbf{1} - X\beta\|_2^2 + \lambda\|\beta\|_1 \tag{2.2.1} \]
over $(\mu, \beta) \in \mathbb{R} \times \mathbb{R}^p$. Here $\|\beta\|_1$ is the $\ell_1$-norm of $\beta$: $\|\beta\|_1 = \sum_{k=1}^p |\beta_k|$.
Like ridge regression, $\hat{\beta}_\lambda^L$ shrinks the OLS estimate towards the origin, but there is an important difference. The $\ell_1$ penalty can force some of the estimated coefficients to be exactly 0. In this way the Lasso can perform simultaneous variable selection and parameter estimation. As we did with ridge regression, we can centre and scale the $X$ matrix, and also centre $Y$, and thus remove $\mu$ from the objective. Define
\[ Q_\lambda(\beta) = \frac{1}{2n}\|Y - X\beta\|_2^2 + \lambda\|\beta\|_1. \tag{2.2.2} \]
2.2.1
Prediction error
A remarkable property of the Lasso is that even when $p \gg n$, it can still perform well in terms of prediction error. Suppose the columns of $X$ have been centred and scaled (as we will always assume from now on unless stated otherwise) and assume the normal linear model (where we have already centred $Y$),
\[ Y = X\beta^0 + \varepsilon, \qquad \varepsilon \sim N_n(0, \sigma^2 I). \tag{2.2.3} \]
A key ingredient in the analysis is the following tail bound for the standard normal distribution function $\Phi$: for all $r \ge 0$,
\[ 1 - \Phi(r) \le \tfrac{1}{2}e^{-r^2/2}. \]
Proof. For $r \ge \sqrt{2/\pi}$,
\[ 1 - \Phi(r) = \frac{1}{\sqrt{2\pi}}\int_r^\infty e^{-x^2/2}\,dx \le \frac{1}{\sqrt{2\pi}}\int_r^\infty \frac{x}{r}e^{-x^2/2}\,dx = \frac{e^{-r^2/2}}{r\sqrt{2\pi}} \le \frac{e^{-r^2/2}}{2}. \]
Thus provided $r \ge \sqrt{2/\pi}$, the result is true. Now let $f(r) = 1 - \Phi(r) - e^{-r^2/2}/2$. Note that $f(0) = 0$ and
\[ f'(r) = -\varphi(r) + \frac{re^{-r^2/2}}{2} = \frac{e^{-r^2/2}}{2}\Big(r - \sqrt{\frac{2}{\pi}}\Big) \le 0 \]
for $r \le \sqrt{2/\pi}$. By the mean value theorem, for $0 < r \le \sqrt{2/\pi}$, $f(r)/r \le 0$, so $f(r) \le 0$.
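The bound is easy to check numerically using the standard-library identity $1 - \Phi(r) = \mathrm{erfc}(r/\sqrt{2})/2$:

```python
import math

def Phi_bar(r):
    """Upper tail 1 - Phi(r) of the standard normal, via the complementary
    error function: 1 - Phi(r) = erfc(r / sqrt(2)) / 2."""
    return 0.5 * math.erfc(r / math.sqrt(2))

# check 1 - Phi(r) <= exp(-r^2/2)/2 on a grid of r values in [0, 5]
rs = [i / 10 for i in range(51)]
gaps = [0.5 * math.exp(-r * r / 2) - Phi_bar(r) for r in rs]
```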
2.2.2
Convex analysis and optimisation
In order to study the Lasso in detail, it will be helpful to review some basic facts from
optimisation and convex analysis.
Convexity
A set $A \subseteq \mathbb{R}^d$ is convex if
\[ x, y \in A \;\Rightarrow\; (1-t)x + ty \in A \text{ for all } t \in (0, 1). \]
A function $f : A \to \mathbb{R}$ defined on a convex set $A$ is convex if $f((1-t)x + ty) \le (1-t)f(x) + tf(y)$ for all $x, y \in A$ and $t \in (0, 1)$. Consider the optimisation problem
\[ \min_{x \in A : g(x) = 0} f(x), \tag{2.2.4} \]
where $g : A \to \mathbb{R}^b$. Suppose the optimal value is $c^\star \in \mathbb{R}$. The Lagrangian for this problem is defined as
\[ L(x, \theta) = f(x) + \theta^Tg(x), \]
where $\theta \in \mathbb{R}^b$. Note that
\[ \inf_{x \in A} L(x, \theta) \le \inf_{x \in A : g(x) = 0} L(x, \theta) = c^\star \]
for all $\theta$. The Lagrangian method involves finding $\theta^\star$ such that the minimising $x^\star$ on the LHS satisfies $g(x^\star) = 0$. This $x^\star$ must then be a minimiser in the original problem (2.2.4).
Subgradients
We now take $A$ to be $\mathbb{R}^d$. A vector $v \in \mathbb{R}^d$ is a subgradient of $f$ at $x$ if
\[ f(y) \ge f(x) + v^T(y - x) \text{ for all } y \in \mathbb{R}^d. \]
The set of subgradients of $f$ at $x$ is called the subdifferential of $f$ at $x$ and is denoted $\partial f(x)$. In particular, $x^\star$ minimises $f$ if and only if $0 \in \partial f(x^\star)$.
Proof.
\[ f(y) \ge f(x^\star) \text{ for all } y \in \mathbb{R}^d \iff f(y) \ge f(x^\star) + 0^T(y - x^\star) \text{ for all } y \in \mathbb{R}^d \iff 0 \in \partial f(x^\star). \]
Let us now compute the subdifferential of the $\ell_1$-norm. First note that $\|\cdot\|_1 : \mathbb{R}^d \to \mathbb{R}$ is convex. Indeed it is a norm, so the triangle inequality gives $\|tx + (1-t)y\|_1 \le t\|x\|_1 + (1-t)\|y\|_1$. We introduce some notation that will be helpful here and throughout the rest of the course.
For $x \in \mathbb{R}^d$ and $A = \{k_1, \ldots, k_m\} \subseteq \{1, \ldots, d\}$ with $k_1 < \cdots < k_m$, by $x_A$ we will mean $(x_{k_1}, \ldots, x_{k_m})^T$. Similarly, if $X$ has $d$ columns we will write $X_A$ for
\[ X_A = (X_{k_1} \cdots X_{k_m}). \]
Further, in this context, by $A^c$ we will mean $\{1, \ldots, d\} \setminus A$. Note these column and component extraction operations will always be considered to have taken place first, before any further operations on the matrix, so for example $X_A^T = (X_A)^T$. Finally, define
\[ \mathrm{sgn}(x_1) = \begin{cases} -1 & \text{if } x_1 < 0 \\ 0 & \text{if } x_1 = 0 \\ 1 & \text{if } x_1 > 0, \end{cases} \]
and
\[ \mathrm{sgn}(x) = (\mathrm{sgn}(x_1), \ldots, \mathrm{sgn}(x_d))^T. \]
Proposition 12. For $x \in \mathbb{R}^d$ let $A = \{j : x_j \neq 0\}$. Then
\[ \partial\|x\|_1 = \{v \in \mathbb{R}^d : \|v\|_\infty \le 1 \text{ and } v_A = \mathrm{sgn}(x_A)\}. \]
Proof. If $v \in \partial\|x\|_1$ then $\|y\|_1 \ge \|x\|_1 + v^T(y - x)$ for all $y \in \mathbb{R}^d$. By taking $y_{A^c} = x_{A^c} = 0$ and then $y_A = x_A$, we get two conditions:
\[ \|y_A\|_1 \ge \|x_A\|_1 + v_A^T(y_A - x_A) \text{ for all } y_A \in \mathbb{R}^{|A|}, \tag{2.2.5} \]
\[ \|y_{A^c}\|_1 \ge v_{A^c}^Ty_{A^c} \text{ for all } y_{A^c} \in \mathbb{R}^{|A^c|}. \tag{2.2.6} \]
Conversely, if $v$ satisfies (2.2.5) and (2.2.6), then $v \in \partial\|x\|_1$. From (2.2.5) we get that $v_A \in \partial\|x_A\|_1 = \{\mathrm{sgn}(x_A)\}$, as $\|\cdot\|_1$ is differentiable at $x_A$. Next we claim that (2.2.6) holds if and only if $\|v_{A^c}\|_\infty \le 1$. Indeed, if $\|v_{A^c}\|_\infty \le 1$ then $\|y_{A^c}\|_1 \ge \|v_{A^c}\|_\infty\|y_{A^c}\|_1 \ge v_{A^c}^Ty_{A^c}$ in view of Hölder's inequality. Now suppose there exists $j \in A^c$ with $|v_j| > 1$. Take $y_{A^c}$ with $y_j = \mathrm{sgn}(v_j)$ and $y_{A^c\setminus\{j\}} = 0$. Then
\[ \|y_{A^c}\|_1 = 1 < |v_j| = v_{A^c}^Ty_{A^c}, \]
which is a contradiction.
2.2.3
Lasso solutions
Equipped with these tools from convex analysis, we can now fully characterise the solutions to the Lasso. We have that $\hat{\beta}_\lambda^L$ is a Lasso solution if and only if $0 \in \partial Q_\lambda(\hat{\beta}_\lambda^L)$, which is equivalent to
\[ \frac{1}{n}X^T(Y - X\hat{\beta}_\lambda^L) = \lambda\hat{\nu}, \]
for some $\hat{\nu}$ with $\|\hat{\nu}\|_\infty \le 1$ and, writing $\hat{S}_\lambda = \{k : \hat{\beta}_{\lambda,k}^L \neq 0\}$, $\hat{\nu}_{\hat{S}_\lambda} = \mathrm{sgn}(\hat{\beta}_{\lambda,\hat{S}_\lambda}^L)$.
Now although it's still not clear whether Lasso solutions are unique, it is straightforward to show that Lasso solutions exist and that the fitted values are unique.
Proposition 13. (i) A Lasso solution $\hat{\beta}_\lambda^L$ exists.
(ii) $X\hat{\beta}_\lambda^L$ is unique.
Proof. (i) For $\lambda > 0$, whenever $\|\beta\|_1 > \|Y\|_2^2/(2n\lambda)$ we have
\[ Q_\lambda(\beta) \ge \lambda\|\beta\|_1 > \frac{\|Y\|_2^2}{2n} = Q_\lambda(0) \ge \min_{\beta : \|\beta\|_1 \le \|Y\|_2^2/(2n\lambda)} Q_\lambda(\beta), \]
so we may restrict the minimisation to $\{\beta : \|\beta\|_1 \le \|Y\|_2^2/(2n\lambda)\}$. But here we are minimising a continuous function over a closed and bounded (and therefore compact) set: thus a minimiser exists. [When $\lambda = 0$, the fitted values are simply the projection of $Y$ on to the column space of $X$.]
(ii) Fix $\lambda$ and suppose $\hat{\beta}^{(1)}$ and $\hat{\beta}^{(2)}$ are two Lasso solutions giving an optimal objective value of $c^\star$. Now for $t \in (0, 1)$, by strict convexity of $\|\cdot\|_2^2$,
\[ \|Y - tX\hat{\beta}^{(1)} - (1-t)X\hat{\beta}^{(2)}\|_2^2 \le t\|Y - X\hat{\beta}^{(1)}\|_2^2 + (1-t)\|Y - X\hat{\beta}^{(2)}\|_2^2, \]
with equality if and only if $X\hat{\beta}^{(1)} = X\hat{\beta}^{(2)}$. Since $\|\cdot\|_1$ is also convex, we see that
\[ c^\star \le Q_\lambda(t\hat{\beta}^{(1)} + (1-t)\hat{\beta}^{(2)}) \]
\[ = \|Y - tX\hat{\beta}^{(1)} - (1-t)X\hat{\beta}^{(2)}\|_2^2/(2n) + \lambda\|t\hat{\beta}^{(1)} + (1-t)\hat{\beta}^{(2)}\|_1 \]
\[ \le t\|Y - X\hat{\beta}^{(1)}\|_2^2/(2n) + (1-t)\|Y - X\hat{\beta}^{(2)}\|_2^2/(2n) + t\lambda\|\hat{\beta}^{(1)}\|_1 + (1-t)\lambda\|\hat{\beta}^{(2)}\|_1 \]
\[ = tQ_\lambda(\hat{\beta}^{(1)}) + (1-t)Q_\lambda(\hat{\beta}^{(2)}) = c^\star. \]
Thus equality must hold throughout, and so $X\hat{\beta}^{(1)} = X\hat{\beta}^{(2)}$.
2.2.4
Variable selection
Consider now the noiseless version of the high-dimensional linear model (2.2.3), $Y = X\beta^0$. The case with noise can be dealt with by similar arguments to those we'll use below, working on an event where $\|X^T\varepsilon\|_\infty$ is small (see example sheet).
Let $S = \{k : \beta_k^0 \neq 0\}$, $N = \{1, \ldots, p\} \setminus S$, and assume w.l.o.g. that $S = \{1, \ldots, s\}$, and also that $\mathrm{rank}(X_S) = s$.
Theorem 14. Let $\lambda > 0$ and define $\Delta = X_N^TX_S(X_S^TX_S)^{-1}\mathrm{sgn}(\beta_S^0)$. If $\|\Delta\|_\infty \le 1$ and for $k \in S$,
\[ |\beta_k^0| > \lambda\big|\mathrm{sgn}(\beta_S^0)^T[\{\tfrac{1}{n}X_S^TX_S\}^{-1}]_k\big|, \tag{2.2.7} \]
then there exists a Lasso solution $\hat{\beta}_\lambda^L$ with $\mathrm{sgn}(\hat{\beta}_\lambda^L) = \mathrm{sgn}(\beta^0)$. As a partial converse, if there exists a Lasso solution $\hat{\beta}_\lambda^L$ with $\mathrm{sgn}(\hat{\beta}_\lambda^L) = \mathrm{sgn}(\beta^0)$, then $\|\Delta\|_\infty \le 1$.
Remark 1. We can interpret $\|\Delta\|_\infty$ as the maximum in absolute value over $k \in N$ of the dot product of $\mathrm{sgn}(\beta_S^0)$ and $(X_S^TX_S)^{-1}X_S^TX_k$, the coefficient vector obtained by regressing $X_k$ on $X_S$. The condition $\|\Delta\|_\infty \le 1$ is known as the irrepresentable condition.
Proof. Fix $\lambda > 0$ and write $\hat{\beta} = \hat{\beta}_\lambda^L$ and $\hat{S} = \{k : \hat{\beta}_k \neq 0\}$ for convenience. The KKT conditions for the Lasso give
\[ \lambda\hat{\nu} = \frac{1}{n}X^TX(\beta^0 - \hat{\beta}), \]
where $\|\hat{\nu}\|_\infty \le 1$ and $\hat{\nu}_{\hat{S}} = \mathrm{sgn}(\hat{\beta}_{\hat{S}})$. We can expand this into
\[ \lambda\begin{pmatrix} \hat{\nu}_S \\ \hat{\nu}_N \end{pmatrix} = \frac{1}{n}\begin{pmatrix} X_S^TX_S & X_S^TX_N \\ X_N^TX_S & X_N^TX_N \end{pmatrix}\begin{pmatrix} \beta_S^0 - \hat{\beta}_S \\ -\hat{\beta}_N \end{pmatrix}. \tag{2.2.8} \]
2.2.5
Prediction and estimation
Consider the high-dimensional linear model with noise (2.2.3), and let $S$, $s$ and $N$ be defined as in the previous section. As we have noted before, in an artificial situation where $S$ is known, we could apply OLS on $X_S$ and have an MSPE of $\sigma^2 s/n$. Under a so-called compatibility condition on the design matrix, we can obtain a similar MSPE for the Lasso.
The compatibility condition
Define
\[ \phi^2 = \inf_{\beta \in \mathbb{R}^p : \beta_S \neq 0,\ \|\beta_N\|_1 \le 3\|\beta_S\|_1} \frac{\frac{1}{n}\|X\beta\|_2^2}{\frac{1}{s}\|\beta_S\|_1^2}. \]
The compatibility condition is that $\phi^2 > 0$. Note that if we restrict the infimum not just to $\|\beta_N\|_1 \le 3\|\beta_S\|_1$ but actually enforce $\|\beta_N\|_1 = 0$, then if the minimum eigenvalue of $\frac{1}{n}X_S^TX_S$, $c_{\min}$, is positive, then $\phi^2 > 0$. Indeed, by the Cauchy–Schwarz inequality $\|\beta_S\|_1^2 \le s\|\beta_S\|_2^2$, so then
\[ \inf_{\beta : \beta_S \neq 0,\ \beta_N = 0} \frac{\frac{1}{n}\|X\beta\|_2^2}{\frac{1}{s}\|\beta_S\|_1^2} \ge \inf_{\beta_S \neq 0} \frac{\frac{1}{n}\|X_S\beta_S\|_2^2}{\|\beta_S\|_2^2} = c_{\min} > 0. \]
Theorem 15. Suppose the compatibility condition holds and let $\hat{\beta}$ be the Lasso solution with $\lambda = A\sqrt{\log(p)/n}$ for $A > 0$. Then, with probability at least $1 - 2p^{-(A^2/8 - 1)}$, we have
\[ \frac{1}{n}\|X(\beta^0 - \hat{\beta})\|_2^2 + \lambda\|\hat{\beta} - \beta^0\|_1 \le \frac{16\lambda^2 s}{\phi^2} = \frac{16A^2\log(p)\,s}{n\phi^2}. \]
Proof. One can show (by bounding $\|X^T\varepsilon\|_\infty/n$ using the normal tail bound above) that with the stated probability,
\[ \|\hat{\beta}_N - \beta_N^0\|_1 \le 3\|\hat{\beta}_S - \beta_S^0\|_1, \]
so that $\|\hat{\beta} - \beta^0\|_1 \le 4\|\hat{\beta}_S - \beta_S^0\|_1$, and moreover
\[ \frac{1}{n}\|X(\hat{\beta} - \beta^0)\|_2^2 + \lambda\|\hat{\beta} - \beta^0\|_1 \le 4\lambda\|\hat{\beta}_S - \beta_S^0\|_1. \tag{2.2.10} \]
Using the compatibility condition with $\beta = \hat{\beta} - \beta^0$, we get
\[ \|\hat{\beta}_S - \beta_S^0\|_1 \le \frac{\sqrt{s}}{\phi}\cdot\frac{1}{\sqrt{n}}\|X(\hat{\beta} - \beta^0)\|_2, \]
so that
\[ 4\lambda\|\hat{\beta}_S - \beta_S^0\|_1 \le \frac{4\lambda\sqrt{s}}{\phi\sqrt{n}}\|X(\hat{\beta} - \beta^0)\|_2 \le \frac{1}{2n}\|X(\hat{\beta} - \beta^0)\|_2^2 + \frac{8\lambda^2 s}{\phi^2}, \]
using $2ab \le a^2 + b^2$. Substituting this into the RHS of (2.2.10) and rearranging gives the result.
2.2.6
Computation
One of the most efficient ways of computing Lasso solutions is to use an optimisation technique called coordinate descent. This is a quite general way of minimising a function $f : \mathbb{R}^d \to \mathbb{R}$ of the form
\[ f(x) = g(x) + \sum_{j=1}^d h_j(x_j), \]
where $g$ is convex and differentiable and each $h_j : \mathbb{R} \to \mathbb{R}$ is convex.
Given an initial value $x^{(0)}$, at the $m$th iteration we cycle through the coordinates, setting
\[ x_1^{(m)} = \arg\min_{x_1} f(x_1, x_2^{(m-1)}, \ldots, x_d^{(m-1)}), \]
\[ x_2^{(m)} = \arg\min_{x_2} f(x_1^{(m)}, x_2, x_3^{(m-1)}, \ldots, x_d^{(m-1)}), \]
\[ \vdots \]
\[ x_d^{(m)} = \arg\min_{x_d} f(x_1^{(m)}, \ldots, x_{d-1}^{(m)}, x_d). \]
Tseng (2001) proves that provided $A_0 = \{x : f(x) \le f(x^{(0)})\}$ is compact, every converging subsequence of $x^{(m)}$ will converge to a minimiser of $f$. [Note that as $x^{(m)} \in A_0$ for all $m$, a converging subsequence must exist by Bolzano–Weierstrass.]
We can replace individual coordinates by blocks of coordinates and the same result holds. That is, if $x = (x_1, \ldots, x_B)$ where now $x_b \in \mathbb{R}^{d_b}$ and
\[ f(x) = g(x) + \sum_{b=1}^B h_b(x_b), \]
with $g$ convex and differentiable and each $h_b : \mathbb{R}^{d_b} \to \mathbb{R}$ convex, then block coordinate descent can be used.
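For the Lasso objective $Q_\lambda$ of (2.2.2), each coordinate update is available in closed form as a soft-thresholding step, which makes coordinate descent especially attractive. A minimal sketch (function names my own; a fixed number of sweeps stands in for a proper convergence check):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the coordinate-wise Lasso update."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, Y, lam, n_sweeps=200):
    """Coordinate descent for Q_lambda in (2.2.2).

    Assumes the columns of X are centred and scaled to have l2-norm
    sqrt(n) and that Y is centred."""
    n, p = X.shape
    beta = np.zeros(p)
    r = Y.astype(float).copy()          # current residual Y - X beta
    for _ in range(n_sweeps):
        for j in range(p):
            r = r + X[:, j] * beta[j]   # partial residual excluding j
            beta[j] = soft_threshold(X[:, j] @ r / n, lam)
            r = r - X[:, j] * beta[j]
    return beta

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 8))
X = X - X.mean(axis=0)
X = X * np.sqrt(60) / np.linalg.norm(X, axis=0)
Y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=60)
Y = Y - Y.mean()
beta_hat = lasso_cd(X, Y, lam=0.1)
```

At convergence, the output satisfies the KKT conditions of Section 2.2.3 up to numerical tolerance; for $\lambda \ge \|X^TY\|_\infty/n$, the zero vector is returned.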
We often want to solve the Lasso on a grid of $\lambda$ values $\lambda_0 > \cdots > \lambda_L$ (for the purposes of cross-validation, for example). To do this, we can first solve for $\lambda_0$, and then solve at subsequent grid points by using the solution at the previous grid point as an initial guess (known as a warm start). An active set strategy can further speed up computation. This works as follows. For $l = 1, \ldots, L$:
1. Initialise $A_l = \{k : \hat{\beta}_{\lambda_{l-1},k}^L \neq 0\}$.
2. Perform coordinate descent only on the coordinates in $A_l$, obtaining a solution $\hat{\beta}$ (all components $\hat{\beta}_k$ with $k \notin A_l$ are set to zero).
3. Check whether the KKT conditions hold at $\hat{\beta}$ for the full problem. If so, we are done; otherwise, add the violating coordinates to $A_l$ and return to step 2.
2.3
Extensions of the Lasso
We can add an $\ell_1$ penalty to many other log-likelihoods besides that arising from the normal linear model. For Lasso-penalised generalised linear models, such as logistic regression, similar theoretical results to those we have obtained are available, and computation can proceed in a similar fashion to the above.
2.3.1
Structural penalties
The Lasso penalty encourages the estimated coefficients to be shrunk towards 0, and sometimes exactly to 0. Other penalty functions can be constructed to encourage different types of sparsity. Suppose we have a partition $G_1, \ldots, G_q$ of $\{1, \ldots, p\}$ (so $\cup_{k=1}^q G_k = \{1, \ldots, p\}$ and $G_j \cap G_k = \emptyset$ for $j \neq k$). The group Lasso penalty (Yuan & Lin, 2006) is given by
\[ \lambda\sum_{j=1}^q m_j\|\beta_{G_j}\|_2. \]
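Evaluating the penalty is straightforward; a small sketch (the weight choice $m_j = \sqrt{|G_j|}$ is a common convention, assumed here rather than taken from the text):

```python
import numpy as np

def group_lasso_penalty(beta, groups, lam):
    """Group Lasso penalty lam * sum_j m_j ||beta_{G_j}||_2, with the
    (assumed) common weight choice m_j = sqrt(|G_j|)."""
    return lam * sum(np.sqrt(len(G)) * np.linalg.norm(beta[list(G)])
                     for G in groups)

beta = np.array([3.0, 4.0, 0.0, 0.0])
pen = group_lasso_penalty(beta, groups=[[0, 1], [2, 3]], lam=1.0)
# first group contributes sqrt(2) * 5; second group contributes 0,
# illustrating that whole groups can drop out together
```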
2.3.2
Reducing the bias of the Lasso
One potential drawback of the Lasso is that the same shrinkage effect that sets many estimated coefficients exactly to zero also shrinks all non-zero estimated coefficients towards zero. One possible solution is to take $\hat{S}_\lambda = \{k : \hat{\beta}_{\lambda,k}^L \neq 0\}$ and then re-estimate $\beta_{\hat{S}_\lambda}^0$ by OLS regression on $X_{\hat{S}_\lambda}$.
Another option is to re-estimate using the Lasso on $X_{\hat{S}_\lambda}$; this procedure is known as the relaxed Lasso (Meinshausen, 2006). The adaptive Lasso takes an initial estimate of $\beta^0$, $\hat{\beta}^{init}$ (e.g. from the Lasso), and then performs a weighted Lasso regression:
\[ \hat{\beta}_\lambda^{adapt} = \arg\min_{\beta \in \mathbb{R}^p : \beta_{\hat{S}^c} = 0} \frac{1}{2n}\|Y - X\beta\|_2^2 + \lambda\sum_{k \in \hat{S}} \frac{|\beta_k|}{|\hat{\beta}_k^{init}|}, \]
where $\hat{S} = \{k : \hat{\beta}_k^{init} \neq 0\}$.
A related approach replaces the $\ell_1$ penalty with a non-convex penalty that shrinks large coefficients less severely; such penalties can be handled by iteratively solving weighted Lasso problems of the form
\[ \hat{\beta}^{(m)} = \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n}\|Y - X\beta\|_2^2 + \sum_{k=1}^p p_\lambda'(|\hat{\beta}_k^{(m-1)}|)\,|\beta_k|, \]
where $p_\lambda'$ is the derivative of the penalty function, for example
\[ p_\lambda'(u) = \lambda\Big\{1_{\{u \le \lambda\}} + \frac{(a\lambda - u)_+}{(a-1)\lambda}1_{\{u > \lambda\}}\Big\}, \]
where $a$ is an additional parameter typically set at 3.7 (this can be motivated by a Bayesian argument).
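The penalty derivative displayed above is simple to code; a small sketch (function name my own):

```python
def scad_deriv(u, lam, a=3.7):
    """Derivative p'_lambda(u) of the penalty above, for u >= 0:
    equal to lam for u <= lam, then (a*lam - u)_+/(a - 1) beyond."""
    if u <= lam:
        return lam
    return max(a * lam - u, 0.0) / (a - 1)

# small |beta|: Lasso-like slope lam; |beta| >= a*lam: no shrinkage at all
vals = [scad_deriv(u, lam=1.0) for u in (0.5, 1.0, 2.0, 3.7, 5.0)]
```

The derivative is constant for small arguments (so small coefficients are penalised as with the Lasso) and decays to zero, so large coefficients are left essentially unshrunk.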
Chapter 3
Graphical modelling and causal
inference
So far we have considered methods for relating a particular response to a large collection
of explanatory variables, and we have been primarily interested in predicting the response
given the covariates.
In some settings however, we do not have a distinguished response variable and instead
we would like to better understand relationships between all the variables. In other situations, rather than being able to predict variables, we would like to understand causal
relationships between them. Representing relationships between random variables through
graphs will be an important tool in tackling these problems.
3.1
Graphs
[Diagram: a directed graph on the vertices Z1, Z2, Z3, Z4.]
If instead we have E = {(1, 2), (2, 1), (2, 4), (4, 2)} we get the undirected graph
[Diagram: the undirected graph on Z1, Z2, Z3, Z4 with edges between Z1 and Z2, and between Z2 and Z4.]
Proposition 16. Given a DAG $G$ with $V = \{1, \ldots, p\}$, we say that a permutation $\pi$ of $V$ is a topological (or causal) ordering of the variables if it satisfies
\[ \pi(j) < \pi(k) \text{ whenever } k \in \mathrm{de}(j). \]
3.2
Conditional independence graphs
We would like to understand which variables may be related to each other. Trying to find
pairs of variables that are independent and so unlikely to be related to each other is not
necessarily a good way to proceed as each variable may be correlated with a large number
of variables without being directly related to them. A better approach is to use conditional
independence.
Definition 3. If $X$, $Y$ and $Z$ are random vectors with a joint density $f_{XYZ}$ (w.r.t. a product measure $\mu$) then we say $X$ is conditionally independent of $Y$ given $Z$, and write
\[ X \perp\!\!\!\perp Y \mid Z, \]
if
\[ f_{XY|Z}(x, y \mid z) = f_{X|Z}(x \mid z)f_{Y|Z}(y \mid z). \]
Equivalently,
\[ X \perp\!\!\!\perp Y \mid Z \iff f_{X|YZ}(x \mid y, z) = f_{X|Z}(x \mid z). \]
We will first look at how undirected graphs can be used to visualise conditional independencies between random variables; thus in the next few subsections, by graph we will mean undirected graph.
Let $Z = (Z_1, \ldots, Z_p)^T$ be a collection of random variables with joint law $P$ and consider a graph $G = (V, E)$ where $V = \{1, \ldots, p\}$. Some notation: let $-k$ and $-jk$, when in subscripts, denote the sets $\{1, \ldots, p\} \setminus \{k\}$ and $\{1, \ldots, p\} \setminus \{j, k\}$ respectively.
Definition 4. We say that $P$ satisfies the pairwise Markov property w.r.t. $G$ if for any pair $j, k \in V$ with $j \neq k$ and $\{j, k\} \notin E$,
\[ Z_j \perp\!\!\!\perp Z_k \mid Z_{-jk}. \]
Note that the complete graph that has edges between every pair of vertices will satisfy
the pairwise Markov property for any P . The minimal graph satisfying the pairwise Markov
property w.r.t. a given P is called the conditional independence graph (CIG) for P .
Definition 5. We say $P$ satisfies the global Markov property w.r.t. $G$ if for any triple $(A, B, S)$ of disjoint subsets of $V$ such that $S$ separates $A$ from $B$, we have
\[ Z_A \perp\!\!\!\perp Z_B \mid Z_S. \]
Proposition 17. If $P$ has a positive density (w.r.t. some product measure), then it satisfies the pairwise Markov property w.r.t. a graph $G$ if and only if it satisfies the global Markov property w.r.t. $G$.
3.3 Gaussian graphical models
Estimating the CIG given samples from P is a difficult task in general. However, in the case where P is multivariate Gaussian, things simplify considerably, as we shall see. We begin with some notation. For a matrix $M \in \mathbb{R}^{p \times p}$ and sets $A, B \subseteq \{1, \dots, p\}$, let $M_{A,B}$ be the $|A| \times |B|$ submatrix of M consisting of those rows and columns of M indexed by the sets A and B respectively. The submatrix extraction operation is always performed first (so e.g. $M_{-k,k}^T = (M_{-k,k})^T$).
3.3.1
Normal conditionals
Now let $Z \sim N_p(\mu, \Sigma)$ with $\Sigma$ positive definite. Note $\Sigma_{A,A}$ is also positive definite for any A.

Proposition 18.
\[ Z_A \mid Z_B = z_B \sim N_{|A|}\bigl(\mu_A + \Sigma_{A,B}\Sigma_{B,B}^{-1}(z_B - \mu_B),\; \Sigma_{A,A} - \Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,A}\bigr). \]

Proof. Let $M = \Sigma_{A,B}\Sigma_{B,B}^{-1}$. Then $Z_A - MZ_B$ and $Z_B$ are jointly normal with
\[ \mathrm{Cov}(Z_A - MZ_B, Z_B) = \Sigma_{A,B} - \Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,B} = 0, \]
so they are independent, and
\[ \mathrm{Var}(Z_A - MZ_B) = \Sigma_{A,A} - \Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,A}. \]
Since $MZ_B$ is a function of $Z_B$ and $Z_A - MZ_B$ is normally distributed, we have the result.
3.3.2
Nodewise regression
Specialising to the case where $A = \{k\}$ and $B = \{k\}^c$, we see that when conditioning on $Z_{-k} = z_{-k}$, we may write
\[ Z_k = m_k + z_{-k}^T \Sigma_{-k,-k}^{-1}\Sigma_{-k,k} + \varepsilon_k, \]
where
\[ m_k = \mu_k - \Sigma_{k,-k}\Sigma_{-k,-k}^{-1}\mu_{-k}, \qquad \varepsilon_k \mid Z_{-k} = z_{-k} \sim N\bigl(0,\; \Sigma_{k,k} - \Sigma_{k,-k}\Sigma_{-k,-k}^{-1}\Sigma_{-k,k}\bigr). \]
Note that if the jth element of the vector of coefficients $\Sigma_{-k,-k}^{-1}\Sigma_{-k,k}$ is zero, then the distribution of $Z_k$ conditional on $Z_{-k}$ will not depend at all on the jth component of $Z_{-k}$. Then if that jth component was $Z_{j'}$, we would have that $Z_k \mid Z_{-k} = z_{-k}$ has the same distribution as $Z_k \mid Z_{-j'k} = z_{-j'k}$, so $Z_k \perp\!\!\!\perp Z_{j'} \mid Z_{-j'k}$.
Thus given $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} Z$ and writing
\[ X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix}, \]
we may estimate the coefficient vector $\Sigma_{-k,-k}^{-1}\Sigma_{-k,k}$ by regressing $X_k$ on $X_{\{k\}^c}$ and including an intercept term.
The technique of nodewise regression (Meinshausen & Bühlmann, 2006) involves performing such a regression for each variable, using the Lasso. There are two options for populating our estimate of the CIG with edges based on the Lasso estimates. Writing $\hat S_k$ for the selected set of variables when regressing $X_k$ on $X_{\{k\}^c}$, we can use the OR rule and put an edge between vertices j and k if and only if $k \in \hat S_j$ or $j \in \hat S_k$. An alternative is the AND rule, where we put an edge between j and k if and only if $k \in \hat S_j$ and $j \in \hat S_k$.
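The nodewise regressions and the two combination rules can be sketched as follows, using scikit-learn's Lasso with a fixed penalty `lam` (in practice one would choose it by cross-validation). The function names and the toy chain example are my own illustration, not from the notes.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_select(X, lam):
    """For each k, Lasso-regress X[:, k] on the other columns and
    record the selected (non-zero coefficient) variables."""
    n, p = X.shape
    S = []
    for k in range(p):
        others = [j for j in range(p) if j != k]
        fit = Lasso(alpha=lam, fit_intercept=True).fit(X[:, others], X[:, k])
        S.append({others[j] for j in np.flatnonzero(fit.coef_)})
    return S

def cig_edges(S, rule="OR"):
    """Combine the selected sets into an edge set via the OR or AND rule."""
    p = len(S)
    edges = set()
    for j in range(p):
        for k in range(j + 1, p):
            in_j, in_k = k in S[j], j in S[k]
            if (rule == "OR" and (in_j or in_k)) or \
               (rule == "AND" and in_j and in_k):
                edges.add((j, k))
    return edges

# toy chain Z0 - Z1 - Z2: Z0 and Z2 are related only through Z1
rng = np.random.default_rng(0)
Z0 = rng.normal(size=500)
Z1 = Z0 + rng.normal(size=500)
Z2 = Z1 + rng.normal(size=500)
X = np.column_stack([Z0, Z1, Z2])
edges = cig_edges(nodewise_select(X, lam=0.05), rule="AND")
```

With either rule the chain edges (0, 1) and (1, 2) should be recovered here; whether the spurious edge (0, 2) is excluded depends on the penalty level.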
Another popular approach to estimating the CIG works by first directly estimating the precision matrix $\Omega = \Sigma^{-1}$, as we'll now see.
3.3.3 Schur complements and the precision matrix
The following facts about blockwise inversion of matrices will help us to interpret the mean
and variance in Proposition 18.
Proposition 19. Let $M \in \mathbb{R}^{p \times p}$ be a symmetric positive definite matrix and suppose
\[ M = \begin{pmatrix} P & Q \\ Q^T & R \end{pmatrix} \]
with P and R square matrices. The Schur complement of R is $P - QR^{-1}Q^T =: S$. We have that S is positive definite and
\[ M^{-1} = \begin{pmatrix} S^{-1} & -S^{-1}QR^{-1} \\ -R^{-1}Q^T S^{-1} & R^{-1} + R^{-1}Q^T S^{-1} Q R^{-1} \end{pmatrix}. \]
Furthermore, $\det(M) = \det(S)\det(R)$.
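The identities in Proposition 19 are easy to check numerically. The following sketch (my own verification, not part of the notes) builds a random positive definite matrix and compares the blockwise formula with a direct inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
M = A @ A.T + 5 * np.eye(5)          # symmetric positive definite
P, Q, R = M[:2, :2], M[:2, 2:], M[2:, 2:]

Rinv = np.linalg.inv(R)
S = P - Q @ Rinv @ Q.T               # Schur complement of R
Sinv = np.linalg.inv(S)

# assemble M^{-1} from the blockwise formula of Proposition 19
top = np.hstack([Sinv, -Sinv @ Q @ Rinv])
bottom = np.hstack([-Rinv @ Q.T @ Sinv,
                    Rinv + Rinv @ Q.T @ Sinv @ Q @ Rinv])
Minv_blocks = np.vstack([top, bottom])
```

Both the inverse formula and the determinant identity det(M) = det(S)det(R) hold up to floating-point error.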
Let $\Omega = \Sigma^{-1}$ be the precision matrix. We see that $\mathrm{Var}(Z_A \mid Z_{A^c}) = \Omega_{A,A}^{-1}$. Moreover, considering the case when $A = \{j, k\}$, we have
\[ \mathrm{Var}(Z_{\{j,k\}} \mid Z_{-jk}) = \frac{1}{\det(\Omega_{A,A})} \begin{pmatrix} \Omega_{kk} & -\Omega_{jk} \\ -\Omega_{jk} & \Omega_{jj} \end{pmatrix}. \]
Thus
\[ Z_k \perp\!\!\!\perp Z_j \mid Z_{-jk} \iff \Omega_{jk} = 0. \]
This motivates another approach to estimating the CIG.
3.3.4 The graphical Lasso
Recall that the density of $N_p(\mu, \Sigma)$ is
\[ f(z) = \frac{\det(\Omega)^{1/2}}{(2\pi)^{p/2}} \exp\Bigl\{ -\frac{1}{2}(z - \mu)^T \Omega (z - \mu) \Bigr\}. \]
The log-likelihood of $(\mu, \Omega)$ based on $x_1, \dots, x_n$ is therefore (up to an additive constant)
\[ \ell(\mu, \Omega) = \frac{n}{2}\log\det(\Omega) - \frac{1}{2}\sum_{i=1}^n (x_i - \mu)^T \Omega (x_i - \mu). \]
Write
\[ \bar X = \frac{1}{n}\sum_{i=1}^n x_i, \qquad S = \frac{1}{n}\sum_{i=1}^n (x_i - \bar X)(x_i - \bar X)^T. \]
Then
\[
\sum_{i=1}^n (x_i - \mu)^T \Omega (x_i - \mu) = \sum_{i=1}^n (x_i - \bar X + \bar X - \mu)^T \Omega (x_i - \bar X + \bar X - \mu)
\]
\[
= \sum_{i=1}^n (x_i - \bar X)^T \Omega (x_i - \bar X) + n(\bar X - \mu)^T \Omega (\bar X - \mu) + 2\sum_{i=1}^n (x_i - \bar X)^T \Omega (\bar X - \mu),
\]
where the final cross term vanishes since $\sum_{i=1}^n (x_i - \bar X) = 0$.
Also,
\[
\sum_{i=1}^n (x_i - \bar X)^T \Omega (x_i - \bar X) = \sum_{i=1}^n \operatorname{tr}\bigl\{(x_i - \bar X)^T \Omega (x_i - \bar X)\bigr\} = \sum_{i=1}^n \operatorname{tr}\bigl\{\Omega (x_i - \bar X)(x_i - \bar X)^T\bigr\} = n \operatorname{tr}(\Omega S).
\]
Thus
\[ \ell(\mu, \Omega) = -\frac{n}{2}\bigl\{ \operatorname{tr}(\Omega S) - \log\det(\Omega) + (\bar X - \mu)^T \Omega (\bar X - \mu) \bigr\} \]
and
\[ \max_{\mu \in \mathbb{R}^p} \ell(\mu, \Omega) = -\frac{n}{2}\bigl\{ \operatorname{tr}(\Omega S) - \log\det(\Omega) \bigr\}. \]
Hence the maximum likelihood estimate of $\Omega$, $\hat\Omega^{\mathrm{ML}}$, can be obtained by solving
\[ \min_{\Omega : \Omega \succ 0} \bigl\{ -\log\det(\Omega) + \operatorname{tr}(\Omega S) \bigr\}, \]
where $\Omega \succ 0$ means $\Omega$ is positive definite. One can show that the objective is convex and we are minimising over a convex set. When $p > n$, however, S is singular and the minimum is not attained; and even when it is attained, the estimate will typically contain no exact zeros. The graphical Lasso therefore adds an $\ell_1$ penalty and solves
\[ \min_{\Omega : \Omega \succ 0} \bigl\{ -\log\det(\Omega) + \operatorname{tr}(\Omega S) + \lambda \|\Omega\|_1 \bigr\}, \]
where $\|\Omega\|_1 = \sum_{j,k} |\Omega_{jk}|$; this results in a sparse estimate of the precision matrix from which an estimate of the CIG can be constructed.
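A sketch of this approach using scikit-learn's GraphicalLasso (one of several available implementations; the simulation design and the zero-threshold below are my own choices). We simulate from a Gaussian whose precision matrix is tridiagonal, so the true CIG is a chain, and read edges off the non-zero entries of the estimated precision matrix.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# tridiagonal precision matrix: the CIG is the chain 0 - 1 - 2 - 3 - 4
p = 5
Omega = 2.0 * np.eye(p)
for j in range(p - 1):
    Omega[j, j + 1] = Omega[j + 1, j] = 0.8
Sigma = np.linalg.inv(Omega)

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=2000)

# penalised likelihood fit; alpha is the l1 penalty (a tuning parameter)
fit = GraphicalLasso(alpha=0.05).fit(X)
Omega_hat = fit.precision_

# construct the estimated CIG from the sparsity pattern of Omega_hat
edges = {(j, k) for j in range(p) for k in range(j + 1, p)
         if abs(Omega_hat[j, k]) > 1e-8}
```

With this much data and such strong partial correlations, the chain edges should all be recovered; the penalty level governs how many spurious edges appear.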
3.4 Structural equation models
Example 3.4.1. Consider the following (totally artificial) SEM, which has whether you are taking this course ($Z_1 = 1$) depending on whether you went to the statistics catch-up lecture ($Z_2 = 1$) and whether you have heard about machine learning ($Z_3 = 1$). Suppose
\[ Z_3 = \varepsilon_3 \sim \mathrm{Bern}(0.25), \]
\[ Z_2 = \mathbb{1}_{\{0.5\varepsilon_2(1 + Z_3) > 0.25\}}, \qquad \varepsilon_2 \sim U[0, 1], \]
\[ Z_1 = \mathbb{1}_{\{0.5\varepsilon_1(Z_2 + Z_3) > 0.25\}}, \qquad \varepsilon_1 \sim U[0, 1]. \]
[Figure: the associated DAG, with edges $Z_3 \to Z_2$, $Z_3 \to Z_1$ and $Z_2 \to Z_1$.]
Note that an SEM for Z determines its law. Indeed, using a topological ordering $\pi$ for the associated DAG, we can write each $Z_k$ as a function of $\varepsilon_{\pi^{-1}(1)}, \varepsilon_{\pi^{-1}(2)}, \dots, \varepsilon_{\pi^{-1}(\pi(k))}$. Importantly, though, we can use it to tell us much more than simply the law of Z: for example, we can query properties of the distribution of Z after having set a particular component to any given value. This is what we study next.
3.5 Interventions

Given an SEM S, we can replace one (or more) of the structural equations by a new structural equation; for example, for a chosen variable k we could replace the structural equation $Z_k = h_k(Z_{\mathcal{P}_k}, \varepsilon_k)$ by $Z_k = \tilde h_k(Z_{\tilde{\mathcal{P}}_k}, \tilde\varepsilon_k)$. This gives us a new SEM, and hence a new joint law for Z. When the new structural equation simply sets $Z_k$ to a fixed value a, we speak of the intervention $\mathrm{do}(Z_k = a)$, and write probabilities under the new law as $P(\cdot \mid \mathrm{do}(Z_k = a))$.

Consider intervening with $\mathrm{do}(Z_2 = 1)$ in the SEM of Example 3.4.1. Then $Z_1 = \mathbb{1}_{\{0.5\varepsilon_1(1 + Z_3) > 0.25\}}$ with $Z_3$ unchanged. Thus
\[ P(Z_1 = 1 \mid \mathrm{do}(Z_2 = 1)) = \frac{1}{4}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{1}{2} = \frac{9}{16}. \]
On the other hand,
\[
P(Z_1 = 1 \mid Z_2 = 1) = \sum_{j \in \{0,1\}} P(Z_1 = 1 \mid Z_2 = 1, Z_3 = j)\, P(Z_3 = j \mid Z_2 = 1)
\]
\[
= \frac{1}{P(Z_2 = 1)} \sum_{j \in \{0,1\}} P(Z_1 = 1 \mid Z_2 = 1, Z_3 = j)\, P(Z_2 = 1 \mid Z_3 = j)\, P(Z_3 = j)
\]
\[
= \frac{16}{9}\Bigl( \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{3}{4}\cdot\frac{1}{4} \Bigr) = \frac{16}{9}\cdot\frac{21}{64} = \frac{7}{12} \neq \frac{9}{16},
\]
using $P(Z_2 = 1) = \frac{3}{4}\cdot\frac{1}{2} + \frac{1}{4}\cdot\frac{3}{4} = \frac{9}{16}$. Conditioning on $Z_2 = 1$ and intervening to set $Z_2 = 1$ thus give genuinely different answers.
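The difference between conditioning and intervening can be checked by simulation. The sketch below is my own illustration (the function name `simulate` and the sample size are arbitrary choices): it samples from the SEM of Example 3.4.1, once as given and once with the structural equation for $Z_2$ replaced by $Z_2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def simulate(do_Z2=None):
    """Sample n draws from the SEM of Example 3.4.1,
    optionally intervening on Z2."""
    Z3 = (rng.uniform(size=n) < 0.25).astype(float)   # Z3 ~ Bern(0.25)
    eps2 = rng.uniform(size=n)
    Z2 = (0.5 * eps2 * (1 + Z3) > 0.25).astype(float)
    if do_Z2 is not None:
        Z2 = np.full(n, float(do_Z2))                 # replace Z2's equation
    eps1 = rng.uniform(size=n)
    Z1 = (0.5 * eps1 * (Z2 + Z3) > 0.25).astype(float)
    return Z1, Z2, Z3

# interventional probability: P(Z1 = 1 | do(Z2 = 1)) = 9/16
Z1, _, _ = simulate(do_Z2=1)
p_do = Z1.mean()

# observational probability: P(Z1 = 1 | Z2 = 1) = 7/12
Z1, Z2, _ = simulate()
p_cond = Z1[Z2 == 1].mean()
```

The two Monte Carlo estimates should come out close to 9/16 = 0.5625 and 7/12 ≈ 0.5833 respectively.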
3.6 The Markov properties on DAGs
The DAG of an SEM can encode a number of conditional independencies present in the
law of the random vector Z. To understand this, we first introduce Markov properties on
DAGs similar to the Markov properties on undirected graphs we have already studied.
Let P be the joint law of Z and suppose it has a density f .
Definition 7. Given a DAG G, we say P satisfies the

(i) Markov factorisation property w.r.t. the DAG G if
\[ f(z_1, \dots, z_p) = \prod_{k=1}^p f(z_k \mid z_{\mathrm{pa}(k)}); \]

(ii) global Markov property w.r.t. the DAG G if for all disjoint A, B, S ⊆ {1, . . . , p},
\[ A, B \text{ d-separated by } S \implies Z_A \perp\!\!\!\perp Z_B \mid Z_S. \]
Theorem 20. If P has a density f (with respect to a product measure), then all Markov properties in Definition 7 are equivalent.
In view of this, we will henceforth use the term Markov to mean global Markov.
Proposition 21. Let P be the law of an SEM with DAG G. Then P obeys the Markov
factorisation property w.r.t. G.
Thus we can read off from the DAG of an SEM a great deal of information concerning the
distribution it generates. We can use this to help us calculate the effects of interventions.
We have now seen how an SEM can be used not only to query properties of the joint distribution, but also to determine the effects of certain perturbations to the system. In many settings, we may not have a pre-specified SEM to work with, but instead we'd like to learn the DAG from observational data. This is the problem we turn to next.
3.7 Causal structure learning
Given a sample of observations from P, we would like to determine the DAG which generated it. We can think of this task in terms of two subtasks: firstly, we need to understand how to extract information concerning P from a sample, which is a traditional statistical question of the sort we are used to; secondly, given P itself, we need to relate this to the DAG which generated it. The latter problem is unique to causal inference, and we discuss this first.
3.7.1
Three obstacles
There are three obstacles to causal structure learning. The first two are more immediate
but the last is somewhat subtle.
Causal minimality
We know that if P is generated by an SEM with DAG G, then P will be Markov w.r.t. G. Conversely, one can show that if P is Markov w.r.t. a DAG G, then there is also an SEM with DAG G that could have generated P. But P will be Markov w.r.t. a great number of DAGs, e.g. Z1 and Z2 being independent can be represented by
\[ Z_1 = 0 \cdot Z_2 + \varepsilon_1 = \varepsilon_1, \qquad Z_2 = \varepsilon_2, \]
whose DAG $Z_2 \to Z_1$ contains an edge that plays no role. The causal minimality condition rules out such redundant representations: it requires that P be Markov w.r.t. G but not w.r.t. any proper subgraph of G.

A more subtle issue is illustrated by the SEM
\[ Z_1 = \varepsilon_1, \qquad Z_2 = \alpha Z_1 + \varepsilon_2, \qquad Z_3 = \beta Z_2 + \gamma Z_1 + \varepsilon_3, \]
with $\varepsilon_1, \varepsilon_2, \varepsilon_3$ i.i.d. $N(0, 1)$, for which Z has covariance matrix
\[ \Sigma = \begin{pmatrix}
1 & \alpha & \alpha\beta + \gamma \\
\alpha & \alpha^2 + 1 & \alpha(\alpha\beta + \gamma) + \beta \\
\alpha\beta + \gamma & \alpha(\alpha\beta + \gamma) + \beta & \beta^2(\alpha^2 + 1) + \gamma^2 + 2\alpha\beta\gamma + 1
\end{pmatrix}. \]
If $\alpha\beta + \gamma = 0$, e.g. if $\alpha = 1$, $\beta = 1$, $\gamma = -1$, then $Z_1 \perp\!\!\!\perp Z_3$; write P′ for the law of Z in this case. We claim that P′ can also be generated by the SEM
\[ Z_1 = \tilde\varepsilon_1, \qquad Z_2 = Z_1 + \tilde\beta Z_3 + \tilde\varepsilon_2, \qquad Z_3 = \tilde\varepsilon_3. \]
Here the $\tilde\varepsilon_j$ are independent with $\tilde\varepsilon_1 \sim N(0, 1)$, $\tilde\varepsilon_3 \sim N(0, 2)$, $\tilde\beta = 1/2$ and $\tilde\varepsilon_2 \sim N(0, 1/2)$.

Writing the DAGs for the two SEMs above as G and $\tilde G$, note that P′ satisfies causal minimality w.r.t. both, so causal minimality alone cannot identify which DAG generated P′.
Definition 10. We say P is faithful to the DAG G if it is Markov w.r.t. G and for all disjoint A, B, S ⊆ {1, . . . , p},
\[ A, B \text{ d-separated by } S \iff Z_A \perp\!\!\!\perp Z_B \mid Z_S. \]
Faithfulness demands that all conditional independencies in P are represented in the DAG G.
3.7.2
The PC algorithm
Proposition 23. If nodes j and k in a DAG G are adjacent, then no set can d-separate them. If they are not adjacent and $\pi$ is a topological order with $\pi(j) < \pi(k)$, then they are d-separated by $\mathrm{pa}(k)$.

Proof. Consider a path $j = j_1, j_2, \dots, j_m = k$. We may assume we don't have $j_{m-1} \to k$, as otherwise the path would be blocked, since $j_{m-1} \in \mathrm{pa}(k)$. Let $l$ be the largest $l'$ with $j_{l'-1} \to j_{l'} \leftarrow j_{l'+1}$ (a collider); such an $l$ must exist, as otherwise we would have a directed path from k to j, contradicting the topological ordering. In order for the path to be active, $j_l$ must have a descendant in $\mathrm{pa}(k)$, but this would introduce a cycle.
This shows in particular that any non-adjacent nodes must have a d-separating set. If we assume that P is faithful w.r.t. a DAG G, we can check whether nodes j and k are adjacent in G by testing whether there is a set S with $Z_j \perp\!\!\!\perp Z_k \mid Z_S$. If there is no such set S, j and k must be adjacent. This allows us to recover the skeleton of G.
Proposition 24. Suppose we have a triple of nodes j, k, l in a DAG, and the only non-adjacent pair is j, k (i.e. in the skeleton, j − l − k).

(i) If the nodes are in a v-structure ($j \to l \leftarrow k$), then no S that d-separates j and k can contain l.

(ii) If there exists an S that d-separates j and k with $l \notin S$, then we must have $j \to l \leftarrow k$.

Proof. For (i), note that any set containing l cannot block the path j, l, k. For (ii), note we know that the path j, l, k is blocked by S; since $l \notin S$, this is only possible if l is a collider on the path, i.e. $j \to l \leftarrow k$.
This last result then allows us to find the v-structures, given the skeleton and a d-separating set S(j, k) corresponding to each absent edge. Given a skeleton and v-structures, it may be possible to orient further edges by making use of the acyclicity of DAGs; we do not cover this here.
Sample version
The sample version of the PC algorithm replaces the querying of conditional independence with a conditional independence test applied to data $x_1, \dots, x_n$. The level $\alpha$ of the test will be a tuning parameter of the method. If the data are assumed to be multivariate normal, the (sample) partial correlation can be used to test conditional independence, since if $Z_j \perp\!\!\!\perp Z_k \mid Z_S$ then
\[ \mathrm{Corr}(Z_j, Z_k \mid Z_S) =: \rho_{jk \cdot S} = 0. \]
To compute the sample partial correlation, we regress $X_j$ and $X_k$ on $X_S$ and compute the correlation between the resulting residuals.
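A minimal sketch of this residual-based computation (my own illustration; in practice one would then apply, e.g., a Fisher z-test to the resulting partial correlation). In the toy chain below, $Z_0$ and $Z_2$ are strongly correlated marginally, but their partial correlation given $Z_1$ is zero.

```python
import numpy as np

def partial_corr(X, j, k, S):
    """Sample partial correlation of columns j and k given the columns
    in S: regress each on X[:, S] (with an intercept) and correlate
    the residuals."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n)] + [X[:, s] for s in S])
    res_j = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
    res_k = X[:, k] - Z @ np.linalg.lstsq(Z, X[:, k], rcond=None)[0]
    return np.corrcoef(res_j, res_k)[0, 1]

# chain Z0 -> Z1 -> Z2: marginal Corr(Z0, Z2) = 1/sqrt(3), rho_{02.1} = 0
rng = np.random.default_rng(4)
Z0 = rng.normal(size=5000)
Z1 = Z0 + rng.normal(size=5000)
Z2 = Z1 + rng.normal(size=5000)
X = np.column_stack([Z0, Z1, Z2])

marginal = np.corrcoef(X[:, 0], X[:, 2])[0, 1]
partial = partial_corr(X, 0, 2, S=[1])
```

The marginal correlation is substantial while the sample partial correlation is close to zero, consistent with $Z_0 \perp\!\!\!\perp Z_2 \mid Z_1$.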
Chapter 4
Multiple testing
In many modern applications, we may be interested in testing many hypotheses simultaneously. Suppose we are interested in testing null hypotheses $H_1, \dots, H_m$, of which $m_0$ are true and $m - m_0$ are not (we do not mention the alternative hypotheses explicitly). Consider the following contingency table:

             Claimed non-significant   Claimed significant   Total
True null              N00                     N01            m0
False null             N10                     N11            m − m0
Total                  m − R                   R               m

Here R denotes the total number of rejections and N01 the number of false rejections.
A classical quantity to control is the family-wise error rate (FWER), $\mathrm{FWER} = P(N_{01} \ge 1)$, the probability of making at least one false rejection. The Bonferroni procedure rejects $H_i$ whenever $p_i \le \alpha/m$. Writing $I_0$ for the set of indices of the true nulls, a union bound gives
\[ \mathrm{FWER} = P\Bigl( \bigcup_{i \in I_0} \{ p_i \le \alpha/m \} \Bigr) \le \sum_{i \in I_0} P(p_i \le \alpha/m) \le \frac{m_0 \alpha}{m} \le \alpha. \]
4.1 The closed testing procedure
38
Theorem 26. The closed testing procedure makes no false rejections with probability at least $1 - \alpha$. In particular, it controls the FWER at level $\alpha$.

Proof. Assume $I_0$ is not empty (as otherwise no rejection can be false anyway). Define the events
\[ A = \{\text{at least one false rejection}\} = \{N_{01} \ge 1\}, \qquad B = \{\text{reject } H_{I_0} \text{ with the local test}\} = \{\phi_{I_0} = 1\}. \]
In order for there to be a false rejection, we must have rejected $H_{I_0}$ with the local test. Thus $A \subseteq B$, so
\[ \mathrm{FWER} = P(A) = P(A \cap B) = P(B)\,P(A \mid B) \le P(\phi_{I_0} = 1) \le \alpha. \]
Different choices for the local tests give rise to different testing procedures. Holm's procedure takes $\phi_I$ to be the Bonferroni test, i.e.
\[ \phi_I = \begin{cases} 1 & \text{if } \min_{i \in I} p_i \le \alpha/|I|, \\ 0 & \text{otherwise.} \end{cases} \]
It can be shown (see example sheet) that Holm's procedure amounts to ordering the p-values $p_1, \dots, p_m$ as $p_{(1)} \le \cdots \le p_{(m)}$, with corresponding hypotheses $H_{(1)}, \dots, H_{(m)}$, so (i) is the index of the ith smallest p-value, and then performing the following.

Step 1. If $p_{(1)} \le \alpha/m$, reject $H_{(1)}$ and go to step 2. Otherwise accept $H_{(1)}, \dots, H_{(m)}$ and stop.

Step i. If $p_{(i)} \le \alpha/(m - i + 1)$, reject $H_{(i)}$ and go to step $i + 1$. Otherwise accept $H_{(i)}, \dots, H_{(m)}$ and stop.

Step m. If $p_{(m)} \le \alpha$, reject $H_{(m)}$. Otherwise accept $H_{(m)}$.

The p-values are visited in ascending order and hypotheses are rejected until the first time a p-value exceeds its critical value. This sort of approach is known (slightly confusingly) as a step-down procedure.
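The step-down procedure above can be sketched as follows (my own implementation; the example p-values are arbitrary).

```python
import numpy as np

def holm(p, alpha=0.05):
    """Holm's step-down procedure: returns a boolean array of rejections."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)                 # indices of p-values in ascending order
    reject = np.zeros(m, dtype=bool)
    for i, idx in enumerate(order):       # i = 0, ..., m-1
        # with 1-indexing this is the critical value alpha / (m - i + 1)
        if p[idx] <= alpha / (m - i):
            reject[idx] = True
        else:
            break                         # stop at the first failure
    return reject

pvals = [0.001, 0.01, 0.04, 0.2]
rej = holm(pvals, alpha=0.05)
```

Here the first two p-values clear their thresholds (0.05/4 and 0.05/3), but 0.04 > 0.05/2, so the procedure stops and rejects only the first two hypotheses.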
4.2 The false discovery rate
A different approach to multiple testing does not try to control the FWER, but instead attempts to control the false discovery rate (FDR), defined by
\[ \mathrm{FDR} = \mathbb{E}(\mathrm{FDP}), \qquad \mathrm{FDP} = \frac{N_{01}}{\max(R, 1)}, \]
where FDP is the false discovery proportion. Note the maximum in the denominator is to ensure division by zero does not occur. The FDR was introduced by Benjamini & Hochberg (1995), and it is now widely used across science, particularly in biostatistics.
The Benjamini–Hochberg procedure attempts to control the FDR at level $\alpha$ and works as follows. Let
\[ \hat k = \max\Bigl\{ i : p_{(i)} \le \frac{\alpha i}{m} \Bigr\}, \]
and reject $H_{(1)}, \dots, H_{(\hat k)}$, rejecting nothing if the maximum does not exist. Suppose the p-values corresponding to the true nulls are independent of each other and of the remaining p-values. For the argument below, for each $i \in I_0$ write $p^{\setminus i}$ for the vector of p-values with $p_i$ removed, and let
\[ \hat k_i = \max\Bigl\{ j : p^{\setminus i}_{(j)} \le \frac{\alpha (j + 1)}{m} \Bigr\}, \]
setting $R_i = \hat k_i$ (and $R_i = 0$ if the maximum does not exist). The key observation is that, for $r \ge 1$,
\[ \Bigl\{ p_i \le \frac{\alpha r}{m},\ R = r \Bigr\} = \Bigl\{ p_i \le \frac{\alpha r}{m},\ p^{\setminus i}_{(r-1)} \le \frac{\alpha r}{m},\ p^{\setminus i}_{(s-1)} > \frac{\alpha s}{m} \text{ for all } s > r \Bigr\} = \Bigl\{ p_i \le \frac{\alpha r}{m},\ R_i = r - 1 \Bigr\}. \]
Thus
\[
\begin{aligned}
\mathrm{FDR} &= \mathbb{E}\Bigl( \frac{N_{01}}{\max(R, 1)} \Bigr) = \sum_{r=1}^m \mathbb{E}\Bigl( \frac{N_{01}}{r} \mathbb{1}_{\{R = r\}} \Bigr) \\
&= \sum_{r=1}^m \frac{1}{r} \sum_{i \in I_0} \mathbb{E}\bigl( \mathbb{1}_{\{p_i \le \alpha r/m\}} \mathbb{1}_{\{R = r\}} \bigr) \\
&= \sum_{r=1}^m \frac{1}{r} \sum_{i \in I_0} P(p_i \le \alpha r/m,\ R = r) \\
&= \sum_{r=1}^m \frac{1}{r} \sum_{i \in I_0} P(p_i \le \alpha r/m)\, P(R_i = r - 1) \\
&\le \sum_{r=1}^m \frac{1}{r} \cdot \frac{\alpha r}{m} \sum_{i \in I_0} P(R_i = r - 1) \\
&= \frac{\alpha}{m} \sum_{i \in I_0} \sum_{r=1}^m P(R_i = r - 1) \le \frac{m_0 \alpha}{m} \le \alpha,
\end{aligned}
\]
using the independence assumption in the fourth equality, and $P(p_i \le \alpha r/m) \le \alpha r/m$ for true nulls in the inequality.
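The procedure itself can be sketched as follows (again my own implementation with arbitrary example p-values). Note that, unlike Holm's step-down procedure, BH rejects every hypothesis up to the largest order statistic lying under its critical line, even if intermediate p-values exceed theirs.

```python
import numpy as np

def benjamini_hochberg(p, alpha=0.05):
    """Benjamini-Hochberg: reject H_(1), ..., H_(k) where
    k = max{ i : p_(i) <= alpha * i / m } (no rejections if the set
    is empty). Returns a boolean array of rejections."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    sorted_p = p[order]
    below = np.flatnonzero(sorted_p <= alpha * np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.size > 0:
        k = below[-1] + 1                 # number of rejections, k-hat
        reject[order[:k]] = True          # reject the k smallest p-values
    return reject

pvals = [0.001, 0.02, 0.03, 0.04]
rej = benjamini_hochberg(pvals, alpha=0.05)
```

On these p-values the BH critical values are 0.0125, 0.025, 0.0375 and 0.05, so all four hypotheses are rejected, whereas Holm's procedure at the same level would reject only the first (since 0.02 > 0.05/3), illustrating the extra power bought by controlling FDR rather than FWER.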