
Modern Statistical Methods

Rajen D. Shah

r.shah@statslab.cam.ac.uk
Course webpage:

http://www.statslab.cam.ac.uk/~rds37/modern_stat_methods.html
In this course we will study a selection of important modern statistical methods. This
selection is heavily biased towards my own interests, but I hope it will nevertheless give you
a flavour of some of the most important recent methodological developments in statistics.
Over the last 25 years, the sorts of datasets that statisticians have been challenged to study have changed greatly. Where in the past we were used to datasets with many observations and a few carefully chosen variables, we are now seeing datasets where the number of variables can run into the thousands and greatly exceed the number of observations. For example, with microarray data, we typically have gene expression values measured for several thousands of genes, but only for a few hundred tissue samples. The classical statistical methods are often simply not applicable in these high-dimensional situations.
The course is divided into 4 chapters (of unequal size). The first chapter is on ridge
regression (an important generalisation of ordinary least squares) and the kernel trick (one
of the most important ideas in machine learning).
Chapter 2 is on the Lasso and its extensions. The Lasso has been at the centre of many of the developments that have occurred in high-dimensional statistics, and will allow us to
perform regression in the seemingly hopeless situation when the number of parameters we
are trying to estimate is larger than the number of observations.
In chapter 3 we will introduce graphical modelling. Where the previous two chapters
consider methods for relating a particular response to a large collection of (explanatory)
variables, graphical modelling will give us a way of understanding relationships between
the variables themselves.
Statistics is not only about developing methods that can predict well in the presence
of noise, but also about assessing the uncertainty in our predictions and estimates. In
the final chapter we will tackle the problem of how to handle performing thousands of
hypothesis tests at the same time.
Before we begin the course proper, we will briefly review two key classical statistical
methods: ordinary least squares and maximum likelihood estimation. This will help to set
the scene and provide a warm-up for the modern methods to come later.

Classical statistics
Ordinary least squares
Imagine data are available in the form of observations $(Y_i, x_i) \in \mathbb{R} \times \mathbb{R}^p$, $i = 1, \dots, n$, and the aim is to infer a simple regression function relating the average value of a response, $Y_i$, and a collection of predictors or variables, $x_i$. This is an example of regression analysis, one of the most important tasks in statistics.
A linear model for the data assumes that it is generated according to
\[
Y = X\beta^0 + \varepsilon, \tag{0.0.1}
\]
where $Y \in \mathbb{R}^n$ is the vector of responses; $X \in \mathbb{R}^{n \times p}$ is the predictor matrix (or design matrix) with $i$th row $x_i^T$; $\varepsilon \in \mathbb{R}^n$ represents random error; and $\beta^0 \in \mathbb{R}^p$ is the unknown vector of coefficients.
Provided $p \le n$, a sensible way to estimate $\beta^0$ is by ordinary least squares (OLS). This yields an estimator $\hat\beta^{OLS}$ with
\[
\hat\beta^{OLS} := \arg\min_{\beta \in \mathbb{R}^p} \|Y - X\beta\|_2^2 = (X^TX)^{-1}X^TY, \tag{0.0.2}
\]
provided $X$ has full column rank.
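As a quick illustration (an addition to these notes, with simulated data and made-up coefficients), a minimal numpy sketch of the OLS estimator; a least-squares solve avoids forming $(X^TX)^{-1}$ explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta0 = np.arange(1.0, p + 1)                 # a made-up true coefficient vector
Y = X @ beta0 + rng.normal(size=n)

# OLS estimate (X^T X)^{-1} X^T Y, computed via a least-squares solve
beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_ols)
```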


Under the assumptions that (i) $\mathbb{E}(\varepsilon_i) = 0$ and (ii) $\mathrm{Cov}(\varepsilon_i, \varepsilon_j) = \sigma^2 1_{\{i=j\}}$, we have that:
\[
\mathbb{E}_{\beta^0, \sigma^2}(\hat\beta^{OLS}) = \mathbb{E}\{(X^TX)^{-1}X^T(X\beta^0 + \varepsilon)\} = \beta^0,
\]
\[
\mathrm{Var}_{\beta^0, \sigma^2}(\hat\beta^{OLS}) = (X^TX)^{-1}X^T\mathrm{Var}(\varepsilon)X(X^TX)^{-1} = \sigma^2(X^TX)^{-1}.
\]

The Gauss-Markov theorem states that $\hat\beta^{OLS}$ is the best linear unbiased estimator in our setting: for any other unbiased estimator $\tilde\beta$ that is linear in $Y$ (so $\tilde\beta = AY$ for some fixed matrix $A$), we have that
\[
\mathrm{Var}_{\beta^0, \sigma^2}(\tilde\beta) - \mathrm{Var}_{\beta^0, \sigma^2}(\hat\beta^{OLS})
\]
is positive semi-definite.

Maximum likelihood estimation


The method of least squares is just one way to construct an estimator. A more general technique is that of maximum likelihood estimation. Here, given data $y \in \mathbb{R}^n$ that we take as a realisation of a random variable $Y$, we specify its density $f(y; \theta)$ up to some unknown vector of parameters $\theta \in \Theta \subseteq \mathbb{R}^d$, where $\Theta$ is the parameter space. The likelihood function is a function of $\theta$ for each fixed $y$ given by
\[
L(\theta) := L(\theta; y) = c(y) f(y; \theta),
\]
where $c(y)$ is an arbitrary constant of proportionality. The maximum likelihood estimate of $\theta$ maximises the likelihood, or equivalently it maximises the log-likelihood
\[
\ell(\theta) := \ell(\theta; y) = \log f(y; \theta) + \log(c(y)).
\]

A very useful quantity in the context of maximum likelihood estimation is the Fisher information matrix with $jk$th ($1 \le j, k \le d$) entry
\[
i_{jk}(\theta) := -\mathbb{E}_\theta\left\{\frac{\partial^2}{\partial\theta_j \partial\theta_k}\ell(\theta)\right\}.
\]
It can be thought of as a measure of how hard it is to estimate $\theta$ when it is the true parameter value. The Cramér-Rao lower bound states that if $\tilde\theta$ is an unbiased estimator of $\theta$, then under regularity conditions,
\[
\mathrm{Var}_\theta(\tilde\theta) - i^{-1}(\theta)
\]
is positive semi-definite.

A remarkable fact about maximum likelihood estimators (MLEs) is that (under quite general conditions) they are asymptotically normally distributed, asymptotically unbiased and asymptotically achieve the Cramér-Rao lower bound.

Assume that the Fisher information matrix when there are $n$ observations, $i^{(n)}(\theta)$ (where we have made the dependence on $n$ explicit), satisfies $i^{(n)}(\theta)/n \to I(\theta)$ for some positive definite matrix $I(\theta)$. Then, denoting the maximum likelihood estimator of $\theta$ when there are $n$ observations by $\hat\theta^{(n)}$, under regularity conditions, as the number of observations $n \to \infty$ we have
\[
\sqrt{n}(\hat\theta^{(n)} - \theta) \overset{d}{\to} N_d(0, I^{-1}(\theta)).
\]
Returning to our linear model, if we assume in addition that $\varepsilon_i \sim N(0, \sigma^2)$, then the log-likelihood for $(\beta, \sigma^2)$ is
\[
\ell(\beta, \sigma^2) = -\frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - x_i^T\beta)^2.
\]
We see that the maximum likelihood estimate of $\beta$ and $\hat\beta^{OLS}$ coincide. It is easy to check that
\[
i(\beta, \sigma^2) = \begin{pmatrix} \sigma^{-2}X^TX & 0 \\ 0 & n\sigma^{-4}/2 \end{pmatrix}.
\]
The general theory for MLEs would suggest that approximately $\sqrt{n}(\hat\beta - \beta) \sim N_p(0, n\sigma^2(X^TX)^{-1})$; in fact it is straightforward to show that this distributional result is exact.


Chapter 1
Ridge regression and the kernel trick
Let us revisit the linear model with
\[
Y_i = x_i^T\beta^0 + \varepsilon_i.
\]
For unbiased estimators of $\beta^0$, their variance gives a way of comparing their quality in terms of squared error loss. For a potentially biased estimator, $\tilde\beta$, the relevant quantity is
\[
\mathbb{E}_{\beta^0, \sigma^2}\{(\tilde\beta - \beta^0)(\tilde\beta - \beta^0)^T\}
= \mathbb{E}[\{\tilde\beta - \mathbb{E}(\tilde\beta) + \mathbb{E}(\tilde\beta) - \beta^0\}\{\tilde\beta - \mathbb{E}(\tilde\beta) + \mathbb{E}(\tilde\beta) - \beta^0\}^T]
= \mathrm{Var}(\tilde\beta) + \{\mathbb{E}(\tilde\beta - \beta^0)\}\{\mathbb{E}(\tilde\beta - \beta^0)\}^T,
\]
a sum of squared bias and variance terms. A crucial part of the optimality arguments for OLS and MLEs was unbiasedness. Do there exist biased methods whose variance is reduced compared to OLS such that their overall prediction error is lower? The emphatic answer is yes, and in fact the use of biased estimators is essential in dealing with the large $p$, small $n$-type settings that are becoming increasingly prevalent in modern data analysis. In the first two chapters we'll explore an extremely important method for variance reduction based on penalisation which can produce estimators that in many situations greatly outperform OLS and MLEs.

1.1 Ridge regression (Hoerl and Kennard, 1970)

One way to reduce the variance of $\hat\beta^{OLS}$ is to shrink the estimated coefficients towards 0. Ridge regression does this by solving the following optimisation problem:
\[
(\hat\mu^R_\lambda, \hat\beta^R_\lambda) = \arg\min_{(\mu, \beta) \in \mathbb{R}\times\mathbb{R}^p}\{\|Y - \mu 1 - X\beta\|_2^2 + \lambda\|\beta\|_2^2\}.
\]
Here $1$ is an $n$-vector of 1s. We see that the usual OLS objective is penalised by an additional term proportional to $\|\beta\|_2^2$. The parameter $\lambda \ge 0$, which controls the severity of the penalty and therefore the degree of the shrinkage towards 0, is known as a regularisation parameter or tuning parameter. We have explicitly included an intercept term $\mu$ which is not penalised. The reason for this is that if it were omitted, the estimator would then not be location equivariant. However, the estimator is not invariant under scale transformations of the columns of $X$, so it is standard practice to centre each column of $X$ (hence making them orthogonal to the intercept term) and then scale them to have $\ell_2$-norm $\sqrt{n}$.
It is straightforward to show that after this standardisation of $X$, $\hat\mu^R_\lambda = \bar Y := \sum_{i=1}^n Y_i/n$, so we may assume that $\sum_{i=1}^n Y_i = 0$ by replacing $Y_i$ by $Y_i - \bar Y$, and then we can remove $\mu$ from our objective function. In this case
\[
\hat\beta^R_\lambda = (X^TX + \lambda I)^{-1}X^TY.
\]
In this form, we can see how the addition of the $\lambda I$ term helps to stabilise the estimator. Note that when $X$ does not have full column rank (such as in high-dimensional situations), we can still compute this estimator. On the other hand, when $X$ does have full column rank, we have the following theorem.
Theorem 1. For $\lambda > 0$ sufficiently small (depending on $\beta^0$ and $\sigma^2$),
\[
\mathbb{E}(\hat\beta^{OLS} - \beta^0)(\hat\beta^{OLS} - \beta^0)^T - \mathbb{E}(\hat\beta^R_\lambda - \beta^0)(\hat\beta^R_\lambda - \beta^0)^T
\]
is positive definite.

Proof. First we compute the bias of $\hat\beta^R_\lambda$. We drop the subscript $\lambda$ and superscript $R$ for convenience.
\[
\mathbb{E}(\hat\beta) - \beta^0 = (X^TX + \lambda I)^{-1}X^TX\beta^0 - \beta^0
= (X^TX + \lambda I)^{-1}(X^TX + \lambda I - \lambda I)\beta^0 - \beta^0
= -\lambda(X^TX + \lambda I)^{-1}\beta^0.
\]
Now we look at the variance of $\hat\beta$:
\[
\mathrm{Var}(\hat\beta) = \mathbb{E}\{(X^TX + \lambda I)^{-1}X^T\varepsilon\}\{(X^TX + \lambda I)^{-1}X^T\varepsilon\}^T
= \sigma^2(X^TX + \lambda I)^{-1}X^TX(X^TX + \lambda I)^{-1}.
\]
Thus $\mathbb{E}(\hat\beta^{OLS} - \beta^0)(\hat\beta^{OLS} - \beta^0)^T - \mathbb{E}(\hat\beta - \beta^0)(\hat\beta - \beta^0)^T$ is equal to
\[
\sigma^2(X^TX)^{-1} - \sigma^2(X^TX + \lambda I)^{-1}X^TX(X^TX + \lambda I)^{-1} - \lambda^2(X^TX + \lambda I)^{-1}\beta^0\beta^{0T}(X^TX + \lambda I)^{-1}.
\]
After some simplification, we see that this is equal to
\[
\lambda(X^TX + \lambda I)^{-1}[\sigma^2\{2I + \lambda(X^TX)^{-1}\} - \lambda\beta^0\beta^{0T}](X^TX + \lambda I)^{-1}.
\]
Thus $\mathbb{E}(\hat\beta^{OLS} - \beta^0)(\hat\beta^{OLS} - \beta^0)^T - \mathbb{E}(\hat\beta - \beta^0)(\hat\beta - \beta^0)^T$ is positive definite for $\lambda > 0$ if and only if
\[
\sigma^2\{2I + \lambda(X^TX)^{-1}\} - \lambda\beta^0\beta^{0T}
\]
is positive definite, which is true for $\lambda > 0$ sufficiently small (we can take $0 < \lambda < 2\sigma^2/\|\beta^0\|_2^2$).

The theorem says that $\hat\beta^R_\lambda$ beats OLS provided $\lambda$ is chosen appropriately. To be able to use ridge regression effectively, we need a way of selecting a good $\lambda$; we will come to this very shortly. What the theorem doesn't really tell us is in what situations we expect ridge regression to perform well. To understand that, we will turn to one of the key matrix decompositions used in statistics, the singular value decomposition (SVD).

1.1.1 The singular value decomposition

The singular value decomposition (SVD) is a generalisation of an eigendecomposition of a square matrix. We can factorise any $X \in \mathbb{R}^{n\times p}$ into its SVD
\[
X = UDV^T,
\]
where the columns of $U \in \mathbb{R}^{n\times p}$ are orthonormal, the columns of $V \in \mathbb{R}^{p\times p}$ are orthonormal (so $V$ is an orthogonal matrix), and $D \in \mathbb{R}^{p\times p}$ is diagonal with $D_{11} \ge D_{22} \ge \cdots \ge D_{pp} \ge 0$. Note there are several other possibilities for the dimensions of $U$, $D$ and $V$ depending on the rank of $X$; see example sheet 1 for details.
Taking $X$ as our matrix of predictors, the fitted values from ridge regression are
\[
X\hat\beta^R_\lambda = X(X^TX + \lambda I)^{-1}X^TY
= UDV^T(VD^2V^T + \lambda I)^{-1}VDU^TY
= UD(D^2 + \lambda I)^{-1}DU^TY
= \sum_{j=1}^p U_j \frac{D_{jj}^2}{D_{jj}^2 + \lambda} U_j^TY,
\]
where we have used the notation (that we shall use throughout the course) that $U_j$ is the $j$th column of $U$. For comparison, the fitted values from OLS are
\[
X\hat\beta^{OLS} = X(X^TX)^{-1}X^TY = UU^TY.
\]
Both OLS and ridge regression compute the coordinates of $Y$ with respect to the columns of $U$. Ridge regression then shrinks these coordinates by the factors $D_{jj}^2/(D_{jj}^2 + \lambda)$; if $D_{jj}$ is small, the amount of shrinkage will be larger.
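The following short numpy check (an addition to the notes, with simulated data) verifies numerically that the ridge fitted values from the closed form agree with the SVD expression above, i.e. the coordinates $U_j^TY$ shrunk by $D_{jj}^2/(D_{jj}^2 + \lambda)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 50, 10, 2.0
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)                    # centre the columns, as in the text
Y = rng.normal(size=n)

U, d, Vt = np.linalg.svd(X, full_matrices=False)       # thin SVD; U is n x p here
fitted_direct = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
shrink = d**2 / (d**2 + lam)                           # factors D_jj^2 / (D_jj^2 + lambda)
fitted_svd = U @ (shrink * (U.T @ Y))
print(np.allclose(fitted_direct, fitted_svd))          # True
```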
To interpret this further, note that the SVD is intimately connected with Principal Component Analysis (PCA). Consider $v \in \mathbb{R}^p$ with $\|v\|_2 = 1$. Since the columns of $X$ have had their means subtracted, the sample variance of $Xv \in \mathbb{R}^n$ is
\[
\frac{1}{n}v^TX^TXv = \frac{1}{n}v^TVD^2V^Tv.
\]
Writing $a = V^Tv$, so $\|a\|_2 = 1$, we have
\[
\frac{1}{n}v^TVD^2V^Tv = \frac{1}{n}a^TD^2a = \frac{1}{n}\sum_j a_j^2 D_{jj}^2 \le \frac{1}{n}D_{11}^2\sum_j a_j^2 = \frac{1}{n}D_{11}^2.
\]
As $\|XV_1\|_2^2/n = D_{11}^2/n$, $V_1$ determines the linear combination of the columns of $X$ which has the largest sample variance, when the coefficients of the linear combination are constrained to have $\ell_2$-norm 1. $XV_1 = D_{11}U_1$ is known as the first principal component of $X$. Subsequent principal components have maximum variance $D_{jj}^2/n$, subject to being orthogonal to all earlier ones.

Returning to ridge regression, we see that it shrinks $Y$ most in the smaller principal components of $X$. Thus it will work well when most of the signal is in the large principal components of $X$. We now turn to the problem of choosing $\lambda$.

1.2 v-fold cross-validation

Cross-validation is a general technique for selecting a good regression method from among several competing regression methods. We illustrate the principle with ridge regression, where we have a family of regression methods given by different $\lambda$ values.

So far, we have considered the matrix of predictors $X$ as fixed and non-random. However, in many cases, it makes sense to think of it as random. Let us assume that our data are i.i.d. pairs $(x_i, Y_i)$, $i = 1, \dots, n$. Then ideally, we might want to pick a $\lambda$ value such that
\[
\frac{1}{n}\mathbb{E}\big(\|Y^* - x^{*T}\hat\beta^R_\lambda(X, Y)\|_2^2 \mid X, Y\big) \tag{1.2.1}
\]
is minimised. Here $(x^*, Y^*) \in \mathbb{R}^p\times\mathbb{R}$ is independent of $(X, Y)$ and has the same distribution as $(x_1, Y_1)$, and we have made the dependence of $\hat\beta^R_\lambda$ on the training data $(X, Y)$ explicit. This choice of $\lambda$ is such that, conditional on the original training data, it minimises the expected prediction error on a new observation drawn from the same distribution as the training data.

A less ambitious goal is to find a $\lambda$ value to minimise the expected prediction error,
\[
\frac{1}{n}\mathbb{E}\{\mathbb{E}(\|Y^* - x^{*T}\hat\beta^R_\lambda(X, Y)\|_2^2 \mid X, Y)\}, \tag{1.2.2}
\]
where compared with (1.2.1), we have taken a further expectation over the training set.

We still have no way of computing (1.2.2) directly, but we can attempt to estimate it. The idea of v-fold cross-validation is to split the data into $v$ groups or folds of roughly equal size: $(X^{(1)}, Y^{(1)}), \dots, (X^{(v)}, Y^{(v)})$. Let $(X^{(-k)}, Y^{(-k)})$ be all the data except that in the $k$th fold. For each $\lambda$ on a grid of values, we compute $\hat\beta^R_\lambda(X^{(-k)}, Y^{(-k)})$: the ridge regression estimate based on all the data except the $k$th fold. Writing $\kappa(i)$ for the fold to which $(x_i, Y_i)$ belongs, we choose the value of $\lambda$ that minimises
\[
CV(\lambda) = \frac{1}{n}\sum_{i=1}^n\{Y_i - x_i^T\hat\beta^R_\lambda(X^{(-\kappa(i))}, Y^{(-\kappa(i))})\}^2. \tag{1.2.3}
\]
Writing $\lambda_{CV}$ for the minimiser, our final estimate of $\beta^0$ can then be $\hat\beta^R_{\lambda_{CV}}(X, Y)$.

Note that for each $i$,
\[
\mathbb{E}[\{Y_i - x_i^T\hat\beta^R_\lambda(X^{(-\kappa(i))}, Y^{(-\kappa(i))})\}^2]
= \mathbb{E}\big[\mathbb{E}[\{Y_i - x_i^T\hat\beta^R_\lambda(X^{(-\kappa(i))}, Y^{(-\kappa(i))})\}^2 \mid X^{(-\kappa(i))}, Y^{(-\kappa(i))}]\big]. \tag{1.2.4}
\]
This is precisely the expected prediction error in (1.2.2) but with the training data $(X, Y)$ replaced with a training data set of smaller size. If all the folds have the same size, then $CV(\lambda)$ is an average of $n$ identically distributed quantities, each with expected value as in (1.2.4). However, the quantities being averaged are not independent as they share the same data.

Thus cross-validation gives a biased estimate of the expected prediction error. The amount of the bias depends on the size of the folds, the case $v = n$ giving the least bias; this is known as leave-one-out cross-validation. The quality of the estimate, though, may be worse as the quantities being averaged in (1.2.3) will be highly positively correlated. Typical choices of $v$ are 5 or 10.
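As an illustration (an addition to the notes), a minimal sketch of v-fold cross-validation for ridge regression; it assumes $X$ and $Y$ have already been centred (and the columns of $X$ scaled), and it assigns folds of roughly equal size at random. The function name and interface are of course just one possible choice.

```python
import numpy as np

def cv_ridge(X, Y, lambdas, v=5, seed=0):
    """Return CV(lambda) as in (1.2.3) for ridge regression over a grid of lambda values."""
    n, p = X.shape
    folds = np.random.default_rng(seed).integers(0, v, size=n)   # kappa(i): fold labels
    scores = np.zeros(len(lambdas))
    for k in range(v):
        train, test = folds != k, folds == k
        Xtr, Ytr = X[train], Y[train]
        for j, lam in enumerate(lambdas):
            beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ Ytr)
            scores[j] += np.sum((Y[test] - X[test] @ beta) ** 2)
    return scores / n            # choose lambdas[np.argmin(...)] afterwards
```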

1.3 The kernel trick

The fitted values from ridge regression are
\[
X(X^TX + \lambda I)^{-1}X^TY. \tag{1.3.1}
\]
An alternative way of writing this is suggested by the following:
\[
X^T(XX^T + \lambda I) = (X^TX + \lambda I)X^T,
\]
\[
(X^TX + \lambda I)^{-1}X^T = X^T(XX^T + \lambda I)^{-1},
\]
\[
X(X^TX + \lambda I)^{-1}X^TY = XX^T(XX^T + \lambda I)^{-1}Y. \tag{1.3.2}
\]
Two remarks are in order:

Note while $X^TX$ is $p\times p$, $XX^T$ is $n\times n$. Computing fitted values using (1.3.1) would require roughly $O(np^2 + p^3)$ operations. If $p \gg n$ this could be extremely costly. However, our alternative formulation would only require roughly $O(n^2p + n^3)$ operations, which could be substantially smaller.

We see that the fitted values of ridge regression depend only on inner products $K = XX^T$ between observations (note $K_{ij} = x_i^Tx_j$).
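A quick numerical check (an addition to the notes, with simulated data) that (1.3.1) and (1.3.2) give identical fitted values; when $p \gg n$ the second form only requires solving an $n\times n$ system.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, lam = 30, 200, 1.0          # p >> n, where the n x n formulation pays off
X = rng.normal(size=(n, p))
Y = rng.normal(size=n)

K = X @ X.T                       # Gram matrix of inner products, K_ij = x_i^T x_j
fit_primal = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)   # (1.3.1)
fit_dual = K @ np.linalg.solve(K + lam * np.eye(n), Y)                 # (1.3.2)
print(np.allclose(fit_primal, fit_dual))    # True
```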

Now suppose that we believe the signal depends quadratically on the predictors:
\[
Y_i = x_i^T\beta + \sum_{k,l} x_{ik}x_{il}\theta_{kl} + \varepsilon_i.
\]
We can still use ridge regression provided we work with an enlarged set of predictors
\[
x_{i1}, \dots, x_{ip},\ x_{i1}x_{i1}, \dots, x_{i1}x_{ip},\ x_{i2}x_{i1}, \dots, x_{i2}x_{ip},\ \dots,\ x_{ip}x_{ip}.
\]
This will give us $O(p^2)$ predictors. Our new approach to computing fitted values would therefore have complexity $O(n^2p^2 + n^3)$, which could be rather costly if $p$ is large.

However, rather than first creating all the additional predictors and then computing the new $K$ matrix, we can attempt to compute $K$ directly. To this end consider
\[
(1 + x_i^Tx_j)^2 = \Big(1 + \sum_k x_{ik}x_{jk}\Big)^2 = 1 + 2\sum_k x_{ik}x_{jk} + \sum_{k,l} x_{ik}x_{il}x_{jk}x_{jl}.
\]
Observe this amounts to an inner product between vectors of the form
\[
(1, \sqrt{2}x_{i1}, \dots, \sqrt{2}x_{ip}, x_{i1}x_{i1}, \dots, x_{i1}x_{ip}, x_{i2}x_{i1}, \dots, x_{i2}x_{ip}, \dots, x_{ip}x_{ip})^T. \tag{1.3.3}
\]
Thus if we set
\[
K_{ij} = (1 + x_i^Tx_j)^2 \tag{1.3.4}
\]
and plug this into the formula for the fitted values, it is exactly as if we had performed ridge regression on an enlarged set of variables given by (1.3.3). Now computing $K$ using (1.3.4) would require only $p$ operations per entry, so $O(n^2p)$ operations in total. It thus seems we have improved things by a factor of $p$ using our new approach.

This computational short-cut is not without its shortcomings. Notice we've had to use a slightly peculiar scaling of the main effects, and the interactions of the form $x_{ik}x_{il}$, $k \ne l$, appear twice in (1.3.3). Nevertheless it is a useful trick and, more importantly for us, it serves to illustrate some general points.
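To see the identity behind (1.3.3) and (1.3.4) concretely, here is a small check (an addition; the helper name `phi` is just illustrative) that $(1 + x_i^Tx_j)^2$ equals the inner product of the feature vectors in (1.3.3).

```python
import numpy as np

def phi(x):
    """Feature map (1.3.3): 1, sqrt(2) times the main effects, and all products x_k x_l."""
    return np.concatenate(([1.0], np.sqrt(2.0) * x, np.outer(x, x).ravel()))

rng = np.random.default_rng(3)
x, z = rng.normal(size=4), rng.normal(size=4)
print(np.isclose((1 + x @ z) ** 2, phi(x) @ phi(z)))   # True
```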
Since ridge regression only depends on inner products between observations, rather than fitting non-linear models by first mapping the original data $x_i \in \mathbb{R}^p$ to $\phi(x_i) \in \mathbb{R}^d$ (say) using some feature map $\phi$ (which could, for example, introduce quadratic effects), we can instead try to directly compute $k(x_i, x_j) = \langle\phi(x_i), \phi(x_j)\rangle$.

In fact, instead of thinking in terms of feature maps, we can instead try to think about an appropriate measure of similarity $k(x_i, x_j)$ between observations. Modelling in this fashion is sometimes much easier.

We will now formalise and extend what we have learnt with this example.

1.4 Kernels

We have seen how a model with quadratic effects can be fitted very efficiently by replacing the inner product matrix (known as the Gram matrix) $XX^T$ in (1.3.2) with the matrix in (1.3.4). It is then natural to ask what other non-linear models can be fitted efficiently using this sort of approach.

We won't answer this question directly, but instead we will try to understand the sorts of similarity measures $k$ that can be represented as inner products between transformations of the original data.

That is, we will study the similarity measures $k : \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ from the input space $\mathcal{X}$ to $\mathbb{R}$ for which there exists a feature map $\phi : \mathcal{X} \to \mathcal{H}$, where $\mathcal{H}$ is some (real) inner product space, with
\[
k(x, x') = \langle\phi(x), \phi(x')\rangle. \tag{1.4.1}
\]
Recall that an inner product space is a real vector space $\mathcal{H}$ endowed with a map $\langle\cdot, \cdot\rangle : \mathcal{H}\times\mathcal{H}\to\mathbb{R}$ that obeys the following properties:

(i) Symmetry: $\langle u, v\rangle = \langle v, u\rangle$.

(ii) Linearity: for $a, b \in \mathbb{R}$, $\langle au + bw, v\rangle = a\langle u, v\rangle + b\langle w, v\rangle$.

(iii) Positive-definiteness: $\langle u, u\rangle \ge 0$ with equality if and only if $u = 0$.

Definition 1. A positive definite kernel, or more simply a kernel (for brevity), $k$ is a symmetric map $k : \mathcal{X}\times\mathcal{X}\to\mathbb{R}$ for which, for all $n \in \mathbb{N}$ and all $x_1, \dots, x_n \in \mathcal{X}$, the matrix $K$ with entries
\[
K_{ij} = k(x_i, x_j)
\]
is positive semi-definite.
A kernel is a little like an inner product, but need not be bilinear in general. However, a form of the Cauchy-Schwarz inequality does hold for kernels.

Proposition 2.
\[
k(x, x')^2 \le k(x, x)k(x', x').
\]
Proof. The matrix
\[
\begin{pmatrix} k(x, x) & k(x, x') \\ k(x', x) & k(x', x') \end{pmatrix}
\]
must be positive semi-definite, so in particular its determinant must be non-negative.

First we show that any inner product of feature maps will give rise to a kernel.

Proposition 3. $k$ defined by $k(x, x') = \langle\phi(x), \phi(x')\rangle$ is a kernel.

Proof. Let $x_1, \dots, x_n \in \mathcal{X}$, $\alpha_1, \dots, \alpha_n \in \mathbb{R}$ and consider
\[
\sum_{i,j}\alpha_i k(x_i, x_j)\alpha_j = \sum_{i,j}\alpha_i\langle\phi(x_i), \phi(x_j)\rangle\alpha_j
= \Big\langle\sum_i \alpha_i\phi(x_i),\ \sum_j \alpha_j\phi(x_j)\Big\rangle \ge 0.
\]
Showing that every kernel admits a representation of the form (1.4.1) is slightly more involved, and we delay this until after we have studied some examples.

1.4.1 Examples

Proposition 4. Suppose $k_1, k_2, \dots$ are kernels.

(i) If $\alpha_1, \alpha_2 \ge 0$ then $\alpha_1 k_1 + \alpha_2 k_2$ is a kernel. If $\lim_{m\to\infty} k_m(x, x') =: k(x, x')$ exists for all $x, x' \in \mathcal{X}$, then $k$ is a kernel.

(ii) The pointwise product $k = k_1 k_2$ is a kernel.

Linear kernel. $k(x, x') = x^Tx'$.

Polynomial kernel. $k(x, x') = (1 + x^Tx')^d$. To show this is a kernel, we can simply note that $1 + x^Tx'$ gives a kernel owing to the fact that 1 is a kernel and (i) of Proposition 4. Next, (ii) and induction show that $k$ as defined above is a kernel.
Gaussian kernel. The highly popular Gaussian kernel is defined by
\[
k(x, x') = \exp\left(-\frac{\|x - x'\|_2^2}{2\sigma^2}\right).
\]
For $x$ close to $x'$ it is large, whilst for $x$ far from $x'$ the kernel quickly decays towards 0. The additional parameter $\sigma^2$, known as the bandwidth, controls the speed of the decay to zero. Note it is less clear how one might find a corresponding feature map, and indeed any feature map that represents this kernel must be infinite-dimensional.

To show that it is a kernel, first decompose $\|x - x'\|_2^2 = \|x\|_2^2 + \|x'\|_2^2 - 2x^Tx'$. Note that by Proposition 3,
\[
k_1(x, x') = \exp\left(-\frac{\|x\|_2^2}{2\sigma^2}\right)\exp\left(-\frac{\|x'\|_2^2}{2\sigma^2}\right)
\]
is a kernel. Next, writing
\[
k_2(x, x') = \exp(x^Tx'/\sigma^2) = \sum_{r=0}^\infty \frac{(x^Tx'/\sigma^2)^r}{r!}
\]
and using (i) of Proposition 4 shows that $k_2$ is a kernel. Finally, observing that $k = k_1k_2$ and using (ii) shows that the Gaussian kernel is indeed a kernel.
Jaccard similarity kernel. Take $\mathcal{X}$ to be the set of all subsets of $\{1, \dots, p\}$. For $x, x' \in \mathcal{X}$ with $x \cup x' \ne \emptyset$ define
\[
k(x, x') = \frac{|x \cap x'|}{|x \cup x'|},
\]
and if $x \cup x' = \emptyset$ then set $k(x, x') = 1$. Showing that this is a kernel is left to the example sheet.

1.4.2 Feature maps from kernels

Theorem 5. For every kernel $k$ there exists a feature map $\phi$ taking values in some inner product space $\mathcal{H}$ such that
\[
k(x, x') = \langle\phi(x), \phi(x')\rangle. \tag{1.4.2}
\]
Proof. We will take $\mathcal{H}$ to be the vector space of functions of the form
\[
f(\cdot) = \sum_{i=1}^n \alpha_i k(\cdot, x_i), \tag{1.4.3}
\]
where $n \in \mathbb{N}$, $x_i \in \mathcal{X}$ and $\alpha_i \in \mathbb{R}$. Our feature map $\phi : \mathcal{X} \to \mathcal{H}$ will be
\[
\phi(x) = k(\cdot, x). \tag{1.4.4}
\]
We now define an inner product on $\mathcal{H}$. If $f$ is given by (1.4.3) and
\[
g(\cdot) = \sum_{j=1}^m \beta_j k(\cdot, x_j') \tag{1.4.5}
\]
we define their inner product to be
\[
\langle f, g\rangle = \sum_{i=1}^n\sum_{j=1}^m \alpha_i\beta_j k(x_i, x_j'). \tag{1.4.6}
\]
We need to check this is well-defined as the representations of $f$ and $g$ in (1.4.3) and (1.4.5) need not be unique. To this end, note that
\[
\sum_{i=1}^n\sum_{j=1}^m \alpha_i\beta_j k(x_i, x_j') = \sum_{i=1}^n \alpha_i g(x_i) = \sum_{j=1}^m \beta_j f(x_j'). \tag{1.4.7}
\]
The first equality shows that the inner product does not depend on the particular expansion of $g$ whilst the second equality shows that it also does not depend on the expansion of $f$. Thus the inner product is well-defined.

First we check that with $\phi$ defined as in (1.4.4) we do have relationship (1.4.2). Observe that
\[
\langle k(\cdot, x), f\rangle = \sum_{i=1}^n \alpha_i k(x_i, x) = f(x), \tag{1.4.8}
\]
so in particular we have
\[
\langle\phi(x), \phi(x')\rangle = \langle k(\cdot, x), k(\cdot, x')\rangle = k(x, x').
\]
It remains to show that it is indeed an inner product. It is clearly symmetric and (1.4.7) shows linearity. We now need to show positive definiteness.

First note that
\[
\langle f, f\rangle = \sum_{i,j}\alpha_i k(x_i, x_j)\alpha_j \ge 0 \tag{1.4.9}
\]
by positive definiteness of the kernel. Now from (1.4.8),
\[
f(x)^2 = (\langle k(\cdot, x), f\rangle)^2.
\]
If we could use the Cauchy-Schwarz inequality on the right-hand side, we would have
\[
f(x)^2 \le \langle k(\cdot, x), k(\cdot, x)\rangle\langle f, f\rangle, \tag{1.4.10}
\]
which would show that if $\langle f, f\rangle = 0$ then necessarily $f = 0$: the final property we need to show that $\langle\cdot, \cdot\rangle$ is an inner product. However, in order to use the traditional Cauchy-Schwarz inequality we need to first know we're dealing with an inner product, which is precisely what we're trying to show!

Although we haven't shown that $\langle\cdot, \cdot\rangle$ is an inner product, we do have enough information to show that it is itself a kernel. We may then appeal to Proposition 2 to obtain (1.4.10). With this in mind, we argue as follows. Given functions $f_1, \dots, f_m$ and coefficients $\gamma_1, \dots, \gamma_m \in \mathbb{R}$, we have
\[
\sum_{i,j}\gamma_i\langle f_i, f_j\rangle\gamma_j = \Big\langle\sum_i \gamma_i f_i,\ \sum_j \gamma_j f_j\Big\rangle \ge 0,
\]
where we have used linearity and (1.4.9), showing that it is a kernel.


Chapter 2
The Lasso and beyond
2.1 Model selection

In many modern datasets, there are reasons to believe there are many more variables present than are necessary to explain the response. Let $S$ be the set $S = \{k : \beta^0_k \ne 0\}$ and suppose $s := |S| \ll p$.

The mean squared prediction error (MSPE) of OLS is
\[
\frac{1}{n}\mathbb{E}\|X\beta^0 - X\hat\beta^{OLS}\|_2^2
= \frac{1}{n}\mathbb{E}\{(\beta^0 - \hat\beta^{OLS})^TX^TX(\beta^0 - \hat\beta^{OLS})\}
= \frac{1}{n}\mathbb{E}[\mathrm{tr}\{(\beta^0 - \hat\beta^{OLS})(\beta^0 - \hat\beta^{OLS})^TX^TX\}]
= \frac{1}{n}\mathrm{tr}[\mathbb{E}\{(\beta^0 - \hat\beta^{OLS})(\beta^0 - \hat\beta^{OLS})^T\}X^TX]
= \frac{1}{n}\mathrm{tr}(\mathrm{Var}(\hat\beta^{OLS})X^TX) = \sigma^2\frac{p}{n}.
\]
If we could identify $S$ and then fit a linear model using just these variables, we'd obtain an MSPE of $\sigma^2 s/n$, which could be substantially smaller than $\sigma^2 p/n$. Furthermore, it can be shown that parameter estimates from the reduced model are more accurate. The smaller model would also be easier to interpret.
We now briefly review some classical model selection strategies.
Best subset regression

A natural approach to finding $S$ is to consider all $2^p$ possible regression procedures, each involving regressing the response on a different subset of explanatory variables $X_M$, where $M$ is a subset of $\{1, \dots, p\}$. We can then pick the best regression procedure using cross-validation (say). For general design matrices, this involves an exhaustive search over all subsets, so this is not really feasible for $p > 50$.


Forward selection
This can be seen as a greedy way of performing best subsets regression. Given a target
model size m (the tuning parameter), this works as follows.
1. Start by fitting an intercept only model.
2. Add to the current model the predictor variable that reduces the residual sum of
squares the most.
3. Continue step 2 until m predictor variables have been selected.
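A minimal sketch of this greedy procedure (an addition to the notes; it naively refits by least squares at every step rather than using the efficient updating schemes one would use in practice).

```python
import numpy as np

def forward_selection(X, Y, m):
    """Greedily add, m times, the variable that most reduces the residual sum of squares."""
    n, p = X.shape
    selected = []
    for _ in range(m):
        best_rss, best_j = np.inf, None
        for j in range(p):
            if j in selected:
                continue
            Xs = np.column_stack([np.ones(n), X[:, selected + [j]]])  # intercept + candidates
            beta, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
            rss = np.sum((Y - Xs @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
    return selected
```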

2.2 The Lasso estimator (Tibshirani, 1996)

The Least absolute shrinkage and selection operator (Lasso) estimates $\beta^0$ by $\hat\beta^L_\lambda$, where $(\hat\mu^L_\lambda, \hat\beta^L_\lambda)$ minimise
\[
\frac{1}{2n}\|Y - \mu 1 - X\beta\|_2^2 + \lambda\|\beta\|_1 \tag{2.2.1}
\]
over $(\mu, \beta) \in \mathbb{R}\times\mathbb{R}^p$. Here $\|\beta\|_1$ is the $\ell_1$-norm of $\beta$: $\|\beta\|_1 = \sum_{k=1}^p|\beta_k|$.
Like ridge regression, $\hat\beta^L_\lambda$ shrinks the OLS estimate towards the origin, but there is an important difference: the $\ell_1$ penalty can force some of the estimated coefficients to be exactly 0. In this way the Lasso can perform simultaneous variable selection and parameter estimation. As we did with ridge regression, we can centre and scale the $X$ matrix, and also centre $Y$, and thus remove $\mu$ from the objective. Define
\[
Q_\lambda(\beta) = \frac{1}{2n}\|Y - X\beta\|_2^2 + \lambda\|\beta\|_1. \tag{2.2.2}
\]
Now the minimiser(s) of $Q_\lambda(\beta)$ will also be the minimiser(s) of
\[
\|Y - X\beta\|_2^2 \text{ subject to } \|\beta\|_1 \le \|\hat\beta^L_\lambda\|_1.
\]
Similarly, for the ridge regression objective, we know that $\hat\beta^R_\lambda$ minimises $\|Y - X\beta\|_2^2$ subject to $\|\beta\|_2 \le \|\hat\beta^R_\lambda\|_2$.

Now the contours of the OLS objective $\|Y - X\beta\|_2^2$ are ellipsoids centred at $\hat\beta^{OLS}$, while the contours of $\|\beta\|_2^2$ are spheres centred at the origin, and the contours of $\|\beta\|_1$ are diamonds centred at 0.

The important point to note is that the $\ell_1$ ball $\{\beta \in \mathbb{R}^p : \|\beta\|_1 \le \|\hat\beta^L_\lambda\|_1\}$ has corners where some of the components are zero, and it is likely that the OLS contours will intersect the $\ell_1$ ball at such a corner.


2.2.1 Prediction error of the Lasso with no assumptions on the design

A remarkable property of the Lasso is that even when $p \gg n$, it can still perform well in terms of prediction error. Suppose the columns of $X$ have been centred and scaled (as we will always assume from now on unless stated otherwise) and assume the normal linear model (where we have already centred $Y$),
\[
Y = X\beta^0 + \varepsilon - \bar\varepsilon 1, \tag{2.2.3}
\]
where $\varepsilon \sim N_n(0, \sigma^2 I)$ and $\bar\varepsilon = \sum_{i=1}^n \varepsilon_i/n$.


Theorem 6. Let $\hat\beta$ be the Lasso solution when
\[
\lambda = A\sigma\sqrt{\frac{\log(p)}{n}}.
\]
With probability at least $1 - p^{-(A^2/2 - 1)}$,
\[
\frac{1}{n}\|X(\beta^0 - \hat\beta)\|_2^2 \le 4A\sigma\sqrt{\frac{\log(p)}{n}}\,\|\beta^0\|_1.
\]
Proof. From the definition of $\hat\beta$ we have
\[
\frac{1}{2n}\|Y - X\hat\beta\|_2^2 + \lambda\|\hat\beta\|_1 \le \frac{1}{2n}\|Y - X\beta^0\|_2^2 + \lambda\|\beta^0\|_1.
\]
Rearranging,
\[
\frac{1}{2n}\|X(\beta^0 - \hat\beta)\|_2^2 \le \frac{1}{n}\varepsilon^TX(\hat\beta - \beta^0) + \lambda\|\beta^0\|_1 - \lambda\|\hat\beta\|_1.
\]
Now $|\varepsilon^TX(\hat\beta - \beta^0)| \le \|X^T\varepsilon\|_\infty\|\hat\beta - \beta^0\|_1$. Let $\Omega = \{\|X^T\varepsilon\|_\infty/n \le \lambda\}$. Lemma 7 shows that $\mathbb{P}(\Omega) \ge 1 - p^{-(A^2/2 - 1)}$. Working on the event $\Omega$, we obtain
\[
\frac{1}{2n}\|X(\beta^0 - \hat\beta)\|_2^2 \le \lambda\|\hat\beta - \beta^0\|_1 + \lambda\|\beta^0\|_1 - \lambda\|\hat\beta\|_1,
\]
\[
\frac{1}{n}\|X(\beta^0 - \hat\beta)\|_2^2 \le 4\lambda\|\beta^0\|_1,
\]
by the triangle inequality.

Lemma 7.
\[
\mathbb{P}(\|X^T\varepsilon\|_\infty/n \le t) \ge 1 - \exp\{-nt^2/(2\sigma^2) + \log(p)\}.
\]
Proposition 8 (Normal tail bound). For $r \ge 0$,
\[
1 - \Phi(r) \le e^{-r^2/2}/2.
\]

Proof.
\[
1 - \Phi(r) = \frac{1}{\sqrt{2\pi}}\int_r^\infty e^{-x^2/2}\,dx
\le \frac{1}{\sqrt{2\pi}}\int_r^\infty \frac{x}{r}e^{-x^2/2}\,dx = \frac{e^{-r^2/2}}{r\sqrt{2\pi}}.
\]
Thus provided $r \ge \sqrt{2/\pi}$, the result is true. Now let $f(r) = 1 - \Phi(r) - e^{-r^2/2}/2$. Note that $f(0) = 0$ and
\[
f'(r) = -\phi(r) + \frac{re^{-r^2/2}}{2} = \frac{e^{-r^2/2}}{2}\left(r - \sqrt{\frac{2}{\pi}}\right) \le 0
\]
for $r \le \sqrt{2/\pi}$. By the mean value theorem, for $0 < r \le \sqrt{2/\pi}$, $f(r)/r \le 0$, so $f(r) \le 0$.

2.2.2 Some facts from optimisation theory and convex analysis

In order to study the Lasso in detail, it will be helpful to review some basic facts from optimisation and convex analysis.

Convexity

A set $A \subseteq \mathbb{R}^d$ is convex if
\[
x, y \in A \Rightarrow (1-t)x + ty \in A \quad\text{for all } t \in (0, 1).
\]
A function $f : A \to \mathbb{R}$ (where $A$ is convex) is convex if
\[
f\big((1-t)x + ty\big) \le (1-t)f(x) + tf(y)
\]
for all $x, y \in A$ and $t \in (0, 1)$. It is strictly convex if the inequality is strict for all $x, y \in A$, $x \ne y$ and $t \in (0, 1)$.
The Lagrangian method

Consider an optimisation problem of the form
\[
\text{minimise } f(x) \text{ over } x \in A \text{ subject to } g(x) = 0, \tag{2.2.4}
\]
where $g : A \to \mathbb{R}^b$. Suppose the optimal value is $c^* \in \mathbb{R}$. The Lagrangian for this problem is defined as
\[
L(x, \theta) = f(x) + \theta^Tg(x),
\]
where $\theta \in \mathbb{R}^b$. Note that
\[
\inf_{x \in A} L(x, \theta) \le \inf_{x \in A : g(x) = 0} L(x, \theta) = c^*
\]
for all $\theta$. The Lagrangian method involves finding a $\theta^*$ such that the minimising $x^*$ on the LHS satisfies $g(x^*) = 0$. This $x^*$ must then be a minimiser in the original problem (2.2.4).
Subgradients

We now take $A$ to be $\mathbb{R}^d$. A vector $v \in \mathbb{R}^d$ is a subgradient of $f$ at $x$ if
\[
f(y) \ge f(x) + v^T(y - x) \quad\text{for all } y \in \mathbb{R}^d.
\]
The set of subgradients of $f$ at $x$ is called a subdifferential and denoted $\partial f(x)$.

In order to make use of subgradients, we will require the following two facts:

Proposition 9. Let $f : \mathbb{R}^d \to \mathbb{R}$ be convex, and suppose $f$ is differentiable at $x$. Then $\partial f(x) = \{\nabla f(x)\}$.

Proposition 10. Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be convex and let $\alpha > 0$. Then
\[
\partial(f + g)(x) = \partial f(x) + \partial g(x) = \{v + w : v \in \partial f(x),\ w \in \partial g(x)\},
\]
\[
\partial(\alpha f)(x) = \alpha\,\partial f(x) = \{\alpha v : v \in \partial f(x)\}.
\]
The following easy (but key) result is often referred to in the statistical literature as the Karush-Kuhn-Tucker (KKT) conditions, though it is actually a much simplified version of them.

Proposition 11. $x^* \in \arg\min_{x \in \mathbb{R}^d} f(x)$ if and only if $0 \in \partial f(x^*)$.

Proof.
\[
f(y) \ge f(x^*) \text{ for all } y \in \mathbb{R}^d
\iff f(y) \ge f(x^*) + 0^T(y - x^*) \text{ for all } y \in \mathbb{R}^d
\iff 0 \in \partial f(x^*).
\]
Let us now compute the subdifferential of the $\ell_1$-norm. First note that $\|\cdot\|_1 : \mathbb{R}^d \to \mathbb{R}$ is convex. Indeed it is a norm, so the triangle inequality gives $\|tx + (1-t)y\|_1 \le t\|x\|_1 + (1-t)\|y\|_1$. We introduce some notation that will be helpful here and throughout the rest of the course.

For $x \in \mathbb{R}^d$ and $A = \{k_1, \dots, k_m\} \subseteq \{1, \dots, d\}$ with $k_1 < \cdots < k_m$, by $x_A$ we will mean $(x_{k_1}, \dots, x_{k_m})^T$. Similarly, if $X$ has $d$ columns, we will write $X_A$ for
\[
X_A = (X_{k_1} \cdots X_{k_m}).
\]
Further, in this context, by $A^c$ we will mean $\{1, \dots, d\}\setminus A$. Note these column and component extraction operations will always be considered to have taken place first before any further operations on the matrix, so for example $X_A^T = (X_A)^T$. Finally, define
\[
\mathrm{sgn}(x_1) = \begin{cases} -1 & \text{if } x_1 < 0 \\ 0 & \text{if } x_1 = 0 \\ 1 & \text{if } x_1 > 0, \end{cases}
\]
and $\mathrm{sgn}(x) = (\mathrm{sgn}(x_1), \dots, \mathrm{sgn}(x_d))^T$.

Proposition 12. For $x \in \mathbb{R}^d$ let $A = \{j : x_j \ne 0\}$. Then
\[
\partial\|x\|_1 = \{v \in \mathbb{R}^d : \|v\|_\infty \le 1 \text{ and } v_A = \mathrm{sgn}(x_A)\}.
\]
Proof. If $v \in \partial\|x\|_1$ then $\|y\|_1 \ge \|x\|_1 + v^T(y - x)$ for all $y \in \mathbb{R}^d$. By taking $y_{A^c} = x_{A^c} = 0$ and then $y_A = x_A$, we get two conditions:
\[
\|y_A\|_1 \ge \|x_A\|_1 + v_A^T(y_A - x_A) \text{ for all } y_A \in \mathbb{R}^{|A|}, \tag{2.2.5}
\]
\[
\|y_{A^c}\|_1 \ge v_{A^c}^Ty_{A^c} \text{ for all } y_{A^c} \in \mathbb{R}^{|A^c|}. \tag{2.2.6}
\]
Conversely, if $v$ satisfies (2.2.5) and (2.2.6), then $v \in \partial\|x\|_1$. From (2.2.5) we get that $v_A \in \partial\|x_A\|_1 = \{\mathrm{sgn}(x_A)\}$ as $\|\cdot\|_1$ is differentiable at $x_A$. Next we claim that (2.2.6) holds if and only if $\|v_{A^c}\|_\infty \le 1$. Indeed, if $\|v_{A^c}\|_\infty \le 1$ then $\|y_{A^c}\|_1 \ge \|v_{A^c}\|_\infty\|y_{A^c}\|_1 \ge v_{A^c}^Ty_{A^c}$ in view of Hölder's inequality. Now suppose there exists $j \in A^c$ with $|v_j| > 1$. Take $y$ with $y_A = x_A$, $y_j = \mathrm{sgn}(v_j)$, $y_{A^c\setminus\{j\}} = 0$. Then
\[
\|y_{A^c}\|_1 = 1 < |v_j| = v_{A^c}^Ty_{A^c},
\]
which is a contradiction.

2.2.3 Uniqueness of Lasso solutions

Equipped with these tools from convex analysis, we can now fully characterise the solutions to the Lasso. We have that $\hat\beta^L_\lambda$ is a Lasso solution if and only if $0 \in \partial Q_\lambda(\hat\beta^L_\lambda)$, which is equivalent to
\[
\frac{1}{n}X^T(Y - X\hat\beta^L_\lambda) = \lambda\hat\nu,
\]
for some $\hat\nu$ with $\|\hat\nu\|_\infty \le 1$ and, writing $\hat S_\lambda = \{k : \hat\beta^L_{\lambda,k} \ne 0\}$, $\hat\nu_{\hat S_\lambda} = \mathrm{sgn}(\hat\beta^L_{\lambda, \hat S_\lambda})$.

Now although it's still not clear whether Lasso solutions are unique, it is straightforward to show that Lasso solutions exist and that fitted values are unique.
Proposition 13.

(i) Lasso solutions exist.

(ii) $X\hat\beta^L_\lambda$ is unique.

Proof.

(i) Provided $\lambda > 0$,
\[
\min_{\beta : \|\beta\|_1 \le \|Y\|_2^2/(2n\lambda)} Q_\lambda(\beta) \le Q_\lambda(0) = \frac{\|Y\|_2^2}{2n} \le \min_{\beta : \|\beta\|_1 > \|Y\|_2^2/(2n\lambda)} Q_\lambda(\beta).
\]
But on the LHS we are minimising a continuous function over a closed and bounded (and therefore compact) set: thus a minimiser exists. [When $\lambda = 0$, the fitted values are simply the projection of $Y$ on to the column space of $X$.]

(ii) Fix $\lambda$ and suppose $\hat\beta^{(1)}$ and $\hat\beta^{(2)}$ are two Lasso solutions giving an optimal objective value of $c^*$. Now for $t \in (0, 1)$, by strict convexity of $\|\cdot\|_2^2$,
\[
\|Y - tX\hat\beta^{(1)} - (1-t)X\hat\beta^{(2)}\|_2^2 \le t\|Y - X\hat\beta^{(1)}\|_2^2 + (1-t)\|Y - X\hat\beta^{(2)}\|_2^2,
\]
with equality if and only if $X\hat\beta^{(1)} = X\hat\beta^{(2)}$. Since $\|\cdot\|_1$ is also convex, we see that
\[
c^* \le Q_\lambda(t\hat\beta^{(1)} + (1-t)\hat\beta^{(2)})
= \|Y - tX\hat\beta^{(1)} - (1-t)X\hat\beta^{(2)}\|_2^2/(2n) + \lambda\|t\hat\beta^{(1)} + (1-t)\hat\beta^{(2)}\|_1
\]
\[
\le t\|Y - X\hat\beta^{(1)}\|_2^2/(2n) + (1-t)\|Y - X\hat\beta^{(2)}\|_2^2/(2n) + \lambda\|t\hat\beta^{(1)} + (1-t)\hat\beta^{(2)}\|_1
\]
\[
\le t\{\|Y - X\hat\beta^{(1)}\|_2^2/(2n) + \lambda\|\hat\beta^{(1)}\|_1\} + (1-t)\{\|Y - X\hat\beta^{(2)}\|_2^2/(2n) + \lambda\|\hat\beta^{(2)}\|_1\}
= tQ_\lambda(\hat\beta^{(1)}) + (1-t)Q_\lambda(\hat\beta^{(2)}) = c^*.
\]
Equality must prevail throughout this chain of inequalities, so $X\hat\beta^{(1)} = X\hat\beta^{(2)}$.


Define the equicorrelation set $\hat E_\lambda$ to be the set of $k$ such that
\[
\frac{1}{n}|X_k^T(Y - X\hat\beta^L_\lambda)| = \lambda.
\]
Further, for $k \in \hat E_\lambda$, let signs $\hat s_k$ satisfy
\[
\frac{1}{n}X_k^T(Y - X\hat\beta^L_\lambda) = \hat s_k\lambda.
\]
Note that $\hat E_\lambda$ is well-defined since it only depends on the fitted values, which (as we have just shown) are unique. Similarly the signs $\hat s_k$ are well-defined. By the KKT conditions, the equicorrelation set contains the set of non-zeroes of all Lasso solutions. Note that if $\mathrm{rank}(X_{\hat E_\lambda}) = |\hat E_\lambda|$ then the Lasso solution must be unique: indeed if $\hat\beta^{(1)}$ and $\hat\beta^{(2)}$ are two Lasso solutions, then as
\[
X_{\hat E_\lambda}(\hat\beta^{(1)}_{\hat E_\lambda} - \hat\beta^{(2)}_{\hat E_\lambda}) = 0,
\]
by linear independence of the columns of $X_{\hat E_\lambda}$, $\hat\beta^{(1)}_{\hat E_\lambda} = \hat\beta^{(2)}_{\hat E_\lambda}$. One can show that when $X$ is drawn from a distribution absolutely continuous with respect to Lebesgue measure, we must have $\mathrm{rank}(X_{\hat E_\lambda}) = |\hat E_\lambda|$ with probability 1.

2.2.4 Variable selection

Consider now the noiseless version of the high-dimensional linear model (2.2.3), $Y = X\beta^0$. The case with noise can be dealt with by similar arguments to those we'll use below, when we work on an event that $\|X^T\varepsilon\|_\infty$ is small (see example sheet).

Let $S = \{k : \beta^0_k \ne 0\}$, $N = \{1, \dots, p\}\setminus S$ and assume wlog that $S = \{1, \dots, s\}$, and also that $\mathrm{rank}(X_S) = s$.

Theorem 14. Let $\lambda > 0$ and define $\Delta = X_N^TX_S(X_S^TX_S)^{-1}\mathrm{sgn}(\beta^0_S)$. If $\|\Delta\|_\infty \le 1$ and, for $k \in S$,
\[
|\beta^0_k| > \lambda\big|\mathrm{sgn}(\beta^0_S)^T[\{\tfrac{1}{n}X_S^TX_S\}^{-1}]_k\big|, \tag{2.2.7}
\]
then there exists a Lasso solution $\hat\beta^L_\lambda$ with $\mathrm{sgn}(\hat\beta^L_\lambda) = \mathrm{sgn}(\beta^0)$. As a partial converse, if there exists a Lasso solution $\hat\beta^L_\lambda$ with $\mathrm{sgn}(\hat\beta^L_\lambda) = \mathrm{sgn}(\beta^0)$, then $\|\Delta\|_\infty \le 1$.

Remark 1. We can interpret $\|\Delta\|_\infty$ as the maximum in absolute value over $k \in N$ of the dot product of $\mathrm{sgn}(\beta^0_S)$ and $(X_S^TX_S)^{-1}X_S^TX_k$, the coefficient vector obtained by regressing $X_k$ on $X_S$. The condition $\|\Delta\|_\infty \le 1$ is known as the irrepresentable condition.
Proof. Fix $\lambda > 0$ and write $\hat\beta = \hat\beta^L_\lambda$ and $\hat S = \{k : \hat\beta_k \ne 0\}$ for convenience. The KKT conditions for the Lasso give
\[
\frac{1}{n}X^TX(\beta^0 - \hat\beta) = \lambda\hat\nu,
\]
where $\|\hat\nu\|_\infty \le 1$ and $\hat\nu_{\hat S} = \mathrm{sgn}(\hat\beta_{\hat S})$. We can expand this into
\[
\frac{1}{n}\begin{pmatrix} X_S^TX_S & X_S^TX_N \\ X_N^TX_S & X_N^TX_N \end{pmatrix}
\begin{pmatrix} \beta^0_S - \hat\beta_S \\ -\hat\beta_N \end{pmatrix}
= \lambda\begin{pmatrix} \hat\nu_S \\ \hat\nu_N \end{pmatrix}. \tag{2.2.8}
\]
We prove the converse first. If $\mathrm{sgn}(\hat\beta) = \mathrm{sgn}(\beta^0)$ then $\hat\nu_S = \mathrm{sgn}(\beta^0_S)$ and $\hat\beta_N = 0$. The top block of (2.2.8) gives
\[
\beta^0_S - \hat\beta_S = \lambda\big(\tfrac{1}{n}X_S^TX_S\big)^{-1}\mathrm{sgn}(\beta^0_S).
\]
Substituting this into the bottom block, we get
\[
\tfrac{1}{n}X_N^TX_S\big(\tfrac{1}{n}X_S^TX_S\big)^{-1}\lambda\,\mathrm{sgn}(\beta^0_S) = \lambda\hat\nu_N.
\]
Thus as $\|\hat\nu_N\|_\infty \le 1$, we have $\|\Delta\|_\infty \le 1$.

For the positive statement, we need to find a $\hat\beta$ and $\hat\nu$ such that $\mathrm{sgn}(\hat\beta_S) = \mathrm{sgn}(\beta^0_S)$ and $\hat\beta_N = 0$, for which the KKT conditions hold. We claim that taking
\[
(\hat\beta_S, \hat\beta_N) = \big(\beta^0_S - \lambda(\tfrac{1}{n}X_S^TX_S)^{-1}\mathrm{sgn}(\beta^0_S),\ 0\big),
\qquad
(\hat\nu_S, \hat\nu_N) = (\mathrm{sgn}(\beta^0_S),\ \Delta)
\]
satisfies (2.2.8). We only need to check that $\mathrm{sgn}(\beta^0_S) = \mathrm{sgn}(\hat\beta_S)$, but this follows from (2.2.7).

2.2.5 Prediction and Estimation

Consider the high-dimensional linear model with noise (2.2.3), and let $S$, $s$ and $N$ be defined as in the previous section. As we have noted before, in an artificial situation where $S$ is known, we could apply OLS on $X_S$ and have an MSPE of $\sigma^2 s/n$. Under a so-called compatibility condition on the design matrix, we can obtain a similar MSPE for the Lasso.

The Compatibility Condition

Define
\[
\phi^2 = \inf_{\beta \in \mathbb{R}^p : \beta_S \ne 0,\ \|\beta_N\|_1 \le 3\|\beta_S\|_1}
\frac{\frac{1}{n}\|X\beta\|_2^2}{\frac{1}{s}\|\beta_S\|_1^2},
\]
where we take $\phi \ge 0$. The compatibility condition is that $\phi^2 > 0$. Note that if we restrict the infimum not to be over $\|\beta_N\|_1 \le 3\|\beta_S\|_1$, but actually enforce $\|\beta_N\|_1 = 0$, then if the minimum eigenvalue of $\frac{1}{n}X_S^TX_S$, $c_{\min}$, is positive, then $\phi^2 > 0$. Indeed, then
\[
\|\beta_S\|_1 = \mathrm{sgn}(\beta_S)^T\beta_S \le \sqrt{s}\,\|\beta_S\|_2
\]
by Cauchy-Schwarz, so
\[
\inf_{\beta \in \mathbb{R}^p : \beta_S \ne 0,\ \|\beta_N\|_1 = 0} \frac{\frac{1}{n}\|X\beta\|_2^2}{\frac{1}{s}\|\beta_S\|_1^2}
\ge \inf_{\beta \in \mathbb{R}^p : \beta_S \ne 0} \frac{\frac{1}{n}\|X_S\beta_S\|_2^2}{\|\beta_S\|_2^2}
= c_{\min} > 0.
\]


Theorem 15. Suppose the compatibility condition holds and let $\hat\beta$ be the Lasso solution with $\lambda = A\sigma\sqrt{\log(p)/n}$ for $A > 0$. Then with probability at least $1 - p^{-(A^2/8 - 1)}$, we have
\[
\frac{1}{n}\|X(\beta^0 - \hat\beta)\|_2^2 + \lambda\|\hat\beta - \beta^0\|_1 \le \frac{16\lambda^2 s}{\phi^2} = \frac{16A^2\log(p)}{\phi^2}\frac{\sigma^2 s}{n}.
\]

Proof. As in Theorem 6 we start with the basic inequality:
\[
\frac{1}{2n}\|X(\hat\beta - \beta^0)\|_2^2 + \lambda\|\hat\beta\|_1 \le \frac{1}{n}\varepsilon^TX(\hat\beta - \beta^0) + \lambda\|\beta^0\|_1.
\]
We work on the event $\Omega = \{2\|X^T\varepsilon\|_\infty/n \le \lambda\}$, where after applying Hölder's inequality, we get
\[
\frac{1}{n}\|X(\hat\beta - \beta^0)\|_2^2 + 2\lambda\|\hat\beta\|_1 \le \lambda\|\hat\beta - \beta^0\|_1 + 2\lambda\|\beta^0\|_1. \tag{2.2.9}
\]
It can be shown that $\mathbb{P}(\Omega) \ge 1 - p^{-(A^2/8 - 1)}$.

To motivate the rest of the proof, consider the following idea. We know
\[
\frac{1}{n}\|X(\hat\beta - \beta^0)\|_2^2 \le 3\lambda\|\hat\beta - \beta^0\|_1.
\]
If we could get
\[
3\lambda\|\hat\beta - \beta^0\|_1 \le \frac{c\lambda}{\sqrt{n}}\|X(\hat\beta - \beta^0)\|_2
\]
for some constant $c > 0$, then we would have that $\|X(\hat\beta - \beta^0)\|_2^2/n \le c^2\lambda^2$ and also $3\lambda\|\hat\beta - \beta^0\|_1 \le c^2\lambda^2$.

Returning to the actual proof, write $a = \|X(\hat\beta - \beta^0)\|_2^2/(n\lambda)$. Then from (2.2.9) we can derive the following string of inequalities:
\[
a + 2(\|\hat\beta_N\|_1 + \|\hat\beta_S\|_1) \le \|\hat\beta_S - \beta^0_S\|_1 + \|\hat\beta_N\|_1 + 2\|\beta^0_S\|_1,
\]
\[
a + \|\hat\beta_N\|_1 \le \|\hat\beta_S - \beta^0_S\|_1 + 2\|\beta^0_S\|_1 - 2\|\hat\beta_S\|_1,
\]
\[
a + \|\hat\beta_N - \beta^0_N\|_1 \le 3\|\beta^0_S - \hat\beta_S\|_1,
\]
\[
a + \|\hat\beta - \beta^0\|_1 \le 4\|\beta^0_S - \hat\beta_S\|_1,
\]
the final inequality coming from adding $\|\beta^0_S - \hat\beta_S\|_1$ to both sides.

Now using the compatibility condition we have
\[
\frac{1}{n\lambda}\|X(\hat\beta - \beta^0)\|_2^2 + \|\hat\beta - \beta^0\|_1 \le 4\|\beta^0_S - \hat\beta_S\|_1
\le \frac{4\sqrt{s}}{\phi\sqrt{n}}\|X(\hat\beta - \beta^0)\|_2, \tag{2.2.10}
\]
using the compatibility condition with $\beta = \hat\beta - \beta^0$. From this we get
\[
\frac{1}{\sqrt{n}}\|X(\hat\beta - \beta^0)\|_2 \le \frac{4\lambda\sqrt{s}}{\phi}
\]
and substituting this into the RHS of (2.2.10) we get the result.

2.2.6 Computation

One of the most efficient ways of computing Lasso solutions is to use an optimisation technique called coordinate descent. This is a quite general way of minimising a function $f : \mathbb{R}^d \to \mathbb{R}$ of the form
\[
f(x) = g(x) + \sum_{j=1}^d h_j(x_j),
\]
where $g$ is convex and differentiable and each $h_j : \mathbb{R}\to\mathbb{R}$ is convex. We start with an initial guess of the minimiser $x^{(0)}$ (e.g. $x^{(0)} = 0$) and repeat for $m = 1, 2, \dots$:
\[
x_1^{(m)} = \arg\min_{x_1 \in \mathbb{R}} f(x_1, x_2^{(m-1)}, \dots, x_d^{(m-1)}),
\]
\[
x_2^{(m)} = \arg\min_{x_2 \in \mathbb{R}} f(x_1^{(m)}, x_2, x_3^{(m-1)}, \dots, x_d^{(m-1)}),
\]
\[
\vdots
\]
\[
x_d^{(m)} = \arg\min_{x_d \in \mathbb{R}} f(x_1^{(m)}, x_2^{(m)}, \dots, x_{d-1}^{(m)}, x_d).
\]
Tseng (2001) proves that provided $A_0 = \{x : f(x) \le f(x^{(0)})\}$ is compact, then every converging subsequence of $x^{(m)}$ will converge to a minimiser of $f$. [Note as $x^{(m)} \in A_0$ for all $m$, a converging subsequence must exist by Bolzano-Weierstrass.]
We can replace individual coordinates by blocks of coordinates and the same result holds. That is, if $x = (x_1, \dots, x_B)$ where now $x_b \in \mathbb{R}^{d_b}$ and
\[
f(x) = g(x) + \sum_{b=1}^B h_b(x_b)
\]
with $g$ convex and differentiable and each $h_b : \mathbb{R}^{d_b}\to\mathbb{R}$ convex, then block coordinate descent can be used.
We often want to solve the Lasso on a grid of $\lambda$ values $\lambda_0 > \cdots > \lambda_L$ (for the purposes of cross-validation, for example). To do this, we can first solve for $\lambda_0$, and then solve at subsequent grid points by using the solution at the previous grid point as an initial guess (known as a warm start). An active set strategy can further speed up computation. This works as follows: for $l = 1, \dots, L$:

1. Initialise $A_l = \{k : \hat\beta^L_{\lambda_{l-1},k} \ne 0\}$.

2. Perform coordinate descent only on coordinates in $A_l$, obtaining a solution $\hat\beta$ (all components $\hat\beta_k$ with $k \notin A_l$ are set to zero).

3. Let $V = \{k : |X_k^T(Y - X\hat\beta)|/n > \lambda_l\}$, the set of coordinates which violate the KKT conditions when $\hat\beta$ is taken as a candidate solution.

4. If $V$ is empty, then we set $\hat\beta^L_{\lambda_l} = \hat\beta$. Else we update $A_l := A_l \cup V$ and return to 2.
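As an illustration of the above (an addition to the notes), a minimal coordinate descent sketch for the Lasso objective (2.2.2): with the columns of $X$ centred and scaled to have $\ell_2$-norm $\sqrt{n}$, each coordinate update is a soft-thresholding step, and passing the solution at the previous $\lambda$ as `beta` gives a warm start. The active set strategy is omitted to keep the sketch short.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, Y, lam, beta=None, n_iter=100):
    """Coordinate descent for (1/(2n))||Y - X beta||_2^2 + lam ||beta||_1.
    Assumes centred Y and centred columns of X scaled to l2-norm sqrt(n)."""
    n, p = X.shape
    beta = np.zeros(p) if beta is None else beta.copy()
    r = Y - X @ beta                      # current residual
    for _ in range(n_iter):
        for k in range(p):
            r = r + X[:, k] * beta[k]     # partial residual with coordinate k removed
            beta[k] = soft_threshold(X[:, k] @ r / n, lam)
            r = r - X[:, k] * beta[k]
    return beta
```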

2.3 Extensions of the Lasso

We can add an $\ell_1$ penalty to many other log-likelihoods besides that arising from the normal linear model. For Lasso-penalised generalised linear models, such as logistic regression, similar theoretical results to those we have obtained are available and computations can proceed in a similar fashion to above.

2.3.1 Structural penalties

The Lasso penalty encourages the estimated coefficients to be shrunk towards 0 and sometimes exactly to 0. Other penalty functions can be constructed to encourage different types of sparsity. Suppose we have a partition $G_1, \dots, G_q$ of $\{1, \dots, p\}$ (so $\cup_{k=1}^q G_k = \{1, \dots, p\}$ and $G_j \cap G_k = \emptyset$ for $j \ne k$). The group Lasso penalty (Yuan & Lin, 2006) is given by
\[
\lambda\sum_{j=1}^q m_j\|\beta_{G_j}\|_2.
\]
The multipliers $m_j > 0$ serve to balance cases where the groups are of very different sizes; typically we choose $m_j = \sqrt{|G_j|}$. This penalty encourages either an entire group to have $\hat\beta_G = 0$, or $\hat\beta_k \ne 0$ for all $k \in G$. Such a property is useful when groups occur through coding for categorical predictors or when expanding predictors using basis functions.

2.3.2 Reducing the bias of the Lasso

One potential drawback of the Lasso is that the same shrinkage effect that sets many estimated coefficients exactly to zero also shrinks all non-zero estimated coefficients towards zero. One possible solution is to take $\hat S_\lambda = \{k : \hat\beta^L_{\lambda,k} \ne 0\}$ and then re-estimate $\beta^0_{\hat S_\lambda}$ by OLS regression on $X_{\hat S_\lambda}$.

Another option is to re-estimate using the Lasso on $X_{\hat S_\lambda}$; this procedure is known as the relaxed Lasso (Meinshausen, 2006). The adaptive Lasso takes an initial estimate of $\beta^0$, $\hat\beta^{\mathrm{init}}$ (e.g. from the Lasso), and then performs a weighted Lasso regression:
\[
\hat\beta^{\mathrm{adapt}}_\lambda = \arg\min_{\beta \in \mathbb{R}^p : \beta_{\hat S_{\mathrm{init}}^c} = 0}
\left\{\frac{1}{2n}\|Y - X\beta\|_2^2 + \lambda\sum_{k \in \hat S_{\mathrm{init}}}\frac{|\beta_k|}{|\hat\beta^{\mathrm{init}}_k|}\right\},
\]
where $\hat S_{\mathrm{init}} = \{k : \hat\beta^{\mathrm{init}}_k \ne 0\}$.
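A sketch of the adaptive Lasso (an addition; `lasso_solver` stands for any ordinary Lasso solver with the interface `lasso_solver(X, Y, lam)`, for instance the coordinate descent sketch earlier): rescaling the active columns of $X$ by $|\hat\beta^{\mathrm{init}}_k|$ turns the weighted problem into an ordinary Lasso.

```python
import numpy as np

def adaptive_lasso(X, Y, lam, beta_init, lasso_solver):
    """Adaptive Lasso via column rescaling: substituting beta_k = |beta_init_k| * gamma_k
    turns the weighted l1 penalty into an ordinary l1 penalty on gamma."""
    active = np.flatnonzero(beta_init)          # the set S_init
    w = np.abs(beta_init[active])
    gamma = lasso_solver(X[:, active] * w, Y, lam)   # ordinary Lasso on rescaled columns
    beta = np.zeros(X.shape[1])
    beta[active] = w * gamma                    # undo the rescaling
    return beta
```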


This is closely related to approximating a non-convex penalty. With the latter approach, we take a family of continuous functions
\[
p_\lambda : [0, \infty) \to [0, \infty)
\]
that are differentiable on $(0, \infty)$. Since the direct minimisation of the penalised objective
\[
\frac{1}{2n}\|Y - X\beta\|_2^2 + \sum_{k=1}^p p_\lambda(|\beta_k|)
\]
is computationally intensive for $p_\lambda$ non-convex, we can use a local linear approximation (Zou & Li, 2008). Consider a Taylor approximation
\[
p_\lambda(|\beta_k|) \approx p_\lambda(|\hat\beta_k|) + p_\lambda'(|\hat\beta_k|)(|\beta_k| - |\hat\beta_k|).
\]
Based on this we can set $\hat\beta^{(0)}$ as the Lasso solution and then iteratively compute
\[
\hat\beta^{(m)} = \arg\min_\beta\left\{\frac{1}{2n}\|Y - X\beta\|_2^2 + \sum_{k=1}^p p_\lambda'(|\hat\beta^{(m-1)}_k|)|\beta_k|\right\}.
\]
This is a weighted Lasso regression. A prominent example of a non-convex penalty is the SCAD penalty (Fan & Li, 2001), which is defined by $p_\lambda(0) = 0$ and, for $u > 0$,
\[
p_\lambda'(u) = \lambda 1_{\{u \le \lambda\}} + \frac{(a\lambda - u)_+}{a - 1}1_{\{u > \lambda\}},
\]
where $a$ is an additional parameter typically set at 3.7 (this can be motivated by a Bayesian argument).


Chapter 3
Graphical modelling and causal
inference
So far we have considered methods for relating a particular response to a large collection
of explanatory variables, and we have been primarily interested in predicting the response
given the covariates.
In some settings however, we do not have a distinguished response variable and instead
we would like to better understand relationships between all the variables. In other situations, rather than being able to predict variables, we would like to understand causal
relationships between them. Representing relationships between random variables through
graphs will be an important tool in tackling these problems.

3.1

Graphs

Definition 2. A graph is a pair G = (V, E) where V is a set of vertices or nodes and


E V V with (v, v)
/ E for any v V is a set of edges.
Let Z = (Z1 , . . . , Zp )T be a collection of random variables. The graphs we will consider
will always have V = {1, . . . , p} so V indexes the random variables.
Let j, k V .
We say there is an edge between $j$ and $k$, and that $j$ and $k$ are adjacent, if either $(j, k) \in E$ or $(k, j) \in E$.
An edge $(j, k)$ is undirected if also $(k, j) \in E$; otherwise it is directed and we may write $j \to k$ to represent this.
If all edges in the graph are (un)directed we call it an (un)directed graph. We can
represent graphs as pictures: for example, we can draw the graph when p = 4 and
E = {(2, 1), (3, 4), (2, 3)} as

23

Z1

Z2

Z3

Z4

If instead we have E = {(1, 2), (2, 1), (2, 4), (4, 2)} we get the undirected graph
Z1

Z2

Z3

Z4

A graph G1 = (V1 , E1 ) is a subgraph of G = (V, E) if V1 V and E1 E and a


proper subgraph if either of these are proper inclusions.
Say j is a parent of k and k is a child of j if j k. The sets of parents and children
of k will be denoted pa(k) and ch(k) respectively.
A set of three nodes is called a v-structure if one node is a child of the two other
nodes, and these two nodes are not adjacent.
The skeleton of G is a copy of G with every edge replaced by an undirected edge.
A path from j to k is a sequence j = j1 , j2 , . . . , jm = k of (at least two) distinct
vertices such that jl and jl+1 are adjacent. Such a path is a directed path if jl jl+1
for all l. We then call k a descendant of j. The set of descendants of j will be denoted
de(j). If jl1 jl jl+1 , jl is called a collider (relative to the path).
A directed cycle is (almost) a directed path but with the start and end points the
same. A partially directed acyclic graph (PDAG) is a graph containing no directed
cycles. A directed acyclic graph (DAG) is a directed graph containing no directed
cycles.
In a DAG, a path between $j_1$ and $j_m$, $(j_1, j_2, \dots, j_m)$, is blocked by a set $S$ with neither $j_1$ nor $j_m$ in $S$ whenever there is a node $j_l$ such that one of the following two possibilities holds:

1. $j_l \in S$ and we don't have $j_{l-1} \to j_l \leftarrow j_{l+1}$;

2. $j_{l-1} \to j_l \leftarrow j_{l+1}$ and neither $j_l$ nor any of its descendants is in $S$.
Given a triple of subsets of nodes A, B, S, we say S separates A from B if every path
from a node in A to a node in B contains a node in S.


If G is a DAG, given a triple of subsets of nodes A, B, S, we say S d-separates A from


B if S blocks every path from A to B.
The moralised graph of a DAG G is the undirected graph obtained by adding edges
between (marrying) the parents of each node and removing all edge directions.

Proposition 16. Given a DAG $G$ with $V = \{1, \dots, p\}$, we say that a permutation $\pi$ of $V$ is a topological (or causal) ordering of the variables if it satisfies
\[
\pi(j) < \pi(k) \quad\text{whenever } k \in \mathrm{de}(j).
\]
Every DAG has a topological ordering.

Proof. We use induction on the number of nodes $p$. Clearly the result is true when $p = 1$. Now we show that in any DAG, we can find a node with no parents. Pick any node and move to one of its parents, if possible. Then move to one of the new node's parents, and continue in this fashion. This procedure must terminate since no node can be visited twice, or we would have found a cycle. The final node we visit must therefore have no parents; we call it a source node.

Suppose then that $p \ge 2$, and we know that all DAGs with $p-1$ nodes have a topological ordering. Find a source $s$ (wlog $s = p$) and form a new DAG $G'$ with $p-1$ nodes by removing the source (and all edges emanating from it). Note we keep the labelling of the nodes in this new DAG the same. This smaller DAG must have a topological ordering $\tilde\pi$. A topological ordering for our original DAG is then given by $\pi(s) = 1$ and $\pi(k) = \tilde\pi(k) + 1$ for $k \ne s$.
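The proof above is constructive: repeatedly remove a source node. A short sketch (an addition to the notes; the dictionary-of-parents representation is just one convenient encoding of a DAG):

```python
def topological_order(parents):
    """Return a topological ordering of a DAG given as {node: set of parents}."""
    parents = {k: set(v) for k, v in parents.items()}
    order = []
    while parents:
        source = next(k for k, pa in parents.items() if not pa)   # a node with no parents
        order.append(source)
        del parents[source]
        for pa in parents.values():
            pa.discard(source)
    return order

# Example: the DAG with edges 3 -> 2, 3 -> 1 and 2 -> 1
print(topological_order({1: {2, 3}, 2: {3}, 3: set()}))   # e.g. [3, 2, 1]
```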

3.2 Conditional independence graphs

We would like to understand which variables may be related to each other. Trying to find pairs of variables that are independent, and so unlikely to be related to each other, is not necessarily a good way to proceed, as each variable may be correlated with a large number of variables without being directly related to them. A better approach is to use conditional independence.

Definition 3. If $X$, $Y$ and $Z$ are random vectors with a joint density $f_{XYZ}$ (w.r.t. a product measure $\mu$) then we say $X$ is conditionally independent of $Y$ given $Z$, and write
\[
X \perp\!\!\!\perp Y \mid Z,
\]
if
\[
f_{XY|Z}(x, y|z) = f_{X|Z}(x|z)f_{Y|Z}(y|z).
\]
Equivalently,
\[
X \perp\!\!\!\perp Y \mid Z \iff f_{X|YZ}(x|y, z) = f_{X|Z}(x|z).
\]

We will first look at how undirected graphs can be used to visualise conditional independencies between random variables; thus in the next few subsections by graph we will mean undirected graph.

Let $Z = (Z_1, \dots, Z_p)^T$ be a collection of random variables with joint law $P$ and consider a graph $G = (V, E)$ where $V = \{1, \dots, p\}$. Some notation: let $-k$ and $-jk$ when in subscripts denote the sets $\{1, \dots, p\}\setminus\{k\}$ and $\{1, \dots, p\}\setminus\{j, k\}$ respectively.

Definition 4. We say that $P$ satisfies the pairwise Markov property w.r.t. $G$ if for any pair $j, k \in V$ with $j \ne k$ and $\{j, k\} \notin E$,
\[
Z_j \perp\!\!\!\perp Z_k \mid Z_{-jk}.
\]
Note that the complete graph that has edges between every pair of vertices will satisfy the pairwise Markov property for any $P$. The minimal graph satisfying the pairwise Markov property w.r.t. a given $P$ is called the conditional independence graph (CIG) for $P$.

Definition 5. We say $P$ satisfies the global Markov property w.r.t. $G$ if for any triple $(A, B, S)$ of disjoint subsets of $V$ such that $S$ separates $A$ from $B$, we have
\[
Z_A \perp\!\!\!\perp Z_B \mid Z_S.
\]
Proposition 17. If $P$ has a positive density (w.r.t. some product measure) then if it satisfies the pairwise Markov property w.r.t. a graph $G$, it also satisfies the global Markov property w.r.t. $G$, and vice versa.

3.3 Gaussian graphical models

Estimating the CIG given samples from $P$ is a difficult task in general. However, in the case where $P$ is multivariate Gaussian, things simplify considerably, as we shall see. We begin with some notation. For a matrix $M \in \mathbb{R}^{p\times p}$ and sets $A, B \subseteq \{1, \dots, p\}$, let $M_{A,B}$ be the $|A|\times|B|$ submatrix of $M$ consisting of those rows and columns of $M$ indexed by the sets $A$ and $B$ respectively. The submatrix extraction operation is always performed first (so e.g. $M_{-k,k}^T = (M_{-k,k})^T$).

3.3.1 Normal conditionals

Now let $Z \sim N_p(\mu, \Sigma)$ with $\Sigma$ positive definite. Note $\Sigma_{A,A}$ is also positive definite for any $A$.

Proposition 18.
\[
Z_A \mid Z_B = z_B \sim N_{|A|}\big(\mu_A + \Sigma_{A,B}\Sigma_{B,B}^{-1}(z_B - \mu_B),\ \Sigma_{A,A} - \Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,A}\big).
\]
Proof. Idea: write $Z_A = MZ_B + (Z_A - MZ_B)$ with matrix $M \in \mathbb{R}^{|A|\times|B|}$ such that $Z_A - MZ_B$ and $Z_B$ are independent, i.e. such that
\[
\mathrm{Cov}(Z_B, Z_A - MZ_B) = \Sigma_{B,A} - \Sigma_{B,B}M^T = 0.
\]
This occurs when we take $M^T = \Sigma_{B,B}^{-1}\Sigma_{B,A}$. Because $Z_A - MZ_B$ and $Z_B$ are independent, the distribution of $Z_A - MZ_B$ conditional on $Z_B = z_B$ is equal to its unconditional distribution. Now
\[
\mathbb{E}(Z_A - MZ_B) = \mu_A - \Sigma_{A,B}\Sigma_{B,B}^{-1}\mu_B,
\]
\[
\mathrm{Var}(Z_A - MZ_B) = \Sigma_{A,A} + \Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,B}\Sigma_{B,B}^{-1}\Sigma_{B,A} - 2\Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,A}
= \Sigma_{A,A} - \Sigma_{A,B}\Sigma_{B,B}^{-1}\Sigma_{B,A}.
\]
Since $MZ_B$ is a function of $Z_B$ and $Z_A - MZ_B$ is normally distributed, we have the result.

3.3.2 Nodewise regression

Specialising to the case where $A = \{k\}$ and $B = \{k\}^c$, we see that when conditioning on $Z_{-k} = z_{-k}$, we may write
\[
Z_k = m_k + z_{-k}^T\Sigma_{-k,-k}^{-1}\Sigma_{-k,k} + \varepsilon_k,
\]
where
\[
m_k = \mu_k - \Sigma_{k,-k}\Sigma_{-k,-k}^{-1}\mu_{-k},
\qquad
\varepsilon_k \mid Z_{-k} = z_{-k} \sim N(0,\ \Sigma_{k,k} - \Sigma_{k,-k}\Sigma_{-k,-k}^{-1}\Sigma_{-k,k}).
\]
Note that if the $j$th element of the vector of coefficients $\Sigma_{-k,-k}^{-1}\Sigma_{-k,k}$ is zero, then the distribution of $Z_k$ conditional on $Z_{-k}$ will not depend at all on the $j$th component of $Z_{-k}$. Then if that $j$th component was $Z_{j'}$, we would have that $Z_k \mid Z_{-k} = z_{-k}$ has the same distribution as $Z_k \mid Z_{-j'k} = z_{-j'k}$, so $Z_k \perp\!\!\!\perp Z_{j'} \mid Z_{-j'k}$.

Thus given $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} Z$ and writing
\[
X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix},
\]
we may estimate the coefficient vector $\Sigma_{-k,-k}^{-1}\Sigma_{-k,k}$ by regressing $X_k$ on $X_{\{k\}^c}$ and including an intercept term.

The technique of nodewise regression (Meinshausen & Bühlmann, 2006) involves performing such a regression for each variable, using the Lasso. There are two options for populating our estimate of the CIG with edges based on the Lasso estimates. Writing $\hat S_k$ for the selected set of variables when regressing $X_k$ on $X_{\{k\}^c}$, we can use the OR rule and put an edge between vertices $j$ and $k$ if and only if $k \in \hat S_j$ or $j \in \hat S_k$. An alternative is the AND rule where we put an edge between $j$ and $k$ if and only if $k \in \hat S_j$ and $j \in \hat S_k$.
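A sketch of nodewise regression with the OR rule (an addition to the notes; it uses scikit-learn's `LassoCV` purely for convenience, which fits an intercept and tunes its own penalty, and any Lasso solver would do).

```python
import numpy as np
from sklearn.linear_model import LassoCV

def nodewise_or(X):
    """Estimate CIG edges by Lasso-regressing each column on all the others (OR rule)."""
    n, p = X.shape
    selected = []
    for k in range(p):
        others = [j for j in range(p) if j != k]
        fit = LassoCV(cv=5).fit(X[:, others], X[:, k])        # regress X_k on X_{-k}
        selected.append({others[j] for j in np.flatnonzero(fit.coef_)})
    edges = set()
    for k in range(p):
        for j in selected[k]:
            edges.add(tuple(sorted((j, k))))                   # OR rule: either selects the other
    return edges
```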
Another popular approach to estimating the CIG works by first directly estimating the precision matrix $\Omega$, as we'll now see.

3.3.3 The precision matrix and conditional independence

The following facts about blockwise inversion of matrices will help us to interpret the mean and variance in Proposition 18.

Proposition 19. Let $M \in \mathbb{R}^{p\times p}$ be a symmetric positive definite matrix and suppose
\[
M = \begin{pmatrix} P & Q \\ Q^T & R \end{pmatrix}
\]
with $P$ and $R$ square matrices. The Schur complement of $R$ is $P - QR^{-1}Q^T =: S$. We have that $S$ is positive definite and
\[
M^{-1} = \begin{pmatrix} S^{-1} & -S^{-1}QR^{-1} \\ -R^{-1}Q^TS^{-1} & R^{-1} + R^{-1}Q^TS^{-1}QR^{-1} \end{pmatrix}.
\]
Furthermore $\det(M) = \det(S)\det(R)$.

Let $\Omega = \Sigma^{-1}$ be the precision matrix. We see that $\mathrm{Var}(Z_A \mid Z_{A^c}) = \Omega_{A,A}^{-1}$. Moreover, considering the case when $A = \{j, k\}$, we have
\[
\mathrm{Var}(Z_{\{j,k\}} \mid Z_{-jk}) = \frac{1}{\det(\Omega_{A,A})}\begin{pmatrix} \Omega_{kk} & -\Omega_{jk} \\ -\Omega_{jk} & \Omega_{jj} \end{pmatrix}.
\]
Thus
\[
Z_k \perp\!\!\!\perp Z_j \mid Z_{-jk} \iff \Omega_{jk} = 0.
\]
This motivates another approach to estimating the CIG.

3.3.4 The Graphical Lasso

Recall that the density of $N_p(\mu, \Sigma)$ is
\[
f(z) = \frac{1}{(2\pi)^{p/2}\det(\Sigma)^{1/2}}\exp\left\{-\frac{1}{2}(z - \mu)^T\Sigma^{-1}(z - \mu)\right\}.
\]
The log-likelihood of $(\mu, \Omega)$ based on an i.i.d. sample $x_1, \dots, x_n$ is
\[
\ell(\mu, \Omega) = \frac{n}{2}\log\det(\Omega) - \frac{1}{2}\sum_{i=1}^n (x_i - \mu)^T\Omega(x_i - \mu).
\]

Write
\[
\bar X = \frac{1}{n}\sum_{i=1}^n x_i, \qquad S = \frac{1}{n}\sum_{i=1}^n (x_i - \bar X)(x_i - \bar X)^T.
\]
Then
\[
\sum_{i=1}^n (x_i - \mu)^T\Omega(x_i - \mu)
= \sum_{i=1}^n (x_i - \bar X + \bar X - \mu)^T\Omega(x_i - \bar X + \bar X - \mu)
= \sum_{i=1}^n (x_i - \bar X)^T\Omega(x_i - \bar X) + n(\bar X - \mu)^T\Omega(\bar X - \mu) + 2\sum_{i=1}^n (x_i - \bar X)^T\Omega(\bar X - \mu),
\]
and the final term is zero since $\sum_{i=1}^n (x_i - \bar X) = 0$. Also,
\[
\sum_{i=1}^n (x_i - \bar X)^T\Omega(x_i - \bar X)
= \sum_{i=1}^n \mathrm{tr}\{(x_i - \bar X)^T\Omega(x_i - \bar X)\}
= \sum_{i=1}^n \mathrm{tr}\{\Omega(x_i - \bar X)(x_i - \bar X)^T\}
= n\,\mathrm{tr}(\Omega S).
\]
Thus

n
)T (X
)}
`(, ) = {tr(S) log det() + (X
2

and

n
maxp `(, ) = {tr(S) log det()}.
R
2
M L can be obtained by solving
Hence the maximum likelihood estimate of ,
min { log det() + tr(S)},

:0

where  0 means is positive definite. One can show that the objective is convex and
we are minimising over a convex set. As

log det() = (1 )kj = (1 )jk ,


jk

tr(S) = Skj = Sjk ,


jk
M L = S 1 .
if X has full column rank so S is positive definite,
The graphical Lasso penalises the log-likelihood for and solves
min { log det() + tr(S) + kk1 },

:0

P
where kk1 = j,k |jk |; this results in a sparse estimate of the precision matrix from
which an estimate of the CIG can be constructed.
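A minimal sketch (an addition; it relies on scikit-learn's `GraphicalLasso`, whose penalty parameter `alpha` plays the role of $\lambda$, with details such as penalisation of the diagonal differing slightly from the display above).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # simulated data; replace with real observations
model = GraphicalLasso(alpha=0.1).fit(X)
Omega_hat = model.precision_                     # sparse estimate of the precision matrix
edges = [(j, k) for j in range(5) for k in range(j + 1, 5)
         if abs(Omega_hat[j, k]) > 1e-8]         # estimated CIG edges
print(edges)
```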

3.4 Structural equation models

Conditional independence graphs give us some understanding of the relationships between variables. However, they do not tell us how, if we were to set the $k$th variable to a particular value, say 0.5, the distribution of the other variables would be altered. Yet this is often the sort of question that we would like to answer.

In order to reach this more ambitious goal, we introduce the notion of structural equation models (SEMs). These give a way of representing the data generating process. We will now have to make use of not just undirected graphs but other sorts of graphs (and particularly DAGs), so by graph we will now mean any sort of graph satisfying Definition 2.

Definition 6. A structural equation model $\mathcal{S}$ for a random vector $Z \in \mathbb{R}^p$ is a collection of $p$ equations
\[
Z_k = h_k(Z_{P_k}, \varepsilon_k), \qquad k = 1, \dots, p,
\]
where

$\varepsilon_1, \dots, \varepsilon_p$ are all independent random variables;

$P_k \subseteq \{1, \dots, p\}\setminus\{k\}$ are such that the graph with edges given by $P_k$ being $\mathrm{pa}(k)$ is a DAG.

Example 3.4.1. Consider the following (totally artificial) SEM which has whether you are taking this course ($Z_1 = 1$) depending on whether you went to the statistics catch-up lecture ($Z_2 = 1$) and whether you have heard about machine learning ($Z_3 = 1$). Suppose
\[
Z_3 = \varepsilon_3 \sim \mathrm{Bern}(0.25),
\]
\[
Z_2 = 1_{\{0.5\varepsilon_2(1 + Z_3) > 0.25\}}, \qquad \varepsilon_2 \sim U[0, 1],
\]
\[
Z_1 = 1_{\{0.5\varepsilon_1(Z_2 + Z_3) > 0.25\}}, \qquad \varepsilon_1 \sim U[0, 1].
\]
The corresponding DAG is

[diagram: DAG on $Z_1, Z_2, Z_3$ with edges $Z_3 \to Z_2$, $Z_3 \to Z_1$ and $Z_2 \to Z_1$]

Note that an SEM for $Z$ determines its law. Indeed, using a topological ordering $\pi$ for the associated DAG, we can write each $Z_k$ as a function of $\varepsilon_{\pi^{-1}(1)}, \varepsilon_{\pi^{-1}(2)}, \dots, \varepsilon_{\pi^{-1}(\pi(k))}$. Importantly, though, we can use it to tell us much more than simply the law of $Z$: for example, we can query properties of the distribution of $Z$ after having set a particular component to any given value. This is what we study next.


3.5 Interventions

Given an SEM $\mathcal{S}$, we can replace one (or more) of the structural equations by a new structural equation; for example, for a chosen variable $k$ we could replace the structural equation $Z_k = h_k(Z_{P_k}, \varepsilon_k)$ by $Z_k = \tilde h_k(Z_{\tilde P_k}, \tilde\varepsilon_k)$. This gives us a new structural equation model $\tilde{\mathcal{S}}$ which in turn determines a new joint law for $Z$.

When we have $\tilde h_k(Z_{\tilde P_k}, \tilde\varepsilon_k) = a$ for some $a \in \mathbb{R}$, so we are setting the value of $Z_k$ to be $a$, we call this a (perfect) intervention. Expectations and probabilities under this new law for $Z$ are written by adding $|\,do(Z_k = a)$, e.g. $\mathbb{E}(Z_j \mid do(Z_k = a))$. Note that this will in general be different from the conditional expectation $\mathbb{E}(Z_j \mid Z_k = a)$.
Example 3.4.1 (continued). After the intervention $do(Z_2 = 1)$ (everyone is forced to go to the statistics catch-up lecture), we have a new SEM $\tilde{\mathcal{S}}$:
\[
Z_3 = \varepsilon_3 \sim \mathrm{Bern}(0.25),
\]
\[
Z_2 = 1,
\]
\[
Z_1 = 1_{\{0.5\varepsilon_1(1 + Z_3) > 0.25\}}, \qquad \varepsilon_1 \sim U[0, 1].
\]
Thus $\mathbb{P}(Z_1 = 1 \mid do(Z_2 = 1)) = \frac{1}{4}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{1}{2} = \frac{9}{16}$. On the other hand,
\[
\mathbb{P}(Z_1 = 1 \mid Z_2 = 1) = \sum_{j\in\{0,1\}}\mathbb{P}(Z_1 = 1 \mid Z_2 = 1, Z_3 = j)\,\mathbb{P}(Z_3 = j \mid Z_2 = 1)
\]
\[
= \frac{1}{\mathbb{P}(Z_2 = 1)}\sum_{j\in\{0,1\}}\mathbb{P}(Z_1 = 1 \mid Z_2 = 1, Z_3 = j)\,\mathbb{P}(Z_2 = 1 \mid Z_3 = j)\,\mathbb{P}(Z_3 = j)
\]
\[
= \frac{\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{3}{4}\cdot\frac{1}{4}}{\frac{1}{2}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{1}{4}} = \frac{7}{12} \ne \frac{9}{16}.
\]
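The two probabilities can also be checked by simulating the SEM directly (an addition to the notes, using the structural equations of Example 3.4.1):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
eps1, eps2 = rng.uniform(size=N), rng.uniform(size=N)
Z3 = (rng.uniform(size=N) < 0.25).astype(int)            # Bern(0.25)

# Observational SEM
Z2 = (0.5 * eps2 * (1 + Z3) > 0.25).astype(int)
Z1 = (0.5 * eps1 * (Z2 + Z3) > 0.25).astype(int)
print(Z1[Z2 == 1].mean())      # approximately 7/12, i.e. P(Z1 = 1 | Z2 = 1)

# SEM after the intervention do(Z2 = 1)
Z1_do = (0.5 * eps1 * (1 + Z3) > 0.25).astype(int)
print(Z1_do.mean())            # approximately 9/16, i.e. P(Z1 = 1 | do(Z2 = 1))
```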

3.6 The Markov properties on DAGs

The DAG of an SEM can encode a number of conditional independencies present in the law of the random vector $Z$. To understand this, we first introduce Markov properties on DAGs similar to the Markov properties on undirected graphs we have already studied.

Let $P$ be the joint law of $Z$ and suppose it has a density $f$.

Definition 7. Given a DAG $G$, we say $P$ satisfies the

(i) Markov factorisation property w.r.t. the DAG $G$ if
\[
f(z_1, \dots, z_p) = \prod_{k=1}^p f(z_k \mid z_{\mathrm{pa}(k)});
\]
(ii) global Markov property w.r.t. the DAG $G$ if for all disjoint $A, B, S \subseteq \{1, \dots, p\}$,
\[
A, B \text{ d-separated by } S \Rightarrow Z_A \perp\!\!\!\perp Z_B \mid Z_S.
\]
Theorem 20. If $P$ has a density $f$ (with respect to a product measure), then all Markov properties in Definition 7 are equivalent.
In view of this, we will henceforth use the term Markov to mean global Markov.
Proposition 21. Let P be the law of an SEM with DAG G. Then P obeys the Markov
factorisation property w.r.t. G.
Thus we can read off from the DAG of an SEM a great deal of information concerning the
distribution it generates. We can use this to help us calculate the effects of interventions.
We have now seen how an SEM can be used not only to query properties of the joint distribution, but also to determine the effects of certain perturbations to the system. In many settings, we may not have a prespecified SEM to work with, but instead we would like to
learn the DAG from observational data. This is the problem we turn to next.

3.7 Causal structure learning

Given a sample of observations from P , we would like to determine the DAG which generated it. We can think of this task in terms of two subtasks: firstly we need to understand
how to extract information concerning P from a sample, which is a traditional statistical
question of the sort we are used to; secondly, given P itself, we need to relate this to the
DAG which generated it. The latter problem is unique to causal inference, and we discuss
this first.

3.7.1 Three obstacles

There are three obstacles to causal structure learning. The first two are more immediate
but the last is somewhat subtle.
Causal minimality
We know that if P is generated by an SEM with DAG G, then P will be Markov w.r.t. G.
Conversely, one can show that if P is Markov w.r.t. a DAG G, then there is also an SEM
with DAG G that could have generated P . But P will be Markov w.r.t. a great number of
DAGs: e.g. $Z_1$ and $Z_2$ being independent can be represented by
$$Z_1 = 0 \cdot Z_2 + \varepsilon_1 = \varepsilon_1, \qquad Z_2 = \varepsilon_2,$$
an SEM whose DAG contains the (redundant) edge $Z_2 \to Z_1$.

This motivates the following definition.


Definition 8. P satisfies causal minimality with respect to G if it is (global) Markov w.r.t.
G but not to a proper subgraph of G with the same nodes.

Markov equivalent DAGs


It is possible for two different DAGs to satisfy the same collection of d-separations, e.g. $Z_1 \to Z_2$ and $Z_1 \leftarrow Z_2$.
For a DAG $\mathcal{G}$, let
$$\mathcal{M}(\mathcal{G}) = \{\text{distributions } P : P \text{ satisfies the global Markov property w.r.t. } \mathcal{G}\}.$$
Definition 9. We say two DAGs G1 and G2 are Markov equivalent if M(G1 ) = M(G2 ).
Proposition 22. Two DAGs are Markov equivalent if and only if they have the same
skeleton and v-structures.
The set of all DAGs that are Markov equivalent to a given DAG can be represented by a completed PDAG (CPDAG), which contains an edge $(j, k)$ if and only if one member of the Markov equivalence class does. We can only ever hope to recover the Markov equivalence class, i.e. the CPDAG, of a DAG with which $P$ satisfies causal minimality (unless we place restrictions on the functional forms of the SEM equations).
Faithfulness
Consider the following SEM:
$$Z_1 = \varepsilon_1,$$
$$Z_2 = \alpha Z_1 + \varepsilon_2,$$
$$Z_3 = \beta Z_1 + \gamma Z_2 + \varepsilon_3,$$
where $\varepsilon \sim N_3(0, I)$. Then $(Z_1, Z_2, Z_3) \sim N_3(0, \Sigma) =: P^0$ with
$$\Sigma = \begin{pmatrix} 1 & \alpha & \beta + \alpha\gamma \\ \alpha & \alpha^2 + 1 & \alpha\beta + \gamma(\alpha^2 + 1) \\ \beta + \alpha\gamma & \alpha\beta + \gamma(\alpha^2 + 1) & \beta^2 + \gamma^2(\alpha^2 + 1) + 2\alpha\beta\gamma + 1 \end{pmatrix}.$$
If $\beta + \alpha\gamma = 0$, e.g. if $\beta = -1$ and $\alpha, \gamma = 1$, then $Z_1 \perp\!\!\!\perp Z_3$. We claim that in this case $P^0$ can also be generated by the SEM
$$Z_1 = \tilde{\varepsilon}_1,$$
$$Z_2 = Z_1 + \tilde{\gamma} Z_3 + \tilde{\varepsilon}_2,$$
$$Z_3 = \tilde{\varepsilon}_3.$$
Here the $\tilde{\varepsilon}_j$ are independent with $\tilde{\varepsilon}_1 \sim N(0, 1)$, $\tilde{\varepsilon}_3 \sim N(0, 2)$, $\tilde{\gamma} = 1/2$ and $\tilde{\varepsilon}_2 \sim N(0, 1/2)$.
Writing the DAGs for the two SEMs above as $\mathcal{G}$ and $\tilde{\mathcal{G}}$, note that $P^0$ satisfies causal minimality w.r.t. both $\mathcal{G}$ and $\tilde{\mathcal{G}}$.

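As a quick numerical check of this claim, the sketch below uses the standard identity $\mathrm{Cov}(Z) = (I - B)^{-1}\mathrm{Cov}(\varepsilon)(I - B)^{-\top}$ for a linear SEM $Z = BZ + \varepsilon$ (a fact not proved in these notes) to confirm that the two SEMs generate the same covariance matrix when $\alpha = \gamma = 1$ and $\beta = -1$:

```python
import numpy as np

alpha, beta, gamma = 1.0, -1.0, 1.0   # so that beta + alpha * gamma = 0

# First SEM: Z1 = e1, Z2 = alpha*Z1 + e2, Z3 = beta*Z1 + gamma*Z2 + e3, e ~ N(0, I).
B1 = np.array([[0.0,   0.0,   0.0],
               [alpha, 0.0,   0.0],
               [beta,  gamma, 0.0]])
M1 = np.linalg.inv(np.eye(3) - B1)
Sigma1 = M1 @ np.eye(3) @ M1.T

# Second SEM: Z1 = e1~, Z2 = Z1 + (1/2)*Z3 + e2~, Z3 = e3~,
# with Var(e1~) = 1, Var(e2~) = 1/2, Var(e3~) = 2.
B2 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.5],
               [0.0, 0.0, 0.0]])
M2 = np.linalg.inv(np.eye(3) - B2)
Sigma2 = M2 @ np.diag([1.0, 0.5, 2.0]) @ M2.T

print(np.allclose(Sigma1, Sigma2))   # True: both SEMs generate P^0
print(Sigma1)                        # the (1,3) entry is 0, i.e. Z1 and Z3 are independent
```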

Definition 10. We say $P$ is faithful to the DAG $\mathcal{G}$ if it is Markov w.r.t. $\mathcal{G}$ and for all disjoint $A, B, S \subseteq \{1, \ldots, p\}$,
$$A, B \text{ d-separated by } S \;\Leftarrow\; Z_A \perp\!\!\!\perp Z_B \,|\, Z_S.$$
Faithfulness demands that all conditional independencies in $P$ are represented in the DAG. In our example, $P^0$ is not faithful to $\mathcal{G}$, but it is faithful to $\tilde{\mathcal{G}}$.

3.7.2 The PC algorithm

Proposition 23. If nodes $j$ and $k$ in a DAG $\mathcal{G}$ are adjacent, then no set can d-separate them. If they are not adjacent and $\sigma$ is a topological order with $\sigma(j) < \sigma(k)$, then they are d-separated by $\mathrm{pa}(k)$.
Proof. Consider a path $j = j_1, \ldots, j_m = k$. We may assume we don't have $j_{m-1} \to k$, as otherwise the path would be blocked, since $j_{m-1} \in \mathrm{pa}(k)$. Let $l$ be the largest $l'$ with $j_{l'-1} \to j_{l'} \leftarrow j_{l'+1}$; this must exist, as otherwise we would have a directed path from $k$ to $j$, contradicting the topological ordering. In order for the path to be active, $j_l$ must have a descendant in $\mathrm{pa}(k)$, but this would introduce a cycle.
This shows in particular that any non-adjacent nodes must have a d-separating set. If we assume that $P$ is faithful w.r.t. a DAG $\mathcal{G}$, we can check whether nodes $j$ and $k$ are adjacent in $\mathcal{G}$ by testing whether there is a set $S$ with $Z_j \perp\!\!\!\perp Z_k \,|\, Z_S$. If there is no such set $S$, then $j$ and $k$ must be adjacent. This allows us to recover the skeleton of $\mathcal{G}$.
Proposition 24. Suppose we have a triple of nodes $j, k, l$ in a DAG and the only non-adjacent pair is $j, k$ (i.e. in the skeleton, $j - l - k$).
(i) If the nodes are in a v-structure ($j \to l \leftarrow k$) then no $S$ that d-separates $j$ and $k$ can contain $l$.
(ii) If there exists an $S$ that d-separates $j$ and $k$ and $l \notin S$, then we must have $j \to l \leftarrow k$.
Proof. For (i) note that any set containing $l$ cannot block the path $j, l, k$. For (ii) note we know that the path $j, l, k$ is blocked by $S$, so we must have $j \to l \leftarrow k$.
This last result then allows us to find the v-structures given the skeleton and a d-separating set $S(j, k)$ corresponding to each absent edge. Given a skeleton and the v-structures, it may be possible to orient further edges by making use of the acyclicity of DAGs; we do not cover this here.


Algorithm 1 First part of the PC algorithm: finding the skeleton.
Set $\hat{G}$ to be the complete undirected graph. Set $\ell = -1$.
repeat
    Increment $\ell \leftarrow \ell + 1$.
    repeat
        Select a (new) ordered pair of nodes $j, k$ that are adjacent in $\hat{G}$ and such that $|\mathrm{adj}(\hat{G}, j) \setminus \{k\}| \geq \ell$.
        repeat
            Choose a new $S \subseteq \mathrm{adj}(\hat{G}, j) \setminus \{k\}$ with $|S| = \ell$.
            If $Z_j \perp\!\!\!\perp Z_k \,|\, Z_S$ then delete edges $(j, k)$ and $(k, j)$ and set $S(j, k) = S(k, j) = S$.
        until edges $(j, k), (k, j)$ are deleted or all relevant subsets have been chosen.
    until all relevant ordered pairs have been chosen.
until for every ordered pair $j, k$ that are adjacent in $\hat{G}$ we have $|\mathrm{adj}(\hat{G}, j) \setminus \{k\}| < \ell$.
Population version
The PC algorithm, named after its inventors Peter Spirtes and Clark Glymour, exploits for efficiency the fact that we need not search over all sets $S$, but only over subsets of either $\mathrm{pa}(j)$ or $\mathrm{pa}(k)$. The population version presented here assumes $P$ is known, so conditional independencies can be queried directly; a sample version that is applicable in practice is given in the following subsection. We denote the set of nodes adjacent to a node $j$ in a graph $G$ by $\mathrm{adj}(G, j)$.
Suppose $P$ is faithful to the DAG $\mathcal{G}^0$. At each stage of Algorithm 1 we must have that the skeleton of $\mathcal{G}^0$ is a subgraph of $\hat{G}$. By the end of the algorithm, for each pair $j, k$ adjacent in $\hat{G}$, we would have searched through $\mathrm{adj}(\hat{G}, j)$ and $\mathrm{adj}(\hat{G}, k)$ for sets $S$ such that $Z_j \perp\!\!\!\perp Z_k \,|\, Z_S$, and found none. As $P$ is faithful to $\mathcal{G}^0$, we then know that $j$ and $k$ must be adjacent in $\mathcal{G}^0$. That is, the output of Algorithm 1 is the skeleton of $\mathcal{G}^0$.
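The following is a minimal, order-dependent sketch of Algorithm 1 in Python. It assumes access to a conditional independence oracle `ci_test(j, k, S)` (a hypothetical callable supplied by the user); in the population version this would answer queries about $P$ exactly.

```python
from itertools import combinations

def pc_skeleton(p, ci_test):
    """Naive sketch of Algorithm 1 (PC skeleton search).
    `ci_test(j, k, S)` should return True when Z_j and Z_k are judged
    conditionally independent given Z_S. Returns the estimated skeleton
    as a set of frozensets, together with the separating sets S(j, k)."""
    adj = {j: set(range(p)) - {j} for j in range(p)}   # start from the complete graph
    sep = {}
    ell = -1
    while any(len(adj[j] - {k}) >= ell + 1 for j in adj for k in adj[j]):
        ell += 1
        for j in range(p):
            for k in list(adj[j]):
                if k not in adj[j] or len(adj[j] - {k}) < ell:
                    continue
                for S in combinations(sorted(adj[j] - {k}), ell):
                    if ci_test(j, k, set(S)):
                        adj[j].discard(k)
                        adj[k].discard(j)
                        sep[(j, k)] = sep[(k, j)] = set(S)
                        break
    edges = {frozenset((j, k)) for j in adj for k in adj[j]}
    return edges, sep
```

A sample version is obtained by replacing the oracle with a statistical test, as described in the next subsection.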
Algorithm 2 Second part of the PC algorithm: finding the v-structures.
for all pairs of non-adjacent variables $j, k$ (in the skeleton $\hat{G}$) with common neighbour $l$ do
    If $l \notin S(j, k)$ then orient $j \to l \leftarrow k$.
end for

Sample version
The sample version of the PC algorithm replaces the querying of conditional independence
with a conditional independence test applied to data x1 , . . . , xn . The level of the test
will be a tuning parameter of the method. If the data are assumed to be multivariate
normal, the (sample) partial correlation can be used to test conditional independence since
if Zj Zk |ZS then
Corr(Zj , Zk |ZS ) := jkS = 0.

To compute the sample partial correlation, we regress Xj and Xk on XS and compute the
correlation between the resulting residuals.
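A minimal sketch of this test, assuming multivariate normal data stored in an $n \times p$ array `X`, and using the Fisher $z$-transform of the sample partial correlation (a standard choice of test statistic, not derived in these notes):

```python
import numpy as np
from scipy.stats import norm

def sample_partial_corr(X, j, k, S):
    """Sample partial correlation of X_j and X_k given X_S, computed as in
    the notes: regress X_j and X_k on X_S (plus an intercept) and take the
    correlation of the residuals."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n)] + [X[:, s] for s in S])
    res_j = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
    res_k = X[:, k] - Z @ np.linalg.lstsq(Z, X[:, k], rcond=None)[0]
    return np.corrcoef(res_j, res_k)[0, 1]

def gaussian_ci_test(X, j, k, S, alpha=0.05):
    """Test rho_{jk.S} = 0 via the Fisher z-transform of the sample partial
    correlation. Returns True when conditional independence is not rejected
    at level alpha."""
    S = sorted(S)
    rho = sample_partial_corr(X, j, k, S)
    z = np.sqrt(X.shape[0] - len(S) - 3) * np.arctanh(rho)
    return abs(z) <= norm.ppf(1 - alpha / 2)
```

Plugging `lambda j, k, S: gaussian_ci_test(X, j, k, S)` into the `pc_skeleton` sketch above then gives a naive sample version of the skeleton search, with the test level $\alpha$ acting as the tuning parameter.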


Chapter 4
Multiple testing
In many modern applications, we may be interested in testing many hypotheses simultaneously. Suppose we are interested in testing null hypotheses $H_1, \ldots, H_m$, of which $m_0$ are true and $m - m_0$ are not (we do not mention the alternative hypotheses explicitly).
Consider the following contingency table:
                          Claimed non-significant   Claimed significant (reject)   Total
True null hypotheses               N_00                        N_01                 m_0
False null hypotheses              N_10                        N_11               m - m_0
Total                             m - R                          R                   m

The $N_{ij}$ are unobserved random variables; $R$ is observed.


Suppose we have p-values $p_1, \ldots, p_m$ associated with $H_1, \ldots, H_m$, and $H_i$, $i \in I_0$, are the true null hypotheses, so
$$\mathbb{P}(p_i \leq \alpha) \leq \alpha$$
for all $\alpha \in [0, 1]$, $i \in I_0$. Traditional approaches to multiple testing have sought to control the familywise error rate (FWER), defined by
$$\mathrm{FWER} = \mathbb{P}(N_{01} \geq 1),$$
at a prescribed level $\alpha$; i.e. find procedures for which $\mathrm{FWER} \leq \alpha$. The simplest such procedure is the Bonferroni correction, which rejects $H_i$ if $p_i \leq \alpha/m$.
Theorem 25. Using the Bonferroni correction,
$$\mathbb{P}(N_{01} \geq 1) \leq \mathbb{E}(N_{01}) \leq \frac{\alpha m_0}{m}.$$

Proof. The first inequality comes from Markov's inequality. Next,
$$\mathbb{E}(N_{01}) = \mathbb{E}\left(\sum_{i \in I_0} \mathbb{1}_{\{p_i \leq \alpha/m\}}\right) = \sum_{i \in I_0} \mathbb{P}(p_i \leq \alpha/m) \leq \frac{\alpha m_0}{m}.$$
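For concreteness, a small sketch of the Bonferroni correction (the p-values below are made up for illustration):

```python
import numpy as np

def bonferroni(p, alpha=0.05):
    """Bonferroni correction: reject H_i whenever p_i <= alpha / m.
    Returns a boolean array of rejections."""
    p = np.asarray(p)
    return p <= alpha / len(p)

p_values = np.array([0.001, 0.008, 0.039, 0.041, 0.60])
print(bonferroni(p_values, alpha=0.05))   # [ True  True False False False]
```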

A more sophisticated approach is the closed testing procedure.

4.1 The closed testing procedure

Given our family of hypotheses $\{H_i\}_{i=1}^m$, define the closure of this family to be
$$\{H_I : I \subseteq \{1, \ldots, m\},\; I \neq \emptyset\},$$
where $H_I = \bigcap_{i \in I} H_i$ is known as an intersection hypothesis ($H_I$ is the hypothesis that all $H_i$, $i \in I$, are true).
Suppose that for each $I$, we have an $\alpha$-level test $\phi_I$ taking values in $\{0, 1\}$ for testing $H_I$ (we reject if $\phi_I = 1$), so under $H_I$,
$$\mathbb{P}_{H_I}(\phi_I = 1) \leq \alpha.$$
The $\phi_I$ are known as local tests.
The closed testing procedure (Marcus, Peritz and Gabriel, 1976) is defined as follows: reject $H_I$ if and only if, for all $J \supseteq I$, $H_J$ is rejected by the local test $\phi_J$.
Typically we only make use of the individual hypotheses that are rejected by the procedure, i.e. those rejected $H_I$ where $I$ is a singleton.
We consider the case of 4 hypotheses as an example. Suppose the underlined hypotheses are rejected by the local tests.
$$H_{1234}$$
$$H_{123} \quad H_{124} \quad H_{134} \quad H_{234}$$
$$H_{12} \quad H_{13} \quad H_{14} \quad H_{23} \quad H_{24} \quad H_{34}$$
$$H_{1} \quad H_{2} \quad H_{3} \quad H_{4}$$
Here $H_1$ is rejected by the closed testing procedure. $H_2$ is not rejected by the closed testing procedure, as $H_{24}$ is not rejected by the local test. $H_{23}$ is rejected by the closed testing procedure.
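A brute-force sketch of the closed testing procedure is given below. It takes the local tests as a user-supplied function, defaulting for concreteness to the Bonferroni local test discussed below (Holm's procedure), and is only feasible for small $m$, since the closure has $2^m - 1$ members.

```python
from itertools import combinations

def closed_testing(p, alpha=0.05, local_test=None):
    """Sketch of the closed testing procedure. `local_test(I)` should return
    True when the alpha-level local test rejects the intersection hypothesis
    H_I; the default is the Bonferroni local test, which rejects H_I when
    min_{i in I} p_i <= alpha / |I|. Returns the indices i of the individual
    hypotheses rejected by the procedure, i.e. those i for which every H_J
    with J containing i is rejected locally."""
    m = len(p)
    if local_test is None:
        local_test = lambda I: min(p[i] for i in I) <= alpha / len(I)
    rejected = []
    for i in range(m):
        others = [j for j in range(m) if j != i]
        if all(local_test((i,) + K)
               for r in range(m) for K in combinations(others, r)):
            rejected.append(i)
    return rejected

print(closed_testing([0.001, 0.01, 0.04, 0.30]))   # [0, 1]
```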

Theorem 26. The closed testing procedure makes no false rejections with probability at least $1 - \alpha$. In particular it controls the FWER at level $\alpha$.
Proof. Assume $I_0$ is not empty (as otherwise no rejection can be false anyway). Define the events
$$A = \{\text{at least one false rejection}\} \supseteq \{N_{01} \geq 1\},$$
$$B = \{\text{reject } H_{I_0} \text{ with the local test}\} = \{\phi_{I_0} = 1\}.$$
In order for there to be a false rejection, we must have rejected $H_{I_0}$ with the local test. Thus $B \supseteq A$, so
$$\mathrm{FWER} \leq \mathbb{P}(A) = \mathbb{P}(A \cap B) = \mathbb{P}(B)\,\mathbb{P}(A \,|\, B) \leq \mathbb{P}(\phi_{I_0} = 1) \leq \alpha.$$
Different choices for the local tests give rise to different testing procedures. Holm's procedure takes $\phi_I$ to be the Bonferroni test, i.e.
$$\phi_I = \begin{cases} 1 & \text{if } \min_{i \in I} p_i \leq \frac{\alpha}{|I|}, \\ 0 & \text{otherwise.} \end{cases}$$
It can be shown (see example sheet) that Holm's procedure amounts to ordering the p-values $p_1, \ldots, p_m$ as $p_{(1)} \leq \cdots \leq p_{(m)}$, with corresponding hypotheses $H_{(1)}, \ldots, H_{(m)}$, so $(i)$ is the index of the $i$th smallest p-value, and then performing the following.
Step 1. If $p_{(1)} \leq \alpha/m$, reject $H_{(1)}$ and go to step 2. Otherwise accept $H_{(1)}, \ldots, H_{(m)}$ and stop.
Step $i$. If $p_{(i)} \leq \alpha/(m - i + 1)$, reject $H_{(i)}$ and go to step $i + 1$. Otherwise accept $H_{(i)}, \ldots, H_{(m)}$.
Step $m$. If $p_{(m)} \leq \alpha$, reject $H_{(m)}$. Otherwise accept $H_{(m)}$.
The p-values are visited in ascending order and the corresponding hypotheses rejected until the first time a p-value exceeds its critical value. This sort of approach is known (slightly confusingly) as a step-down procedure.
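A short sketch of Holm's step-down procedure (p-values made up for illustration):

```python
import numpy as np

def holm(p, alpha=0.05):
    """Holm's step-down procedure: visit the p-values in ascending order and
    reject while p_(i) <= alpha / (m - i + 1); stop at the first failure.
    Returns a boolean array of rejections (in the original order)."""
    p = np.asarray(p)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(np.argsort(p)):   # step = i - 1
        if p[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break                                # accept this and all remaining
    return reject

p_values = [0.001, 0.01, 0.04, 0.30]
print(holm(p_values, alpha=0.05))   # [ True  True False False]
```

On these p-values the procedure stops at the third step, matching the output of the brute-force closed testing sketch above with Bonferroni local tests.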

4.2 The False Discovery Rate

A different approach to multiple testing does not try to control the FWER, but instead attempts to control the false discovery rate (FDR), defined by
$$\mathrm{FDR} = \mathbb{E}(\mathrm{FDP}), \qquad \mathrm{FDP} = \frac{N_{01}}{\max(R, 1)},$$
where FDP is the false discovery proportion. Note that the maximum in the denominator ensures division by zero does not occur. The FDR was introduced by Benjamini & Hochberg (1995), and it is now widely used across science, particularly in biostatistics.
The Benjamini–Hochberg procedure attempts to control the FDR at level $\alpha$ and works as follows. Let
$$\hat{k} = \max\Big\{i : p_{(i)} \leq \frac{\alpha i}{m}\Big\}.$$
Reject $H_{(1)}, \ldots, H_{(\hat{k})}$ (or perform no rejections if $\hat{k}$ is not defined).
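A sketch of the procedure (again with made-up p-values):

```python
import numpy as np

def benjamini_hochberg(p, alpha=0.05):
    """Benjamini-Hochberg procedure: let k be the largest i such that
    p_(i) <= alpha * i / m, and reject the hypotheses with the k smallest
    p-values (none if no such i exists)."""
    p = np.asarray(p)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1     # the k-hat of the notes
        reject[order[:k]] = True
    return reject

p_values = [0.001, 0.01, 0.035, 0.30]
print(benjamini_hochberg(p_values, alpha=0.05))   # [ True  True  True False]
```

On these p-values the Benjamini–Hochberg procedure rejects three hypotheses, whereas Holm's procedure above would reject only the first two, illustrating that controlling the FDR is less stringent than controlling the FWER.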
Theorem 27. Suppose that the $p_i$, $i \in I_0$, are independent, and independent of $\{p_i : i \notin I_0\}$. Then the Benjamini–Hochberg procedure controls the FDR at level $\alpha$; in fact $\mathrm{FDR} \leq \alpha m_0 / m$.
Proof. For each $i \in I_0$, let $R^i$ denote the number of rejections we get by applying a modified Benjamini–Hochberg procedure to
$$p^{\setminus i} := \{p_1, p_2, \ldots, p_{i-1}, p_{i+1}, \ldots, p_m\}$$
with cutoff
$$\hat{k}^i = \max\Big\{j : p^{\setminus i}_{(j)} \leq \frac{\alpha(j+1)}{m}\Big\},$$
where $p^{\setminus i}_{(j)}$ is the $j$th smallest p-value in the set $p^{\setminus i}$.

For $r = 1, \ldots, m$ and $i \in I_0$, note that
$$\Big\{p_i \leq \frac{\alpha r}{m},\; R = r\Big\} = \Big\{p_i \leq \frac{\alpha r}{m},\; p_{(r)} \leq \frac{\alpha r}{m},\; p_{(s)} > \frac{\alpha s}{m} \text{ for all } s > r\Big\}$$
$$= \Big\{p_i \leq \frac{\alpha r}{m},\; p^{\setminus i}_{(r-1)} \leq \frac{\alpha r}{m},\; p^{\setminus i}_{(s-1)} > \frac{\alpha s}{m} \text{ for all } s > r\Big\}$$
$$= \Big\{p_i \leq \frac{\alpha r}{m},\; R^i = r - 1\Big\}.$$

Thus
$$\mathrm{FDR} = \mathbb{E}\Big(\frac{N_{01}}{\max(R, 1)}\Big) = \sum_{r=1}^{m} \mathbb{E}\Big(\frac{N_{01}}{r}\mathbb{1}_{\{R=r\}}\Big) = \sum_{r=1}^{m} \frac{1}{r}\,\mathbb{E}\Big(\sum_{i \in I_0} \mathbb{1}_{\{p_i \leq \alpha r/m\}}\mathbb{1}_{\{R=r\}}\Big)$$
$$= \sum_{r=1}^{m} \frac{1}{r} \sum_{i \in I_0} \mathbb{P}(p_i \leq \alpha r/m,\; R = r) = \sum_{r=1}^{m} \frac{1}{r} \sum_{i \in I_0} \mathbb{P}(p_i \leq \alpha r/m)\,\mathbb{P}(R^i = r - 1)$$
$$\leq \frac{\alpha}{m} \sum_{i \in I_0} \sum_{r=1}^{m} \mathbb{P}(R^i = r - 1) = \frac{\alpha m_0}{m}.$$

