Use ‘completing the square’ to show that the posterior of w is given by
p(w | y, x, α, β) = N(w | m, S), where

S = (αI + β ∑_{i=1}^{n} x_i x_i^T)^{−1},
m = βS ∑_{i=1}^{n} y_i x_i.
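The closed-form posterior above can be checked numerically. The sketch below builds S and m directly from the two formulas; the precisions α and β, the data dimensions, and the synthetic data are all illustrative assumptions, not values from the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1e-2, 25.0        # assumed prior precision and noise precision
n, d = 200, 3
X = rng.normal(size=(n, d))     # rows are the covariates x_i
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true + rng.normal(scale=beta ** -0.5, size=n)

# S = (alpha*I + beta * sum_i x_i x_i^T)^{-1};  X.T @ X equals sum_i x_i x_i^T
S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
# m = beta * S * sum_i y_i x_i;  X.T @ y equals sum_i y_i x_i
m = beta * S @ (X.T @ y)
```

With enough data the posterior mean m should land close to w_true, which makes a convenient sanity check when deriving the result by completing the square.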
y_i | θ ∼ Poisson(exp(θ^T x_i))    (4.1)
θ ∼ N(0, α^{−1} I)                 (4.2)
where y_i are the observed counts, x_i the corresponding covariates, i = 1, . . . , N, and
θ the regression weights. In this exercise, we approximate the posterior p(θ|y) using
the Laplace approximation.
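As a quick sanity check on the model specification, (4.1)–(4.2) can be simulated forward directly. In the sketch below the prior precision is set to α = 1 (tighter than the exercise's value, purely to keep the simulated Poisson rates moderate); the sample size and dimension are likewise assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 2
alpha = 1.0                                       # assumed prior precision for the sketch
theta = rng.normal(scale=alpha ** -0.5, size=d)   # θ ~ N(0, α⁻¹ I), eq. (4.2)
X = rng.normal(size=(N, d))                       # covariates x_i as rows
y = rng.poisson(np.exp(X @ theta))                # y_i | θ ~ Poisson(exp(θ^T x_i)), eq. (4.1)
```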
(a) Derive the gradient ∇ log p(θ|y) and the Hessian H = ∇∇ log p(θ|y) needed for
the Laplace approximation.
(b) Write the Laplace approximation as the density of a Gaussian distribution. What
are the mean and the covariance matrix of this distribution?
(c) Compare the Laplace approximation to the true posterior (computed using nu-
merical integration) in a case where we have one-dimensional covariates only.
Use the data given in the file ex4_4_data.txt and the hyperparameter value α = 10⁻².
Plot the two posteriors and the true value θ = π/4 used to generate the data.
Also plot the data with the regression curve ŷ_i = exp(θ̂ x_i), where θ̂ is the MAP
estimate. A Python template ex4_4_template.py is attached to help with this.
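One possible route through parts (a)–(c) in the one-dimensional case is sketched below. Since the data file is not reproduced here, synthetic data generated with θ = π/4 stands in for it; the derivative expressions follow from the Poisson log-likelihood plus the Gaussian log-prior, and the Newton loop, sample size, and covariate range are all illustrative assumptions rather than the template's actual contents:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1e-2
theta_true = np.pi / 4
n = 100
x = rng.uniform(-2.0, 2.0, size=n)        # 1-D covariates (synthetic stand-in)
y = rng.poisson(np.exp(theta_true * x))   # counts generated with θ = π/4

def grad(t):
    # d/dθ log p(θ|y) = Σ (y_i − exp(θ x_i)) x_i − α θ
    return np.sum((y - np.exp(t * x)) * x) - alpha * t

def hess(t):
    # d²/dθ² log p(θ|y) = −Σ exp(θ x_i) x_i² − α
    return -np.sum(np.exp(t * x) * x ** 2) - alpha

# Newton's method for the MAP estimate (the log-posterior is concave,
# since the Hessian is negative for every θ)
t = 0.0
for _ in range(50):
    t -= grad(t) / hess(t)

theta_map = t
laplace_var = -1.0 / hess(theta_map)   # Laplace: p(θ|y) ≈ N(θ_map, −H(θ_map)⁻¹)
```

Plotting N(θ_map, laplace_var) against a numerically normalised exp(log-posterior) grid then gives the comparison asked for in part (c).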