
Problem Set-2014

Non-Linear Regression


The Weibull distribution with shape parameter α and scale parameter λ (WE(α, λ)) and the
gamma distribution with shape parameter α and scale parameter λ (GA(α, λ)) have probability
density functions

f_WE(x; α, λ) = α λ x^(α−1) e^(−λ x^α)   and   f_GA(x; α, λ) = (λ^α / Γ(α)) x^(α−1) e^(−λ x),

respectively, for x > 0, α > 0, λ > 0.
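As a quick numerical check of these parameterizations (both densities are written in terms of the rate λ, whereas scipy.stats uses scale parameters), a short sketch in Python, assuming NumPy and SciPy are available:

import numpy as np
from scipy import stats
from scipy.special import gamma as gamma_fn

def f_we(x, alpha, lam):
    # Weibull density as defined above: alpha*lam*x^(alpha-1)*exp(-lam*x^alpha).
    return alpha * lam * x ** (alpha - 1) * np.exp(-lam * x ** alpha)

def f_ga(x, alpha, lam):
    # Gamma density as defined above: lam^alpha/Gamma(alpha) * x^(alpha-1)*exp(-lam*x).
    return lam ** alpha / gamma_fn(alpha) * x ** (alpha - 1) * np.exp(-lam * x)

alpha, lam = 2.0, 1.5
x = np.linspace(0.1, 5.0, 50)

# scipy's equivalents use scale = lam**(-1/alpha) (Weibull) and scale = 1/lam (gamma).
assert np.allclose(f_we(x, alpha, lam),
                   stats.weibull_min.pdf(x, c=alpha, scale=lam ** (-1.0 / alpha)))
assert np.allclose(f_ga(x, alpha, lam),
                   stats.gamma.pdf(x, a=alpha, scale=1.0 / lam))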
1. Consider the following non-linear regression model:
y_t = f_t(θ_1^0, θ_2^0) + ε_t,   t = 1, 2, . . . , 2n,
where the ε_t's are independent and identically distributed random variables with mean zero
and variance 1, and
f_t(θ_1, θ_2) = θ_2 e^(θ_1 t)   if t is odd,
f_t(θ_1, θ_2) = θ_1 + θ_2 t    if t is even.
(a) Provide the Newton-Raphson procedure with step factor modification to compute
the maximum likelihood estimators of the unknown parameters.
(b) Find the expected Fisher information matrix.
(c) Can you suggest some reasonable initial guesses for the above Newton-Raphson
method, and give justification?
(d) If θ_1^0 = 2, find the MLE of θ_2^0 and find its distribution.
(e) Suppose both the parameters are unknown, but it is known that θ_1^0 ∈ {1, 2} and
θ_2^0 ∈ {1, 2}. Find the MLEs of θ_1^0 and θ_2^0, based on the y_t values 4, 3, 2, 2, 3, 4 at
t = 1, . . . , 6.
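An illustrative sketch of the iteration asked for in 1(a), in Python (NumPy assumed). It treats the errors as Gaussian, so that the MLE of (θ_1, θ_2) minimizes the residual sum of squares, and uses the Gauss-Newton approximation to the Newton step together with step halving; the starting values and synthetic data below are only for illustration.

import numpy as np

def f(theta, t):
    # Regression function of Problem 1: theta2*exp(theta1*t) for odd t, theta1 + theta2*t for even t.
    th1, th2 = theta
    odd = (t % 2 == 1)
    return np.where(odd, th2 * np.exp(th1 * t), th1 + th2 * t)

def jacobian(theta, t):
    # Partial derivatives of f_t with respect to (theta1, theta2).
    th1, th2 = theta
    odd = (t % 2 == 1)
    d1 = np.where(odd, th2 * t * np.exp(th1 * t), 1.0)
    d2 = np.where(odd, np.exp(th1 * t), t.astype(float))
    return np.column_stack([d1, d2])

def gauss_newton_step_halving(y, t, theta0, max_iter=50, tol=1e-8):
    # Minimise Q(theta) = sum_t (y_t - f_t(theta))^2 with a step-factor modification.
    theta = np.asarray(theta0, dtype=float)
    Q = lambda th: np.sum((y - f(th, t)) ** 2)
    for _ in range(max_iter):
        r = y - f(theta, t)                          # residuals
        J = jacobian(theta, t)
        delta = np.linalg.solve(J.T @ J, J.T @ r)    # full Gauss-Newton direction
        lam = 1.0
        while Q(theta + lam * delta) > Q(theta) and lam > 1e-10:
            lam /= 2.0                               # halve the step until Q decreases
        theta_new = theta + lam * delta
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta

# Illustrative usage with synthetic data (true theta = (0.2, 1.5), unit-variance noise).
rng = np.random.default_rng(0)
t = np.arange(1, 21)
y = f(np.array([0.2, 1.5]), t) + rng.normal(size=t.size)
print(gauss_newton_step_halving(y, t, theta0=[0.1, 1.0]))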
2. Consider the following model:
Y = X(θ)β + e.
The orders of Y, X(θ), θ, β, and e are n × 1, n × p, q × 1, p × 1 and n × 1, respectively.
(a) If Rank(X(θ)) = p for all θ, then show that X(θ)(X(θ)^T X(θ))^(−1) X(θ)^T is a
projection operator on the column space spanned by X(θ).
(b) Using the projection method (without using anything else), find the least squares
estimator of β if θ is fixed.
(c) Show that if both β and θ are unknown, then the least squares estimators of β and θ
can be obtained by solving a q-dimensional optimization problem.
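A minimal sketch of the profiling idea behind 2(b) and 2(c): for a fixed θ the least squares estimator of β is obtained from the projection onto the columns of X(θ), and substituting it back leaves a criterion to be minimized over θ alone (a q-dimensional problem). The particular X(θ) below is hypothetical and only for illustration (Python with NumPy/SciPy assumed).

import numpy as np
from scipy.optimize import minimize

def profile_beta(theta, y, X):
    # Least squares estimator of beta for fixed theta: beta_hat(theta) = (X'X)^{-1} X'y.
    Xt = X(theta)
    return np.linalg.solve(Xt.T @ Xt, Xt.T @ y)

def concentrated_rss(theta, y, X):
    # ||y - P(theta) y||^2, where P(theta) projects onto the column space of X(theta).
    Xt = X(theta)
    P = Xt @ np.linalg.solve(Xt.T @ Xt, Xt.T)
    return float(y @ (np.eye(len(y)) - P) @ y)

# Hypothetical example: X(theta) is n x 2 with a scalar theta, so q = 1.
def X(theta):
    t = np.arange(1, 31, dtype=float)
    return np.column_stack([np.exp(-theta * t), np.ones_like(t)])

rng = np.random.default_rng(1)
t = np.arange(1, 31, dtype=float)
y = 3.0 * np.exp(-0.4 * t) + 1.0 + 0.1 * rng.normal(size=t.size)

res = minimize(concentrated_rss, x0=np.array([0.2]), args=(y, X))  # optimization over theta only
beta_hat = profile_beta(res.x, y, X)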
3. Consider the following model:
y_t = 1 / (1 + θ t) + ε_t.
You have a sample of size n, say 1 < t_1 < . . . < t_n < 10. Here the ε_t's are independent and
identically distributed normal random variables with mean zero and finite variance σ², and
θ ∈ (−∞, ∞).
(a) Show that the maximum likelihood estimator of θ can be obtained by solving a
one-dimensional optimization problem.
(b) Provide the Newton-Raphson algorithm to find the maximum likelihood estimator
of θ from an initial guess value, say θ^(0).
(c) Find an approximate 95% confidence interval of θ.
(d) Using (c) or otherwise, test the following hypothesis: H_0: θ = θ_0 vs. H_1: θ ≠ θ_0.
(e) For a given n (quite large), find t_i's with 1 < t_1 < . . . < t_n < 10 that produce
approximately the most efficient (minimum variance) maximum likelihood estimator
of θ.
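A sketch for 3(a) and 3(c), taking the model as written above: σ² is profiled out of the normal likelihood, so the MLE of θ comes from a one-dimensional minimization of the residual sum of squares, and an approximate (Wald-type) 95% interval uses the derivative of the mean function. The search interval for θ and the sample values are assumptions for illustration (Python with NumPy/SciPy assumed).

import numpy as np
from scipy.optimize import minimize_scalar

def rss(theta, t, y):
    # Residual sum of squares for y_t = 1/(1 + theta*t) + eps_t.
    return np.sum((y - 1.0 / (1.0 + theta * t)) ** 2)

def mle_theta(t, y, bounds=(0.0, 10.0)):
    # Profiling sigma^2 out of the likelihood, the MLE of theta minimises the RSS (1-D problem).
    # The search bounds are an assumption.
    return minimize_scalar(rss, bounds=bounds, args=(t, y), method='bounded').x

def wald_ci(theta_hat, t, y, z=1.96):
    # Approximate 95% CI: theta_hat +/- 1.96 * sqrt(sigma2_hat / sum of squared derivatives).
    sigma2_hat = rss(theta_hat, t, y) / len(y)
    d = -t / (1.0 + theta_hat * t) ** 2          # derivative of 1/(1 + theta*t) w.r.t. theta
    se = np.sqrt(sigma2_hat / np.sum(d ** 2))
    return theta_hat - z * se, theta_hat + z * se

# Illustrative usage with design points in (1, 10).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(1.0, 10.0, size=40))
y = 1.0 / (1.0 + 0.5 * t) + 0.05 * rng.normal(size=t.size)
theta_hat = mle_theta(t, y)
print(theta_hat, wald_ci(theta_hat, t, y))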
4. Consider the following regression model:
y_t = μ + δ t + A cos(θ t²) + B sin(θ t²) + ε_t.
You have a sample of size n, say y_1, . . . , y_n. Here the ε_t's are independent and identi-
cally distributed random variables with mean zero and finite variance σ². Also, −∞ <
μ, δ, A, B < ∞ and θ ∈ [0, π].
(a) Show that the least squares estimators of the unknown parameters can be obtained
by solving a one-dimensional optimization problem.
(b) If you have the solution of the one-dimensional problem of (a), find the least squares
estimators of the other parameters in terms of that solution.
(c) Suppose you know θ. For what values of θ can all the other parameters be estimated?
(d) Assuming θ is known, find a 95% confidence set of the other parameters.
(e) Assuming θ is known, test the following hypothesis: H_0: δ = 0 vs. H_1: δ ≠ 0.
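A sketch of the separable least squares idea behind 4(a) and 4(b): for fixed θ the model is linear in (μ, δ, A, B), so those parameters have a closed-form least squares solution, and the concentrated residual sum of squares is minimized over θ ∈ [0, π] alone. The criterion is highly oscillatory in θ, so the (assumed) grid below must be fine before any local refinement (Python with NumPy assumed).

import numpy as np

def design(theta, t):
    # Design matrix for fixed theta: columns 1, t, cos(theta*t^2), sin(theta*t^2).
    return np.column_stack([np.ones_like(t), t, np.cos(theta * t ** 2), np.sin(theta * t ** 2)])

def fit_linear(theta, t, y):
    # OLS for (mu, delta, A, B) with theta held fixed; also return the residual sum of squares.
    Z = design(theta, t)
    coef, res, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = res[0] if res.size else np.sum((y - Z @ coef) ** 2)
    return coef, rss

def separable_ls(t, y, grid_size=2000):
    # Minimise the concentrated RSS over theta on a grid in [0, pi], then read off the linear part.
    grid = np.linspace(0.0, np.pi, grid_size)
    rss_values = [fit_linear(th, t, y)[1] for th in grid]
    theta_hat = grid[int(np.argmin(rss_values))]
    coef_hat, _ = fit_linear(theta_hat, t, y)
    return theta_hat, coef_hat        # coef_hat = (mu_hat, delta_hat, A_hat, B_hat)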
5. Suppose we have the following observations:
y_i = A i + e_i;   i = 1, . . . , 50.
(a) Suppose the e_i's are i.i.d. normal random variables with mean zero and variance σ²;
find the maximum likelihood estimators of A and σ².
(b) Suppose the e_i's are i.i.d. Laplace random variables, with the probability density func-
tion
f(x) = (1 / √(2σ²)) e^(−√2 |x| / σ),   −∞ < x < ∞.
Using question no. 1 or otherwise, find the maximum likelihood estimators of A and
σ².
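A sketch contrasting 5(a) and 5(b): under normal errors the MLE of A is the ordinary least squares slope, while under the Laplace density above the MLE of A minimizes the sum of absolute residuals (a weighted-median type problem); the closed form used for the σ² estimate in the Laplace case follows from the density as written above. The numerical search range is an assumption (Python with NumPy/SciPy assumed).

import numpy as np
from scipy.optimize import minimize_scalar

i = np.arange(1, 51, dtype=float)

def mle_normal(y):
    # Normal errors: A_hat is the least squares slope, sigma2_hat the mean squared residual.
    A_hat = np.sum(i * y) / np.sum(i * i)
    sigma2_hat = np.mean((y - A_hat * i) ** 2)
    return A_hat, sigma2_hat

def mle_laplace(y):
    # Laplace errors: A_hat minimises sum_i |y_i - A*i| (the L1 criterion).
    obj = lambda A: np.sum(np.abs(y - A * i))
    A_hat = minimize_scalar(obj, bounds=(-1e3, 1e3), method='bounded').x   # assumed search range
    # For f(x) = (1/sqrt(2*sigma^2)) exp(-sqrt(2)|x|/sigma), maximising over sigma gives
    # sigma_hat = sqrt(2) * mean|residual|, hence sigma2_hat = 2 * (mean|residual|)^2.
    sigma2_hat = 2.0 * np.mean(np.abs(y - A_hat * i)) ** 2
    return A_hat, sigma2_hat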
6. Consider the following non-linear regression model for 0 < θ < 1:
y(t) = β t / (1 + θ t) + ε_t;   t = 1, . . . , N.
Here the ε_t's are normally distributed with mean 0 and variance tσ².
(a) Find the least squares estimator of β when θ is known.
(b) Show that when both β and θ are unknown, the least squares estimators of β and θ
can be obtained by solving a one-dimensional optimization problem.
(c) Suggest a suitable algorithm, not necessarily Gauss-Newton or Newton-Raphson,
to compute the least squares estimators of β and θ using the above restriction on θ.
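One possible algorithm for 6(c), taking the model as written above: for fixed θ the model is linear in β, so β is profiled out, and the restriction 0 < θ < 1 is handled by a bounded one-dimensional search (e.g. scipy's bounded Brent method) or a simple grid over (0, 1). Function names are illustrative (Python with NumPy/SciPy assumed).

import numpy as np
from scipy.optimize import minimize_scalar

def profile_beta(theta, t, y):
    # For fixed theta the model is linear in beta: one-column least squares.
    x = t / (1.0 + theta * t)
    return np.dot(x, y) / np.dot(x, x)

def concentrated_rss(theta, t, y):
    # Residual sum of squares after substituting beta_hat(theta); a function of theta alone.
    x = t / (1.0 + theta * t)
    beta = np.dot(x, y) / np.dot(x, x)
    return np.sum((y - beta * x) ** 2)

def fit(t, y):
    # Bounded one-dimensional search over theta, honouring the restriction 0 < theta < 1.
    res = minimize_scalar(concentrated_rss, bounds=(1e-6, 1.0 - 1e-6), method='bounded', args=(t, y))
    return res.x, profile_beta(res.x, t, y)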
7. Consider the following non-linear regression model:
y_t = cos(θ_0 t) + e_t;   t = 1, . . . , N.
Here the e_t's are i.i.d. normal random variables with mean zero and variance 1.
(a) Find the least squares estimator of θ_0 using the Newton-Raphson or Gauss-Newton
algorithm.
(b) Suppose θ̂ is the least squares estimator of θ_0. Assuming the fact that θ̂ converges
to θ_0, find the distribution of N^(3/2)(θ̂ − θ_0) for large N. If you are assuming anything,
please state it clearly.
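For 7(a), one practical caveat: the least squares criterion Q(θ) = Σ_t (y_t − cos(θ t))² is highly multimodal in θ, so the Newton-Raphson/Gauss-Newton iteration is usually started from a coarse grid search. A sketch (grid resolution and function names are assumptions; Python with NumPy assumed):

import numpy as np

def Q(theta, t, y):
    # Least squares criterion for y_t = cos(theta*t) + e_t.
    return np.sum((y - np.cos(theta * t)) ** 2)

def gauss_newton_refine(theta, t, y, iters=20):
    # Gauss-Newton iteration for the single parameter theta.
    for _ in range(iters):
        r = y - np.cos(theta * t)
        d = -t * np.sin(theta * t)               # derivative of cos(theta*t) w.r.t. theta
        theta = theta + np.dot(d, r) / np.dot(d, d)
    return theta

def ls_estimate(t, y):
    # Coarse grid over (0, pi) to get near the global minimiser, then local refinement.
    N = len(t)
    grid = np.arange(np.pi / (20.0 * N), np.pi, np.pi / (20.0 * N))   # assumed grid spacing
    theta0 = grid[int(np.argmin([Q(g, t, y) for g in grid]))]
    return gauss_newton_refine(theta0, t, y)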
8. Consider the following non-linear model:
y(t) = 2e^(−t) + e^(−2t);   t = 1, . . . , N.
Show that there exist g_1, g_2, g_3 such that
g_1 y(k) + g_2 y(k + 1) + g_3 y(k + 2) = 0,   for k = 1, . . . , N − 2,
and find those g's.
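Numerically, the existence of such g's can be checked by looking for a (right) null vector of the matrix whose rows are (y(k), y(k+1), y(k+2)), k = 1, . . . , N − 2; deriving the closed form is the exercise. A short sketch, taking the model as written above (Python with NumPy assumed):

import numpy as np

N = 10
t = np.arange(1, N + 1)
y = 2.0 * np.exp(-t) + np.exp(-2.0 * t)        # the noise-free sequence of the problem

# Rows (y(k), y(k+1), y(k+2)) for k = 1, ..., N-2; a vector g satisfying the recurrence
# g1*y(k) + g2*y(k+1) + g3*y(k+2) = 0 for all k is a right null vector of this matrix.
H = np.column_stack([y[:-2], y[1:-1], y[2:]])
_, s, Vt = np.linalg.svd(H)
g = Vt[-1]                                      # singular vector of the smallest singular value
print("smallest singular value:", s[-1])        # ~ 0, so a non-trivial g exists
print("g (up to scale):", g / g[-1])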
9. Consider the following non-linear regression model:
y(t) = f_t(θ_0) + e(t),   t = 1, . . . , N.
Here the e(t)'s are i.i.d. random variables with mean zero and finite variance, and θ_0 ∈ [0, 1].
Suppose Q(θ) = Σ_{t=1}^{N} (y(t) − f_t(θ))² and define the set S_c = {θ : θ ∈ [0, 1], |θ − θ_0| > c}. If for
all c > 0,
min_{θ ∈ S_c} (1/N) [Q(θ) − Q(θ_0)] > 0,
then prove that the least squares estimator of θ will converge to θ_0. (Hint: Try to
prove it by contradiction.)
10. Consider the following non-linear regression model:
y(x_t) = 2θ^t + ε_t;   t = 1, . . . , 10,
where E(ε_t) = 0 and Var(ε_t) = 1. We want to find the least squares estimator of
θ. If we start with the initial guess θ^(0) = 1, what will be the next iterate of the
Newton-Raphson/Gauss-Newton algorithm? (Explicitly mention which one you are
using.)
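A sketch of a single Gauss-Newton step for this model as written above (the data vector y is left unspecified because the problem does not give numerical values; Python with NumPy assumed):

import numpy as np

def gauss_newton_iterate(theta, y, t):
    # One Gauss-Newton step for y_t = 2*theta**t + eps_t.
    f = 2.0 * theta ** t                  # fitted values at the current theta
    d = 2.0 * t * theta ** (t - 1)        # derivative of the mean function w.r.t. theta
    return theta + np.dot(d, y - f) / np.dot(d, d)

t = np.arange(1, 11, dtype=float)
# At theta^(0) = 1 the step reduces to 1 + sum(2*t*(y_t - 2)) / sum(4*t^2).
# With observed data y, the next iterate would be: theta1 = gauss_newton_iterate(1.0, y, t)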