
Nonlinear Regression Models
Chapter 14
Intrinsically linear and intrinsically nonlinear regression models

• Some models may look nonlinear in the parameters but are intrinsically linear, because with a suitable transformation they can be made linear-in-the-parameters regression models.
• If such models cannot be linearized in the parameters, they are intrinsically nonlinear regression models (NLRM).



Linear regression models: examples

$Y_i = \beta_1 + \beta_2 \left( \frac{1}{X_i} \right) + u_i$

$\ln Y_i = \beta_1 + \beta_2 \left( \frac{1}{X_i} \right) + u_i$

$Y_i = \beta_1 + \beta_2 \ln X_i + u_i$

$\ln Y_i = \beta_1 + \beta_2 X_i + u_i$

$\ln Y_i = \alpha + \beta_2 \ln X_i + u_i$

$Y_i = e^{\beta_1 + \beta_2 X_i + u_i}$

$Y_i = \beta_1 X_{2i}^{\beta_2} X_{3i}^{\beta_3} e^{u_i}$ — Cobb-Douglas production function

$Y_i = \dfrac{1}{1 + e^{\beta_1 + \beta_2 X_i + u_i}}$ — logistic probability distribution function
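
As a quick illustration of intrinsic linearity (a minimal sketch with simulated data; the parameter values and variable names are invented for the example, not taken from the chapter), the Cobb-Douglas model can be estimated by OLS after a log transformation:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    X2 = rng.uniform(1.0, 10.0, n)            # simulated inputs
    X3 = rng.uniform(1.0, 10.0, n)
    u = rng.normal(0.0, 0.05, n)
    Y = 2.0 * X2**0.6 * X3**0.3 * np.exp(u)   # true beta1 = 2, beta2 = 0.6, beta3 = 0.3

    # Taking logs gives a model that is linear in the parameters:
    #   ln Y = ln(beta1) + beta2 ln(X2) + beta3 ln(X3) + u
    Z = np.column_stack([np.ones(n), np.log(X2), np.log(X3)])
    coef, *_ = np.linalg.lstsq(Z, np.log(Y), rcond=None)
    print(np.exp(coef[0]), coef[1], coef[2])  # estimates of beta1, beta2, beta3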



Intrinsically nonlinear regression models: examples

$Y_i = \beta_1 + (0.75 - \beta_1) e^{-\beta_2 (X_i - 2)} + u_i$

$Y_i = \beta_1 + \beta_2^3 X_i + u_i$

$Y_i = \beta_1 X_{2i}^{\beta_2} X_{3i}^{\beta_3} + u_i$

$Y_i = A \left[ \delta K_i^{-\beta} + (1 - \delta) L_i^{-\beta} \right]^{-1/\beta}$ — CES production function, where
A = scale parameter
δ = distribution parameter (0 < δ < 1)
β = substitution parameter (β ≥ −1)
Estimation of linear and nonlinear regression models

• Consider the following two models:

Linear: $Y_i = \beta_1 + \beta_2 X_i + u_i$
Nonlinear: $Y_i = \beta_1 e^{\beta_2 X_i} + u_i$

• Estimating the parameters of either model by least squares ⇒ minimize the residual sum of squares. For the nonlinear model:

$u_i = Y_i - \beta_1 e^{\beta_2 X_i}$

$\sum u_i^2 = \sum \left( Y_i - \beta_1 e^{\beta_2 X_i} \right)^2$



Estimation of linear and nonlinear regression models

Differentiating the residual sum of squares with respect to each parameter and setting the derivatives to zero gives:

$\dfrac{\partial \sum u_i^2}{\partial \beta_1} = 2 \sum \left( Y_i - \beta_1 e^{\beta_2 X_i} \right) \left( -e^{\beta_2 X_i} \right) = 0$

$\dfrac{\partial \sum u_i^2}{\partial \beta_2} = 2 \sum \left( Y_i - \beta_1 e^{\beta_2 X_i} \right) \left( -\beta_1 X_i e^{\beta_2 X_i} \right) = 0$

The normal equations are:

$\sum Y_i e^{\beta_2 X_i} = \beta_1 \sum e^{2 \beta_2 X_i}$

$\sum Y_i X_i e^{\beta_2 X_i} = \beta_1 \sum X_i e^{2 \beta_2 X_i}$

We cannot obtain explicit solutions for the unknowns in terms of the known quantities ⇒ nonlinear least squares (NLLS).
Table 14.1 Advisory Fees Charged and Asset Size

obs   ASSET (X, billions of dollars)   FEE (Y, %)
 1     0.5                              0.5200
 2     5                                0.5080
 3    10                                0.4840
 4    15                                0.4600
 5    20                                0.4398
 6    25                                0.4238
 7    30                                0.4115
 8    35                                0.4020
 9    40                                0.3944
10    45                                0.3880
11    55                                0.3825
12    60                                0.3738

• fee = f(asset)
• The higher the asset value of the fund, the lower the advisory fee.


Relationship of advisory fees to fund assets

[Figure: scatter plot of FEE (%) against ASSET (billions of dollars); fees decline from about 0.52% to 0.37% as assets grow from 0.5 to 60 billion dollars.]

How well does the exponential regression model fit the data?



Estimating nonlinear regression models: the trial-and-error method

• We can proceed by trial and error (a sketch in code follows below):

$\beta_1 = 0.45,\ \beta_2 = 0.01: \quad u_i = Y_i - 0.45\, e^{0.01 X_i}, \quad \sum u_i^2 = 0.3044$

$\beta_1 = 0.50,\ \beta_2 = -0.01: \quad u_i = Y_i - 0.50\, e^{-0.01 X_i}, \quad \sum u_i^2 = 0.0073$

• But how do we know that we have reached the lowest possible error sum of squares?
• The trial-and-error process may eventually produce values of β1 and β2 that yield the lowest possible error sum of squares, but we cannot be sure of that.
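
A minimal Python sketch of the trial-and-error (direct search) idea, using the data from Table 14.1; the grid bounds are arbitrary choices for illustration:

    import numpy as np

    # Data from Table 14.1
    X = np.array([0.5, 5, 10, 15, 20, 25, 30, 35, 40, 45, 55, 60])
    Y = np.array([0.5200, 0.5080, 0.4840, 0.4600, 0.4398, 0.4238,
                  0.4115, 0.4020, 0.3944, 0.3880, 0.3825, 0.3738])

    def ssr(b1, b2):
        # Residual sum of squares for the model Y = b1 * exp(b2 * X) + u
        u = Y - b1 * np.exp(b2 * X)
        return float(u @ u)

    # Try candidate pairs by hand ...
    print(ssr(0.45, 0.01), ssr(0.50, -0.01))

    # ... or scan a coarse grid and keep the best pair
    pairs = [(b1, b2) for b1 in np.linspace(0.3, 0.7, 41)
                      for b2 in np.linspace(-0.02, 0.02, 41)]
    best = min(pairs, key=lambda p: ssr(*p))
    print(best, ssr(*best))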



Approaches to estimating nonlinear regression models

There are several approaches to estimating NLRMs:

1. Direct search, trial-and-error, or derivative-free method

This method is generally not used:
• If an NLRM involves several parameters, the method becomes very cumbersome and computationally expensive.
• There is no guarantee that the final set of parameter values you have selected gives the absolute minimum error sum of squares ⇒ the method does not guarantee a global minimum.



Approaches to estimating nonlinear regression models

2. Direct optimization

We differentiate the error sum of squares with respect to each unknown coefficient, or parameter, set the resulting equations to zero, and solve the resulting normal equations simultaneously, e.g., by the method of steepest descent (a sketch follows below).

Disadvantage: it may converge to the final values of the parameters extremely slowly.
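
A minimal steepest-descent sketch for the exponential fee model and the Table 14.1 data (the step size and iteration count are hand-picked for illustration; the very slow progress is exactly the disadvantage noted above):

    import numpy as np

    X = np.array([0.5, 5, 10, 15, 20, 25, 30, 35, 40, 45, 55, 60])
    Y = np.array([0.5200, 0.5080, 0.4840, 0.4600, 0.4398, 0.4238,
                  0.4115, 0.4020, 0.3944, 0.3880, 0.3825, 0.3738])

    b1, b2 = 0.45, 0.01       # initial values
    lr = 5e-5                 # step size, chosen by hand
    for _ in range(200_000):  # many iterations: steepest descent converges slowly
        e = np.exp(b2 * X)
        u = Y - b1 * e        # residuals
        g1 = -2.0 * np.sum(u * e)            # d(SSR)/d(b1)
        g2 = -2.0 * np.sum(u * b1 * X * e)   # d(SSR)/d(b2)
        b1, b2 = b1 - lr * g1, b2 - lr * g2
    print(b1, b2)             # should creep toward roughly (0.509, -0.0060)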



Approaches to estimating nonlinear regression models

3. Iterative linearization

• We linearize a nonlinear equation around some initial values of the parameters.
• The linearized equation is estimated by OLS and the initially chosen values are adjusted.
• The adjusted values are used to relinearize the model, and we estimate it again by OLS and readjust the estimated values.
• This process is continued until there is no substantial change in the estimated values over the last couple of iterations.



Approaches to estimating nonlinear regression models

• The main technique used in linearizing a nonlinear equation is the Taylor series expansion.
• Estimating an NLRM using the Taylor series expansion is systematized in two algorithms (a Gauss-Newton sketch follows below):
1) The Gauss-Newton iterative method
2) The Newton-Raphson iterative method
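
A minimal Gauss-Newton sketch for the exponential fee model: linearize f(X, β) = β1 e^{β2 X} by a first-order Taylor expansion around the current estimates, take an OLS step, and repeat (the iteration count is an arbitrary safe choice):

    import numpy as np

    X = np.array([0.5, 5, 10, 15, 20, 25, 30, 35, 40, 45, 55, 60])
    Y = np.array([0.5200, 0.5080, 0.4840, 0.4600, 0.4398, 0.4238,
                  0.4115, 0.4020, 0.3944, 0.3880, 0.3825, 0.3738])

    b = np.array([0.45, 0.01])   # initial values
    for _ in range(25):
        e = np.exp(b[1] * X)
        f = b[0] * e             # fitted values f(X, b) = b1 * exp(b2 * X)
        # Jacobian of f w.r.t. (b1, b2): the Taylor-series linearization
        J = np.column_stack([e, b[0] * X * e])
        # OLS step on the linearized model: J @ db ~= Y - f
        db, *_ = np.linalg.lstsq(J, Y - f, rcond=None)
        b = b + db
    print(b)                     # approaches about [0.5089, -0.005965]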



EViews print-out
Dependent Variable: FEE
Method: Least Squares
Sample: 1 12
Included observations: 12
Convergence achieved after 5 iterations
FEE=C(1)*2.718281828^(C(2)*ASSET)
Coefficient Std. Error t-Statistic Prob.
C(1) 0.508901 0.007459 68.22773 0.0000
C(2) -0.005965 0.000484 -12.31635 0.0000
R-squared 0.938535 Mean dependent var 0.432317
Adjusted R-squared 0.932388 S.D. dependent var 0.050129
S.E. of regression 0.013035 Akaike info criterion -5.691384
Sum squared resid 0.001699 Schwarz criterion -5.610566
Log likelihood 36.14830 F-statistic 152.6933
Durbin-Watson stat 0.349342 Prob(F-statistic) 0.000000
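
For comparison, the same regression can be run in Python (a sketch using SciPy's curve_fit, which implements a standard NLLS routine; it should reproduce estimates close to the EViews output above):

    import numpy as np
    from scipy.optimize import curve_fit

    X = np.array([0.5, 5, 10, 15, 20, 25, 30, 35, 40, 45, 55, 60])
    Y = np.array([0.5200, 0.5080, 0.4840, 0.4600, 0.4398, 0.4238,
                  0.4115, 0.4020, 0.3944, 0.3880, 0.3825, 0.3738])

    def model(x, b1, b2):
        # FEE = C(1) * exp(C(2) * ASSET), as in the EViews specification
        return b1 * np.exp(b2 * x)

    params, cov = curve_fit(model, X, Y, p0=[0.45, 0.01])
    print(params)                 # expect roughly [0.5089, -0.005965]
    print(np.sqrt(np.diag(cov)))  # asymptotic standard errors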



EViews print-out: setting the initial values
Dependent Variable: FEE
Method: Least Squares
Sample: 1 12
Included observations: 12
Convergence achieved after 8 iterations
FEE=(C(1)-0.45)*2.718281828^((C(2)-0.01)*ASSET)
Coefficient Std. Error t-Statistic Prob.
C(1) 0.958900 0.007458 128.5656 0.0000
C(2) 0.004035 0.000484 8.332437 0.0000
R-squared 0.938535 Mean dependent var 0.432317
Adjusted R-squared 0.932388 S.D. dependent var 0.050129
S.E. of regression 0.013035 Akaike info criterion -5.691384
Sum squared resid 0.001699 Schwarz criterion -5.610566
Log likelihood 36.14830 F-statistic 152.6933
Durbin-Watson stat 0.349340 Prob(F-statistic) 0.000000



From these results, we can write the estimated model as:

$\widehat{Fee}_i = 0.5089\, e^{-0.005965\, Asset_i}$

Choosing the initial values β1 = 0.45 and β2 = 0.01 (the second print-out) gives C(1) = 0.9589 and C(2) = 0.004035, which imply the same fitted model, since 0.9589 − 0.45 = 0.5089 and 0.004035 − 0.01 = −0.005965.

• The NLLS estimators are not normally distributed, are not unbiased, and do not have minimum variance in finite, or small, samples, so we cannot use the t test or the F test.
• The residuals do not necessarily sum to zero, ESS and RSS do not necessarily add up to TSS, and R² may not be a meaningful descriptive statistic for the model.



• Consequently, inferences about the regression parameters in nonlinear regression are based on large-sample theory.
• This theory states that the least squares and maximum likelihood estimators for nonlinear regression models with normal error terms are, when the sample size is large, approximately normally distributed, almost unbiased, and of almost minimum variance.
• This theory also applies when the error terms are not normally distributed.
• All inference procedures in NLRM are therefore large-sample, or asymptotic.



• Returning to the example:
• The t statistics are meaningful only if interpreted in the large-sample context. The estimated coefficients are individually statistically significant.
• The rate of change of the fee with respect to assets is:

$\dfrac{d(Fee)}{d(Asset)} = \beta_1 \beta_2\, e^{\beta_2 Asset_i} = (0.5089)(-0.005965)\, e^{-0.005965\, Asset_i}$

If Asset = 20 (billions of dollars), the expected rate of change of the fee is about −0.0027 percentage points per additional billion dollars of assets (a quick check follows below).
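
A quick numerical check of this derivative in Python, using the full-precision EViews estimates:

    import math

    b1, b2 = 0.508901, -0.005965            # NLLS estimates from the EViews print-out
    asset = 20.0                            # billions of dollars
    print(b1 * b2 * math.exp(b2 * asset))   # about -0.0027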

