Weighted Least Squares


∗ The standard linear model assumes that Var(εi) = σ² for i = 1, . . . , n.

∗ As we have seen, however, there are instances where

Var(Y | X = xi) = Var(εi) = σ²/wi.

∗ Here w1, . . . , wn are known positive constants.

∗ Weighted least squares is an estimation technique which weights each observation in proportion to the reciprocal of its error variance, and so overcomes the issue of non-constant variance.

7-1

Weighted Least Squares in Simple Regression

∗ Suppose that we have the following model

yi = β0 + β1xi + εi,  i = 1, . . . , n,

where εi ∼ N(0, σ²/wi) for known constants w1, . . . , wn.

∗ The weighted least squares estimates of β0 and β1 minimize the quantity

Sw(β0, β1) = Σ wi(yi − β0 − β1xi)².

∗ Note that in this weighted sum of squares the weights are inversely proportional to the corresponding variances: points with lower variance are given higher weights and points with higher variance are given lower weights.

7-2

Weighted Least Squares in Simple Regression

∗ The weighted least squares estimates are then given as

β̂1 = Σ wi(xi − x̄w)(yi − ȳw) / Σ wi(xi − x̄w)²

β̂0 = ȳw − β̂1x̄w

where x̄w and ȳw are the weighted means

x̄w = Σ wixi / Σ wi,  ȳw = Σ wiyi / Σ wi.

∗ Some algebra shows that the weighted least squares estimates are still unbiased.

7-3
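The closed-form estimates above can be checked with a short NumPy sketch; the data and the choice of weights wi = 1/xi² here are made up purely for illustration:

```python
import numpy as np

# Made-up data; suppose Var(eps_i) = sigma^2 / w_i with w_i = 1 / x_i^2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.3, 7.8, 10.4])
w = 1.0 / x**2

# Weighted means
x_w = np.sum(w * x) / np.sum(w)
y_w = np.sum(w * y) / np.sum(w)

# Closed-form weighted least squares estimates
beta1_hat = np.sum(w * (x - x_w) * (y - y_w)) / np.sum(w * (x - x_w) ** 2)
beta0_hat = y_w - beta1_hat * x_w
```

The resulting estimates satisfy the weighted normal equations: the weighted residuals sum to zero and are uncorrelated with x.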

Weighted Least Squares in Simple Regression

∗ Furthermore we can find their variances:

Var(β̂1) = σ² / Σ wi(xi − x̄w)²

Var(β̂0) = σ² [ 1/Σ wi + x̄w² / Σ wi(xi − x̄w)² ]

∗ Since the estimates can be written in terms of normal random variables, the sampling distributions are still normal.

∗ The weighted error mean square Sw(β̂0, β̂1)/(n − 2) also gives us an unbiased estimator of σ², so we can derive t tests for the parameters, etc.

7-4

General Weighted Least Squares Solution

∗ Let W be a diagonal matrix with diagonal elements equal to w1, . . . , wn.

∗ The weighted residual sum of squares is then defined by

Sw(β) = Σ wi(yi − xiᵗβ)² = (Y − Xβ)ᵗW(Y − Xβ).

∗ Weighted least squares finds estimates of β by minimizing this weighted sum of squares.

∗ The general solution is

β̂ = (XᵗWX)⁻¹XᵗWY.

7-5
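As a sketch, the general solution can be computed directly with NumPy; the simulated data, the weight range, and `beta_true` below are arbitrary choices for illustration:

```python
import numpy as np

# Simulated example of the general WLS solution (X^t W X)^{-1} X^t W Y.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
w = rng.uniform(0.5, 2.0, size=n)                       # known positive weights
beta_true = np.array([1.0, 2.0])
Y = X @ beta_true + rng.normal(scale=1.0 / np.sqrt(w))  # Var(eps_i) = sigma^2 / w_i

W = np.diag(w)
# Solve the weighted normal equations (X^t W X) beta = X^t W Y
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)
```

Using `np.linalg.solve` on the weighted normal equations is numerically preferable to forming the matrix inverse explicitly.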

Weighted Least Squares as a Transformation

∗ Recall from the previous chapter the model

yi = β0 + β1xi + εi

where Var(εi) = σ²xi².

∗ We can transform this into a regular least squares problem by taking

yi′ = yi/xi,  xi′ = 1/xi,  εi′ = εi/xi.

∗ Then the model is

yi′ = β1 + β0xi′ + εi′

where Var(εi′) = σ².

7-6

Weighted Least Squares as a Transformation

∗ The residual sum of squares for the transformed model is

S1(β0, β1) = Σ (yi′ − β1 − β0xi′)²
           = Σ (yi/xi − β1 − β0/xi)²
           = Σ (1/xi²)(yi − β0 − β1xi)².

∗ This is the weighted residual sum of squares with wi = 1/xi².

∗ Hence the weighted least squares solution is the same as the regular least squares solution of the transformed model.

7-7
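This equivalence is easy to verify numerically. The sketch below, with made-up data, fits WLS with wi = 1/xi² and ordinary least squares on the transformed variables, and the coefficient estimates agree (with the roles of intercept and slope swapped in the transformed model):

```python
import numpy as np

# Made-up data for the model with Var(eps_i) = sigma^2 * x_i^2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.8, 4.1, 5.7, 8.6, 9.9, 12.5])
w = 1.0 / x**2

# Weighted least squares in matrix form: columns are (beta0, beta1)
X = np.column_stack([np.ones_like(x), x])
beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Ordinary least squares on the transformed model y' = beta1 + beta0 * x'
y_t = y / x
x_t = 1.0 / x
X_t = np.column_stack([np.ones_like(x_t), x_t])  # columns: beta1 (intercept), beta0
beta_ols = np.linalg.lstsq(X_t, y_t, rcond=None)[0]
```

In the transformed fit, the intercept estimates β1 and the slope estimates β0, exactly as the transformed model states.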

Weighted Least Squares as a Transformation

∗ In general, suppose we have the linear model

Y = Xβ + ε

where Var(ε) = σ²W⁻¹.

∗ Let W^(1/2) be a diagonal matrix with diagonal entries equal to √wi.

∗ Then we have Var(W^(1/2)ε) = σ²In.

7-8

Weighted Least Squares as a Transformation

∗ Hence we consider the transformation

Y′ = W^(1/2)Y,  X′ = W^(1/2)X,  ε′ = W^(1/2)ε.

∗ This gives rise to the usual least squares model

Y′ = X′β + ε′.

∗ Using the results from regular least squares we then get the solution

β̂ = (X′ᵗX′)⁻¹X′ᵗY′ = (XᵗWX)⁻¹XᵗWY.

∗ Hence this is the weighted least squares solution.

7-9

The Weights

∗ To apply weighted least squares, we need to know the weights w1, . . . , wn.

∗ There are some instances where the weights are known.

∗ We may have a probabilistic model for Var(Y | X = xi), in which case we would use this model to find the wi.

∗ Weighted least squares may also be used to downweight or omit some observations from the fitted model.

7-10

The Weights

∗ Another common case is where each observation is not a single measure but an average of ni actual measures, and the original measures each have variance σ².

∗ In that case, standard results tell us that

Var(εi) = Var(Ȳi | X = xi) = σ²/ni.

∗ Thus we would use weighted least squares with weights wi = ni.

∗ This situation often occurs in cluster surveys.

7-11
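For the averaged-observations case, a minimal sketch simply plugs wi = ni into the matrix solution; the cluster means and sizes below are hypothetical:

```python
import numpy as np

# Hypothetical clusters: y_bar_i is the mean of n_i raw measures,
# so Var(y_bar_i | x_i) = sigma^2 / n_i and the WLS weight is w_i = n_i.
x     = np.array([1.0, 2.0, 3.0, 4.0])
y_bar = np.array([2.2, 4.1, 5.9, 8.2])   # cluster means
n_i   = np.array([5.0, 12.0, 3.0, 8.0])  # cluster sizes

X = np.column_stack([np.ones_like(x), x])
w = n_i
beta_hat = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_bar))
```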

Unknown Weights

∗ In many real-life situations, the weights are not known a priori.

∗ In such cases we need to estimate the weights in order to use weighted least squares.

∗ One way to do this is possible when there are multiple repeated observations at each value of the covariate vector.

∗ That is often possible in designed experiments, in which a number of replicates will be observed for each set value of the covariate vector.

∗ We can then estimate the variance of Y for each fixed covariate vector and use this to estimate the weights.

7-12

Pure Error

∗ Suppose that we have nj observations at x = xj, j = 1, . . . , k.

∗ Then a fitted model could be

yij = β0 + β1xj + εij,  i = 1, . . . , nj; j = 1, . . . , k.

∗ The (i, j)th residual can then be written as

eij = yij − ŷij = (yij − ȳj) + (ȳj − ŷij).

∗ The first term is referred to as the pure error.

7-13

Pure Error

∗ Note that the pure error term does not depend on the mean model at all.

∗ We can use the pure error mean squares to estimate the variance at each value of x:

s²j = (1/(nj − 1)) Σ (yij − ȳj)²,  j = 1, . . . , k,

where the sum runs over i = 1, . . . , nj.

∗ Then we can use the weights wij = 1/s²j (i = 1, . . . , nj) as the weights in a weighted least squares regression model.

7-14
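A sketch of the pure-error weighting, using hypothetical replicates at three x values (the dictionary layout of `reps` is an assumed data format, not from the slides):

```python
import numpy as np

# Hypothetical replicated design: several y observations at each distinct x_j.
reps = {
    1.0: [2.0, 2.4, 1.9],
    2.0: [4.5, 3.8, 4.1, 4.4],
    3.0: [6.9, 5.8, 6.3],
}

# Pure-error variance estimate s_j^2 at each x_j (ddof=1 gives the n_j - 1 divisor)
s2 = {xj: np.var(vals, ddof=1) for xj, vals in reps.items()}

# One row per observation, with weight w_ij = 1 / s_j^2
x = np.array([xj for xj, vals in reps.items() for _ in vals])
y = np.array([v for vals in reps.values() for v in vals])
w = np.array([1.0 / s2[xj] for xj, vals in reps.items() for _ in vals])

X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```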

Unknown Weights

∗ For observational studies, however, this is generally not possible since we will not have repeated measures at each value of the covariate(s).

∗ This is particularly true when there are multiple covariates.

∗ Sometimes, however, we may be willing to assume that the variance of the observations is the same within each level of some categorical variable but possibly different between levels.

∗ In that case we can estimate the weights assigned to observations with a given level by an estimate of the variance for that level of the categorical variable.

∗ This leads to a two-stage method of estimation.
7-15

Two-Stage Estimation

∗ In the two-stage estimation procedure we first fit a regular least squares regression to the data.

∗ If there is some evidence of non-homogeneous variance, then we examine plots of the residuals against a categorical variable which we suspect is the culprit for this problem.

∗ Note that we do still need to have some a priori knowledge of a categorical variable likely to affect the variance.

∗ This categorical variable may, or may not, be included in the mean model.
7-16

Two-Stage Estimation

∗ Let Z be the categorical variable and assume that there are nj observations with Z = j (j = 1, . . . , k).

∗ If the error variability does vary with the levels of this categorical variable, then we can use

σ̂²j = (1/(nj − 1)) Σ_{i: zi = j} r²i

as an estimate of the variability when Z = j.

∗ Note: your book uses the raw residuals ei instead of the studentized residuals ri, but that does not work well.
7-17

Two-Stage Estimation

∗ If we now assume that σ²j = cjσ², we can estimate the cj by

ĉj = σ̂²j / σ̂² = [ (1/(nj − 1)) Σ_{i: zi = j} r²i ] / [ (1/n) Σⁿi=1 r²i ].

∗ We could then use the reciprocals of these estimates as the weights in a weighted least squares regression in the second stage.

∗ Approximate inference about the parameters can then be made using the results of the weighted least squares model.
7-18
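The two-stage procedure can be sketched as follows. The data are simulated, and for simplicity this sketch groups by the level of z and uses raw OLS residuals in place of the studentized residuals the slides recommend:

```python
import numpy as np

# Simulated data: a categorical variable z with k = 3 levels drives the error sd.
rng = np.random.default_rng(1)
n = 90
x = rng.uniform(0.0, 10.0, size=n)
z = rng.integers(0, 3, size=n)            # categorical variable, levels 0, 1, 2
sigma_z = np.array([0.5, 1.0, 2.0])[z]    # level-dependent error sd (simulated truth)
y = 1.0 + 2.0 * x + rng.normal(scale=sigma_z)

X = np.column_stack([np.ones(n), x])

# Stage 1: ordinary least squares, then per-level residual variances
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols
var_j = np.array([np.var(resid[z == j], ddof=1) for j in range(3)])

# Stage 2: weighted least squares with weights 1 / sigma_hat_j^2
w = 1.0 / var_j[z]
beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```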

Problems with Two-Stage Estimation

∗ The method described above is not universally accepted, and a number of criticisms have been raised.

∗ One problem with this approach is that different datasets would result in different estimated weights, and this variability is not properly taken into account in the inference.

∗ Indeed, the authors acknowledge that the true sampling distributions are unlikely to be Student-t distributions and are unknown, so inference may be suspect.

∗ Another issue is that it is not clear how to proceed if no categorical variable explaining the variance heterogeneity can be found.
7-19
