
Uploaded by Jorge Rojas-Vallejos

This note is a basic introduction to econometrics, particularly OLS. It also contains a practical summary of some linear algebra that you should know to address the topic.


Abstract. This is a summary of the main ideas in the subject. It is not a summary of the lecture notes; it is a summary of ideas and basic concepts. The mathematical machinery is necessary, but the principles are much more important.

Linear Algebra

Properties of the Transpose

1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (AB)^T = B^T A^T
4. (cA)^T = c A^T, for c in R
5. det(A^T) = det(A)
6. a · b = a^T b = <a, b> (inner product)
7. This is important: if A has only real entries, then A^T A is a positive-semidefinite matrix.
8. (A^T)^{-1} = (A^{-1})^T
9. If A is a square matrix, then its eigenvalues are equal to the eigenvalues of its transpose.

Notice that if A ∈ M(n×m), then A A^T is always symmetric.

Properties of the Inverse

1. (A^{-1})^{-1} = A
2. (kA)^{-1} = (1/k) A^{-1}, for k in R \ {0}
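As a quick sanity check, the transpose and inverse properties above can be verified numerically on random matrices; here is a minimal NumPy sketch (the seed and matrix size are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
c, k = 2.5, -1.5

# (AB)^T = B^T A^T and (cA)^T = c A^T
ok_transpose = np.allclose((A @ B).T, B.T @ A.T) and np.allclose((c * A).T, c * A.T)
# det(A^T) = det(A)
ok_det = np.isclose(np.linalg.det(A.T), np.linalg.det(A))
# A^T A is positive semidefinite: all eigenvalues >= 0 (up to rounding)
ok_psd = bool(np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-10))
# (A^T)^{-1} = (A^{-1})^T and (kA)^{-1} = (1/k) A^{-1}
ok_inv = np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T) and \
         np.allclose(np.linalg.inv(k * A), np.linalg.inv(A) / k)
```

A random Gaussian matrix is invertible with probability one, which is why no singularity check is needed here.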

Without Equality in Opportunities, Freedom is the privilege of a few, and Oppression the reality of everyone else.

Properties of the Trace

1. tr(A) = sum_{i=1}^n a_ii
2. tr(A + B) = tr(A) + tr(B)
3. tr(cA) = c tr(A), for c in R
4. tr(AB) = tr(BA)
5. Similarity invariant: tr(P^{-1} A P) = tr(A)
6. Invariant under cyclic permutations: tr(ABCD) = tr(BCDA) = tr(CDAB) = tr(DABC)
7. tr(X ⊗ Y) = tr(X) tr(Y), where ⊗ is the tensor product, also known as the Kronecker product
8. tr(XY) = sum_{i,j} X_ij Y_ji
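The trace identities above are also easy to spot-check numerically; a minimal NumPy sketch (random 4×4 matrices, an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = (rng.normal(size=(4, 4)) for _ in range(4))
P = rng.normal(size=(4, 4))
X, Y = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

# tr(A + B) = tr(A) + tr(B)
ok_linear = np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
# cyclic permutation: tr(ABCD) = tr(BCDA)
ok_cyclic = np.isclose(np.trace(A @ B @ C @ D), np.trace(B @ C @ D @ A))
# similarity invariance: tr(P^{-1} A P) = tr(A)
ok_similar = np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A))
# tr(X ⊗ Y) = tr(X) tr(Y)
ok_kron = np.isclose(np.trace(np.kron(X, Y)), np.trace(X) * np.trace(Y))
# tr(XY) = sum_{i,j} X_ij Y_ji
ok_sum = np.isclose(np.trace(X @ Y), np.sum(X * Y.T))
```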

The Kronecker product is defined for matrices A ∈ M(m×n) and B ∈ M(p×q) as the (mp × nq) block matrix

A ⊗ B = [ a_11 B  ...  a_1n B ;  ...  ;  a_m1 B  ...  a_mn B ]

Properties of the Kronecker Product

1. (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
2. If A ∈ M(m×m) and B ∈ M(n×n), then: |A ⊗ B| = |A|^n |B|^m, (A ⊗ B)^T = A^T ⊗ B^T, and tr(A ⊗ B) = tr(A) tr(B)
3. Mixed product: (A ⊗ B)(C ⊗ D) = AC ⊗ BD

Careful! The Kronecker product does not distribute over ordinary matrix multiplication; the mixed-product rule above holds only when the dimensions are conformable.
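The Kronecker-product properties can be verified directly with `np.kron`; a minimal sketch (small 2×2 and 3×3 random matrices, an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 3
A, C = rng.normal(size=(m, m)), rng.normal(size=(m, m))
B, D = rng.normal(size=(n, n)), rng.normal(size=(n, n))

K = np.kron(A, B)

# (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
ok_inv = np.allclose(np.linalg.inv(K), np.kron(np.linalg.inv(A), np.linalg.inv(B)))
# |A ⊗ B| = |A|^n |B|^m
ok_det = np.isclose(np.linalg.det(K), np.linalg.det(A) ** n * np.linalg.det(B) ** m)
# (A ⊗ B)^T = A^T ⊗ B^T
ok_T = np.allclose(K.T, np.kron(A.T, B.T))
# tr(A ⊗ B) = tr(A) tr(B)
ok_tr = np.isclose(np.trace(K), np.trace(A) * np.trace(B))
# mixed product: (A ⊗ B)(C ⊗ D) = AC ⊗ BD
ok_mixed = np.allclose(K @ np.kron(C, D), np.kron(A @ C, B @ D))
```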

Properties of Determinants

1. det(aA) = a^n det(A), for a in R and A of order n
2. det(−A) = (−1)^n det(A)
3. det(AB) = det(A) det(B)
4. det(I_n) = 1
5. det(A^{-1}) = 1 / det(A)
6. det(B A B^{-1}) = det(A) (similarity transformation)
7. det(A) = det(A^T)
8. det(Ā) = conjugate of det(A), where the bar represents the complex conjugate
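A minimal NumPy sketch verifying the determinant properties on random matrices (seed and size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
a = 1.7

# det(aA) = a^n det(A)
ok_scale = np.isclose(np.linalg.det(a * A), a ** n * np.linalg.det(A))
# det(-A) = (-1)^n det(A)
ok_neg = np.isclose(np.linalg.det(-A), (-1) ** n * np.linalg.det(A))
# det(AB) = det(A) det(B)
ok_prod = np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# det(A^{-1}) = 1 / det(A)
ok_inv = np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
# det(B A B^{-1}) = det(A)
ok_sim = np.isclose(np.linalg.det(B @ A @ np.linalg.inv(B)), np.linalg.det(A))
# det(A) = det(A^T)
ok_T = np.isclose(np.linalg.det(A), np.linalg.det(A.T))
```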

University of Washington Page 2

Differentiation with Vectors and Matrices

∂(a^T x)/∂x = ∂(x^T a)/∂x = a
∂(Ax)/∂x = A^T
∂(x^T A)/∂x = A
∂(x^T A x)/∂x = (A + A^T) x
∂(a^T x x^T b)/∂x = a b^T x + b a^T x

Differentiation of Traces

1. ∂ tr(AX)/∂X = ∂ tr(XA)/∂X = A^T
2. ∂ tr(AXB)/∂X = ∂ tr(XBA)/∂X = A^T B^T
3. ∂ tr(A X B X^T C)/∂X = ∂ tr(X B X^T C A)/∂X = A^T C^T X B^T + C A X B
4. ∂|X|/∂X = |X| (X^{-1})^T
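Matrix-calculus identities like these are easy to get wrong, so it helps to check them against central finite differences; a minimal sketch (the helper functions are my own, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
a, b, x = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
X = rng.normal(size=(n, n))

def num_grad(f, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function of a vector."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def num_grad_mat(f, X, eps=1e-6):
    """Central finite-difference gradient of a scalar function of a matrix."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X); E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

# d(x'Ax)/dx = (A + A')x
ok_quad = np.allclose(num_grad(lambda v: v @ A @ v, x), (A + A.T) @ x, atol=1e-5)
# d(a'x x'b)/dx = a b'x + b a'x
ok_bilinear = np.allclose(num_grad(lambda v: (a @ v) * (v @ b), x),
                          a * (b @ x) + b * (a @ x), atol=1e-5)
# d tr(AXB)/dX = A' B'
ok_trace = np.allclose(num_grad_mat(lambda M: np.trace(A @ M @ B), X),
                       A.T @ B.T, atol=1e-5)
```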

Probability Distributions

Here begins the summary for Econometrics (ECON 581).

Definition 1. Normal distribution, where μ is the mean and σ² is the variance:

f(x) = (1 / (σ √(2π))) e^{ −(x − μ)² / (2σ²) },  for x in R

If the mean is zero and the variance is one, then we have the standard normal distribution N(0, 1). The normal distribution has no closed-form expression for its cumulative distribution function (CDF).

Definition 2. Chi-square distribution: if Z_1, …, Z_r are independent N(0, 1) random variables, then

A = sum_{i=1}^r Z_i² ~ χ²(r)

We say that χ²(r) has r degrees of freedom, with E(A) = r and V(A) = 2r. Thus the χ²(r) is just a sum of squared standard normal random variables. We use this distribution to test the value of the variance of a population. For instance, H0: σ² = 5 against H1: σ² > 5.
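The construction of the χ²(r) as a sum of squared standard normals, and its moments E(A) = r and V(A) = 2r, can be checked by simulation; a minimal NumPy sketch (seed, r, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
r, N = 5, 200_000

# N draws of A = sum of r squared independent N(0,1) variables
Z = rng.standard_normal((N, r))
A = (Z ** 2).sum(axis=1)

# E(A) = r and V(A) = 2r, up to Monte Carlo error
mean_ok = abs(A.mean() - r) < 0.1
var_ok = abs(A.var() - 2 * r) < 0.5
```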


Definition 3. Student's t distribution: we say that t(r) has r degrees of freedom. The t distribution has fatter tails than the standard normal distribution. If Z ~ N(0, 1) and A ~ χ²(r), with Z and A independent, then

T = Z / √(A/r) ~ t(r)

with E(T) = 0 and V(T) = r / (r − 2) for r > 2. The t distribution is thus an appropriate ratio of a standard normal and a χ²(r) random variable.

Definition 4. F distribution: we say that F(r1, r2) has r1 degrees of freedom in the numerator and r2 degrees of freedom in the denominator. If A1 ~ χ²(r1) and A2 ~ χ²(r2) are independent, then

F = (A1/r1) / (A2/r2) ~ F(r1, r2)

We use the F distribution to test whether two variances are the same or not, for instance after a structural break: H0: σ0² = σ1² against H1: σ0² > σ1².
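The ratio constructions of the t and F distributions can likewise be verified by simulation; a minimal sketch (degrees of freedom and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200_000

# t(r) = Z / sqrt(A/r), with Z ~ N(0,1) independent of A ~ chi-square(r)
r = 10
Z = rng.standard_normal(N)
A = (rng.standard_normal((N, r)) ** 2).sum(axis=1)
T = Z / np.sqrt(A / r)

t_mean_ok = abs(T.mean()) < 0.05                  # E(T) = 0
t_var_ok = abs(T.var() - r / (r - 2)) < 0.1       # V(T) = r/(r-2)

# F(r1, r2) = (A1/r1) / (A2/r2), with A1, A2 independent chi-squares
r1, r2 = 4, 12
A1 = (rng.standard_normal((N, r1)) ** 2).sum(axis=1)
A2 = (rng.standard_normal((N, r2)) ** 2).sum(axis=1)
F = (A1 / r1) / (A2 / r2)

# a known moment (not stated in the notes): E(F) = r2/(r2 - 2) for r2 > 2
f_mean_ok = abs(F.mean() - r2 / (r2 - 2)) < 0.05
```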

Probability Definitions

Definition 5. The expected value of a continuous random variable is given by:

E[X] = ∫ x f(x) dx    (1)

where the integral is taken over the support of X.

Definition 6. The variance of a continuous random variable is given by:

V[X] = Var[X] = E[(X − μ)²] = ∫ (x − μ)² f(x) dx    (2)

Definition 7. The covariance of two continuous random variables is given by:

C[X, Y] = Cov[X, Y] = E[XY] − E[X] E[Y]    (3)

Notice that the covariance of a random variable X with itself is its variance. In addition, if two random variables are independent, then their covariance is zero. The converse is not necessarily true.

Some useful properties:

1. E(a + bX + cY) = a + b E(X) + c E(Y)
2. V(a + bX + cY) = b² V(X) + c² V(Y) + 2bc Cov(X, Y)
3. Cov(a1 + b1 X + c1 Y, a2 + b2 X + c2 Y) = b1 b2 V(X) + c1 c2 V(Y) + (b1 c2 + c1 b2) Cov(X, Y)
4. If Z = h(X, Y), then E(Z) = E_X[ E_{Y|X}(Z | X) ] (law of iterated expectations)
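Properties 1 and 2 can be checked by simulating a pair of correlated normals with a known covariance matrix; a minimal sketch (all constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 300_000
mean = [1.0, -2.0]
cov = np.array([[2.0, 0.8],
                [0.8, 1.5]])          # V(X)=2, V(Y)=1.5, Cov(X,Y)=0.8
X, Y = rng.multivariate_normal(mean, cov, size=N).T

a, b, c = 3.0, 2.0, -1.0
W = a + b * X + c * Y

theory_mean = a + b * mean[0] + c * mean[1]
theory_var = b**2 * cov[0, 0] + c**2 * cov[1, 1] + 2 * b * c * cov[0, 1]

mean_ok = abs(W.mean() - theory_mean) < 0.05
var_ok = abs(W.var() - theory_var) < 0.2
```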


Econometrics

A random variable is a real-valued function defined over a sample space. The sample space (Ω) is the set of all possible outcomes. Before collecting the data (ex ante), all our estimators are random variables. Once the data are realized (ex post), we get a specific number for each estimator. These numbers are what we call estimates.

Remark 1. A simple econometric model: y_i = μ + e_i, i = 1, …, n. This is not a regression model, but it is an econometric one.

Assumptions:

1. E(e_i) = 0 for all i
2. V(e_i) = σ² for all i
3. Cov(e_i, e_j) = E(e_i e_j) = 0 for i ≠ j

In the near future, we will further assume that the residual term follows a normal distribution with mean 0 and variance σ². This is not necessary for the estimation process, but we need it to run hypothesis tests. What we are looking for is a line that fits the data, minimising the distance between the fitted line and the data; in other words, Ordinary Least Squares (OLS):

min_μ sum_{i=1}^n e_i²  ⇒  μ̂ = (1/n) sum_{i=1}^n y_i = ȳ

Definition 8. We say that an estimator θ̂ is unbiased if E(θ̂) = θ; in other words, if over repeated sampling the estimator equals the true population value on average. For this particular estimator (μ̂ = ȳ) it is easy to see that it is indeed unbiased, and its variance is V(μ̂) = σ²/n, given the assumption that the draws are iid.

Note: a linear combination of normal random variables is itself normal.

Proposition 1. If ȳ ~ N(μ, σ²/n), then

Z = (ȳ − μ) / (σ/√n) ~ N(0, 1)

Moreover, with s² = sum_{i=1}^n (y_i − ȳ)² / (n − 1),

sum_{i=1}^n ê_i² / σ² = (n − 1) s² / σ² ~ χ²(n − 1)

We lose one degree of freedom here because we need to use one datum to estimate μ. Combining the two results,

(ȳ − μ) / (s/√n) ~ t(n − 1)


Hypothesis Testing

In testing H0 against H1, we can make two kinds of mistakes: rejecting H0 when it is true, and failing to reject H0 when it is false. Thus, we define the following probabilities:

P(Type I error) = P(Reject H0 | H0 is true) = α
P(Type II error) = P(Fail to reject H0 | H0 is false) = β

and 1 − β is the so-called power of the test.

Remark 2. Multiple linear regression (population), in vector notation:

y_i = x_i'β + e_i,  i = 1, …, n

ASSUMPTIONS

1. E(e_i) = 0 for all i
2. E(e_i²) = σ² for all i
3. E(e_i e_j) = 0 for i ≠ j
4. e_i ~ N(0, σ²)    (4)
5. The X variables are non-stochastic.
6. There is NO exact linear relationship among the X variables.

If e_i is not normal, we may apply the Central Limit Theorem (CLT). However, for this we need a large sample size. How large is large enough? (n − K) ≥ 30 is one rule of thumb, but it will depend on the problem. The OLS estimator results from minimising the SSE (error sum of squares).

β̂ = ( sum_{i=1}^n x_i x_i' )^{-1} sum_{i=1}^n x_i y_i    (5)

The above estimator is useful if we are in Asymptopia. In matrix notation we have:

y = Xβ + e,  e ~ iid N(0, σ² I_n),  X non-stochastic

The OLS estimator from the sample is:

β̂ = (X'X)^{-1} X'Y = β + (X'X)^{-1} X'e    (6)

This mathematical form is useful for analysis in the finite-sample world. The OLS estimator is unbiased and its variance-covariance matrix is given by:

Cov(β̂) = E[(β̂ − E(β̂))(β̂ − E(β̂))'] = E[(X'X)^{-1} X' e e' X (X'X)^{-1}] = σ² (X'X)^{-1}    (7)

Thus,

β̂ ~ N(β, σ² (X'X)^{-1})    (8)
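Equations (6)-(8) can be sketched numerically: compute β̂ from the normal equations, compare it with a library least-squares routine, and verify unbiasedness and the covariance formula by Monte Carlo (all constants below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # fixed design
beta = np.array([1.0, 2.0, -0.5])
sigma = 1.5

y = X @ beta + rng.normal(scale=sigma, size=n)

# beta_hat = (X'X)^{-1} X'y; np.linalg.solve is more stable than an explicit inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# agrees with NumPy's least-squares routine
ok_lstsq = np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0])

# Monte Carlo: E(beta_hat) = beta and Cov(beta_hat) = sigma^2 (X'X)^{-1}
R = 5_000
draws = np.array([
    np.linalg.solve(X.T @ X, X.T @ (X @ beta + rng.normal(scale=sigma, size=n)))
    for _ in range(R)
])
ok_unbiased = np.allclose(draws.mean(axis=0), beta, atol=0.01)
ok_cov = np.allclose(np.cov(draws.T), sigma**2 * np.linalg.inv(X.T @ X), atol=2e-3)
```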

Definition 9. The matrix M_X = I_n − X(X'X)^{-1} X' is symmetric and idempotent, i.e., M_X' = M_X and M_X M_X = M_X. In general, we can define M_i = I_n − X_i (X_i'X_i)^{-1} X_i'. Thus, M_i X_j is interpreted as the residuals from regressing X_j on X_i.

Note: the following properties are important for proofs:

1. If A is a square matrix, then A = C Λ C^{-1}, where Λ is a diagonal matrix with the eigenvalues of A, and C is the matrix of the eigenvectors in column form.
2. If A is symmetric, then C'C = CC' = I_n and hence A = C Λ C'.
3. If A is symmetric and idempotent, then Λ is a diagonal matrix with eigenvalues equal to either 1 or 0.
4. If A = C Λ C', then rank(A) = r, where r = sum_{i=1}^n λ_i.

Using this definition we get ê'ê = e' M_X e and hence E(ê'ê) = σ² (n − K).

Theorem 1. Gauss-Markov Theorem: In a linear regression model in which the errors have expectation zero, are uncorrelated, and have equal variances, the best linear unbiased estimator (BLUE) of the coefficients is given by the OLS estimator. "Best" means giving the lowest possible mean squared error of the estimate. Notice that the errors need not be normal, nor independent and identically distributed (only uncorrelated and homoscedastic). The proof supposes an estimator β̃ = CY that is better than β̂ and finds the resulting contradiction.

Remark 3. Suppose that you have the model Y = X1 β1 + X2 β2 + e, with e ~ N(0, σ² I_n). Then you can estimate β1 as:

β̂1 = (X1' M2 X1)^{-1} X1' M2 Y,  where M2 = I_n − X2 (X2'X2)^{-1} X2'

Likewise, for β2 we have:

β̂2 = (X2' M1 X2)^{-1} X2' M1 Y,  where M1 = I_n − X1 (X1'X1)^{-1} X1'
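The residual-maker properties and the partitioned-regression formula of Remark 3 (the Frisch-Waugh-Lovell result) can be verified numerically; a minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(9)
n, k1, k2 = 150, 2, 3
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
X = np.hstack([X1, X2])
y = X @ rng.normal(size=k1 + k2) + rng.normal(size=n)

# full-regression coefficients
b_full = np.linalg.solve(X.T @ X, X.T @ y)

# M2 = I - X2 (X2'X2)^{-1} X2' is symmetric and idempotent
M2 = np.eye(n) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
ok_sym = np.allclose(M2, M2.T)
ok_idem = np.allclose(M2 @ M2, M2)

# partitioned estimator b1 = (X1'M2X1)^{-1} X1'M2 y matches the full regression
b1 = np.linalg.solve(X1.T @ M2 @ X1, X1.T @ M2 @ y)
ok_fwl = np.allclose(b1, b_full[:k1])
```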


4.1 Misspecification Cases

Including an Irrelevant Variable

True regression model: Y = X1 β1 + e. Estimated regression: Y = X1 β1 + X2 β2 + e. The main result is that the OLS estimators are NOT efficient; however, they are still unbiased.

β̂1 = β1 + (X1' M2 X1)^{-1} X1' M2 e
E(β̂1) = β1
Var(β̂1) = σ² (X1' M2 X1)^{-1}

Comparing the variances of the true and the inefficient estimators, the difference

X1'X1 − X1' M2 X1 = X1' X2 (X2'X2)^{-1} X2' X1

is positive semidefinite, so Var(β̂1,true) ≤ Var(β̂1,est), and the claim is established.

Omitting a Relevant Variable

True regression model: Y = X1 β1 + X2 β2 + e. Estimated regression: Y = X1 β1 + e. In this case, we get bias in the estimator, so we do not even bother to analyse the variance.

β̂1 = β1 + (X1'X1)^{-1} X1' X2 β2 + (X1'X1)^{-1} X1' e
E(β̂1) = β1 + (X1'X1)^{-1} X1' X2 β2
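The omitted-variable bias formula can be checked by Monte Carlo: with a fixed design, the average of the short-regression estimates should converge to β1 + (X1'X1)^{-1} X1'X2 β2. A minimal sketch (all constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
# X2 is correlated with the regressor in X1, so the bias is non-zero
X2 = (0.6 * X1[:, 1] + rng.normal(size=n)).reshape(-1, 1)
beta1 = np.array([1.0, 2.0])
beta2 = np.array([1.5])
sigma = 1.0

# bias formula: E(b1) = beta1 + (X1'X1)^{-1} X1'X2 beta2
bias = np.linalg.solve(X1.T @ X1, X1.T @ X2 @ beta2)

R = 20_000
est = np.zeros((R, 2))
for r in range(R):
    y = X1 @ beta1 + X2 @ beta2 + rng.normal(scale=sigma, size=n)
    est[r] = np.linalg.solve(X1.T @ X1, X1.T @ y)   # short regression, omitting X2

ok_bias = np.allclose(est.mean(axis=0), beta1 + bias, atol=0.02)
ok_biased_away_from_truth = not np.allclose(est.mean(axis=0), beta1, atol=0.02)
```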


Summary

β̂ = (X'X)^{-1} X'Y
E(β̂) = β
Cov(β̂) = σ² (X'X)^{-1}
σ̂² = ê'ê / (n − K) = (1/(n − K)) sum_{i=1}^n ê_i²

E(σ̂²) = σ² and Var(σ̂²) = 2σ⁴ / (n − K)

Since e ~ N(0, σ² I_n), we have e/σ ~ N(0, I_n) and e'e/σ² ~ χ²(n), while the residuals ê = M_X Y, with M_X = I_n − X(X'X)^{-1} X', satisfy ê'ê/σ² ~ χ²(n − K).
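The moments of σ̂² can be verified by simulation: with a fixed design, σ̂² should average to σ² with variance 2σ⁴/(n − K). A minimal sketch (all constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(11)
n, K = 60, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = rng.normal(size=K)
sigma = 2.0

# residual maker: M annihilates X, so the residuals equal M e
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

R = 20_000
s2 = np.zeros(R)
for r in range(R):
    e = rng.normal(scale=sigma, size=n)
    e_hat = M @ (X @ beta + e)
    s2[r] = (e_hat @ e_hat) / (n - K)

ok_mean = abs(s2.mean() - sigma**2) < 0.05            # E(sigma_hat^2) = sigma^2
ok_var = abs(s2.var() - 2 * sigma**4 / (n - K)) < 0.05  # Var = 2 sigma^4/(n-K)
```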

