Pass week 7 – Pass class week 7

Title Pass week 7 - Pass class week 7
Author John Ceena
Course Introductory Econometrics
Institution University of New South Wales


IE PASS 2019 – Week 6: MLR Assumptions

- MLR1–4 ensure that the OLS estimators are unbiased.
- MLR1–5 are needed to prove that OLS is BLUE.
- MLR5 (Homoskedasticity): Var(u | x1, …, xk) = σ²
  o Under MLR1–5 it can also be proven that the OLS estimators are the most efficient (lowest variance, i.e. the "B" in BLUE: Best).
- Heteroskedasticity: Var(u | x1, …, xk) ≠ σ²
  o The variance of u changes as the values of (x1, …, xk) change.
- Implications of heteroskedasticity:
  o Inference is invalidated: the usual standard errors are incorrect, so t and F statistics are invalid.
  o OLS is no longer BLUE (not "Best"): it is not the most efficient estimator.
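The contrast between the two cases can be illustrated with a short simulation (a sketch, not from the course materials; the variance function Var(u|x) = σ²·x is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(1.0, 10.0, n)

# Homoskedastic errors: Var(u|x) = sigma^2, a constant (here sigma = 2)
u_hom = rng.normal(0.0, 2.0, n)

# Heteroskedastic errors: Var(u|x) = sigma^2 * x, growing with x
u_het = rng.normal(0.0, 2.0 * np.sqrt(x), n)

# Compare the sample variance of u in the low-x and high-x halves
lo, hi = x < 5.5, x >= 5.5
var_hom = np.var(u_hom[lo]), np.var(u_hom[hi])   # roughly equal
var_het = np.var(u_het[lo]), np.var(u_het[hi])   # second is much larger
print(var_hom, var_het)
```

Under homoskedasticity the two sample variances agree; under heteroskedasticity the high-x half has visibly larger error variance.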

White test (special form) – worked steps
- ŷ² implicitly includes the squared and interaction terms of the x's.
- H0: δ1 = δ2 = 0; H1: H0 is false.
1. Regress the model by OLS, as usual. Obtain the residuals û and fitted values ŷ; compute û² and ŷ².
2. Regress û² on ŷ and ŷ². Save R²_û².
3. Calculate the F-stat as before. Note: k = 2, so the critical values come from F(2, n−3).

Regression model:

Question 1
Using the estimation results, is there evidence of heteroskedasticity?

Question 2

Testing for Heteroskedasticity

Breusch–Pagan (BP) test
- Intuition behind the BP test: can we explain û² using x1 … xk? If û² is not independent of the x's, then heteroskedasticity is present.
  û² = δ0 + δ1·x1 + δ2·x2 + … + δk·xk + v
- H0: δ1 = δ2 = … = δk = 0; H1: H0 is false.
  o The null hypothesis is homoskedasticity.
  o If we reject the null hypothesis, our model suffers from heteroskedasticity.
Steps:
1. Regress the model by OLS, as usual. Obtain the squared OLS residuals û².
2. Run the regression of û² on the explanatory variables and save its R-squared, R²_û².
3. Compute the F statistic (or LM statistic) for joint significance of x1 … xk.
   a. Under the null these follow the F(k, n−k−1) and χ²(k) distributions respectively.
4. Reject H0 if the statistic is too large (indicating R²_û² is large) or the p-value is too small.

F-stat = (R²_û² / k) / ((1 − R²_û²) / (n − k − 1))
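The BP steps and this F-stat can be sketched in a few lines (a hypothetical simulated example; the DGP, coefficients and sample size are invented for illustration):

```python
import numpy as np

# Invented heteroskedastic DGP: the error std dev grows with x1
rng = np.random.default_rng(1)
n, k = 400, 2
x1 = rng.uniform(1, 10, n)
x2 = rng.uniform(1, 10, n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(0, 0.5 * x1, n)
X = np.column_stack([np.ones(n), x1, x2])

# Step 1: OLS as usual; obtain the squared residuals u_hat^2
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u2 = (y - X @ beta) ** 2

# Step 2: regress u_hat^2 on the explanatory variables; save its R^2
g = np.linalg.lstsq(X, u2, rcond=None)[0]
r2 = 1 - np.sum((u2 - X @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)

# Steps 3-4: F-stat = (R^2/k) / ((1 - R^2)/(n - k - 1)); reject if too large
F = (r2 / k) / ((1 - r2) / (n - k - 1))
print(F)
```

With heteroskedasticity built into the data, the F-stat lands far above the F(2, 397) 5% critical value (roughly 3.02), so the test correctly rejects homoskedasticity.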

White Test
- Why use the White test? The BP test assumes a linear relationship [û² = δ0 + δ1·x1 + δ2·x2 + …], so it does not capture non-linear relationships.
- Special form: regress the squared residuals on the fitted values and their square:
  û² = δ0 + δ1·ŷ + δ2·ŷ² + v
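The special-form regression can be sketched the same way (again a hypothetical simulated example with an invented DGP):

```python
import numpy as np

# Same invented heteroskedastic DGP as in the BP sketch
rng = np.random.default_rng(2)
n = 400
x1 = rng.uniform(1, 10, n)
x2 = rng.uniform(1, 10, n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(0, 0.5 * x1, n)
X = np.column_stack([np.ones(n), x1, x2])

# Obtain residuals u_hat and fitted values y_hat from the usual OLS fit
beta = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta
u2 = (y - y_hat) ** 2

# Special-form White regression: u_hat^2 on y_hat and y_hat^2 (so k = 2)
Z = np.column_stack([np.ones(n), y_hat, y_hat ** 2])
g = np.linalg.lstsq(Z, u2, rcond=None)[0]
r2 = 1 - np.sum((u2 - Z @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)

# F-stat with critical values from F(2, n-3)
F = (r2 / 2) / ((1 - r2) / (n - 3))
print(F)
```

Because ŷ and ŷ² are functions of all the x's, this single regression picks up non-linear patterns in the error variance without adding a regressor for every square and cross-product.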

Resolving the Problem of Heteroskedasticity

Heteroskedasticity-Robust Inference
- Heteroskedasticity can occur in known or unknown forms.
- In the presence of heteroskedasticity of an unknown form, it is possible to recalculate the standard errors so that t-stats (and F-stats) are valid for inference.
  o The adjustment is called a heteroskedasticity-robust procedure. Stata can do this (the robust option).

- Robust standard errors are not always the best: they are only justified as the sample size becomes large.
  o They may validate inference, but the OLS estimators are still not BLUE if heteroskedasticity is present.
- If heteroskedasticity is of known form, we can use better estimation procedures.
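A minimal sketch of White's robust ("sandwich") standard errors on invented simulated data; the HC1 small-sample scaling n/(n−k) used here matches what Stata's robust option applies:

```python
import numpy as np

# Invented DGP with strong heteroskedasticity (error std dev grows with x^2)
rng = np.random.default_rng(3)
n = 1000
x = rng.uniform(1, 10, n)
y = 2.0 + 1.0 * x + rng.normal(0, 0.2 * x ** 2, n)
X = np.column_stack([np.ones(n), x])

beta = np.linalg.solve(X.T @ X, X.T @ y)
u = y - X @ beta
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)

# Usual OLS standard errors: sigma^2 (X'X)^-1 -- invalid here
se_usual = np.sqrt(np.diag((u @ u / (n - k)) * XtX_inv))

# Robust: (X'X)^-1 [X' diag(u^2) X] (X'X)^-1, with HC1 scaling n/(n-k)
meat = (X * u[:, None] ** 2).T @ X
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv) * n / (n - k))
print(se_usual, se_robust)
```

With variance increasing in x, the usual formula understates the slope's standard error, while the robust version corrects it; the point estimates themselves are unchanged.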

Weighted Least Squares (WLS) Estimation (known form of heteroskedasticity)
- The key idea is that we know how the variance of the error term is related to the x's. We can therefore exploit this information and adjust for it to improve our estimators and predictions.
- In this course, we assume heteroskedasticity takes a multiplicative form: the variance is a constant σ² multiplied by some function h(x):
  σi² = Var(u | x1, …, xk) = σ²·h(x)

- h(x) is the function that describes how the error variance is related to the x's.
  o It often takes the form: h(x) = exp(δ0 + δ1·x1 + … + δk·xk)
- Through algebra it can be shown that:
  Var(u / √h(x) | x1, …, xk) = σ²
  This is a constant, so the transformed error is homoskedastic.
- If we re-weight the whole OLS regression by 1/√h(x), it will be homoskedastic.
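The re-weighting can be sketched with an assumed known variance function h(x) = x (invented for illustration): after dividing every variable, including the constant, by √h(x), the error variance no longer depends on x.

```python
import numpy as np

# Assumed known form: Var(u|x) = sigma^2 * h(x) with h(x) = x (illustrative)
rng = np.random.default_rng(4)
n = 2000
x = rng.uniform(1, 10, n)
h = x
u = rng.normal(0, 1.5 * np.sqrt(h), n)  # sigma = 1.5, so sigma^2 = 2.25
y = 2.0 + 1.0 * x + u

# Transform every variable (including the constant) by 1/sqrt(h(x))
w = 1.0 / np.sqrt(h)
X = np.column_stack([np.ones(n), x])
Xs, ys = X * w[:, None], y * w

beta_wls = np.linalg.lstsq(Xs, ys, rcond=None)[0]

# Transformed residuals u/sqrt(h) are homoskedastic with variance sigma^2
us = ys - Xs @ beta_wls
lo, hi = x < 5.5, x >= 5.5
print(np.var(us[lo]), np.var(us[hi]))  # both close to 2.25
```

Note that WLS here is just OLS on the transformed variables, which is why the usual OLS machinery (and its inference) applies to the weighted regression.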

RESET test (continued; see the Functional Form Misspecification section below):
4. Compute the F-stat, which is drawn from the F(2, n−k−3) distribution under the null. Reject H0 when the F-stat > c (the F(2, n−k−3) critical value).
5. If we reject the null hypothesis (that ŷ² and ŷ³ are jointly insignificant), this implies we have omitted a function of an explanatory variable.
   o The model suffers from functional form misspecification.
- Problem: the test does not tell us which specific variable or function to include.

WLS estimates (continued)
- Running this new weighted regression model produces new estimates.
- If we have correctly specified the form of the variance as a function of the explanatory variables (i.e. our h(x) is exactly correct), then the weighted least squares (WLS) estimators are more efficient than the OLS estimators (even with robust standard errors).
- WLS also leads to new t and F statistics that have t and F distributions.

Question 3

Question 4

Question 5
a) Show what the appropriate WLS weights are.
b) What is the transformed equation with the WLS weights?

Functional Form Misspecification (FFM)
- If an omitted variable is a function of an explanatory variable already in the model, the model suffers from functional form misspecification.
  o E.g. omitting x² or interaction terms (x1·x2).
- This results in biased estimators: ZCM (MLR4) is violated.
- The RESET test can be used to test whether misspecification is caused by omitting a non-linear function of the explanatory variables.
- Intuition behind the RESET test: when the true model is y = β0 + β1·x1 + … + βk·xk + u, additional functions of the x's (x², x³, x1·x2) should be insignificant when added.

RESET test:
1. Run the original model and save the fitted values ŷ.
2. Generate the squares and cubes of those fitted values, ŷ² and ŷ³, and add them to the model:
   y = β0 + β1·x1 + … + βk·xk + δ1·ŷ² + δ2·ŷ³ + v
3. Test H0: δ1 = 0, δ2 = 0 in this expanded (unrestricted) model.
   o Under H0 the original (restricted) model is correctly specified (no FFM).
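The RESET steps can be sketched on invented data where the true model contains x1² but we fit a purely linear model, so the test should reject:

```python
import numpy as np

# Invented DGP with a quadratic term the fitted model omits
rng = np.random.default_rng(5)
n, k = 500, 2
x1 = rng.uniform(0, 4, n)
x2 = rng.uniform(0, 4, n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.8 * x1 ** 2 + rng.normal(0, 1.0, n)
X = np.column_stack([np.ones(n), x1, x2])

# Steps 1-2: fit the original (restricted) model, form y_hat^2 and y_hat^3
beta = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta
Xu = np.column_stack([X, y_hat ** 2, y_hat ** 3])

# Step 3: F test of the two added terms via restricted vs unrestricted SSR
ssr_r = np.sum((y - X @ beta) ** 2)
beta_u = np.linalg.lstsq(Xu, y, rcond=None)[0]
ssr_u = np.sum((y - Xu @ beta_u) ** 2)
F = ((ssr_r - ssr_u) / 2) / (ssr_u / (n - k - 3))
print(F)
```

Since the omitted x1² shows up strongly through ŷ², the F-stat lands well above the F(2, n−k−3) 5% critical value (roughly 3.0), correctly signalling functional form misspecification.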

Is there evidence of functional form misspecification at the 5% significance level? Outline the steps for a RESET test....

