Title | Stock Watson 3U Exercise Solutions Chapter 17 Instructors
Author | Leo Lamas
Course | Econometria
Institution | Universidad Carlos III de Madrid
Pages | 26
File Size | 672.2 KB
Introduction to Econometrics (3rd Updated Edition, Global Edition)
by
James H. Stock and Mark W. Watson
Solutions to End-of-Chapter Exercises: Chapter 17* (This version August 17, 2014)
*Limited distribution: For Instructors Only. Answers to all odd-numbered questions are provided to students on the textbook website. If you find errors in the solutions, please pass them along to us at [email protected].
©2015 Pearson Education, Ltd.
Stock/Watson - Introduction to Econometrics - 3rd Updated Edition - Answers to Exercises: Chapter 17
17.1. (a) Suppose there are n observations. Let $b_1$ be an arbitrary estimator of $\beta_1$. Given the estimator $b_1$, the sum of squared errors for the given regression model is

$$\sum_{i=1}^{n} (Y_i - b_1 X_i)^2.$$

$\hat{\beta}_1^{RLS}$, the restricted least squares estimator of $\beta_1$, minimizes the sum of squared errors. That is, $\hat{\beta}_1^{RLS}$ satisfies the first-order condition for the minimization, which requires that the derivative of the sum of squared errors with respect to $b_1$ equal zero:

$$\sum_{i=1}^{n} 2(Y_i - b_1 X_i)(-X_i) = 0.$$

Solving for $b_1$ from the first-order condition yields the restricted least squares estimator

$$\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2}.$$
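As a quick numerical check (not part of the original solution), the closed-form estimator can be verified against the first-order condition it solves. All simulated values below (n, $\beta_1$, and the distributions of $X$ and $u$) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta1 = 200, 1.5               # illustrative values, not from the text
X = rng.normal(2.0, 1.0, n)
u = rng.normal(0.0, 0.5, n)
Y = beta1 * X + u                 # regression through the origin: Y_i = beta1*X_i + u_i

# Closed-form restricted least squares estimator
beta_rls = np.sum(X * Y) / np.sum(X ** 2)

# The first-order condition sum_i 2*(Y_i - b*X_i)*(-X_i) = 0 holds at beta_rls
foc = np.sum(2.0 * (Y - beta_rls * X) * (-X))
print(beta_rls, foc)
```

The first-order condition is zero up to floating-point error exactly because `beta_rls` is its algebraic solution.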
(b) We show first that $\hat{\beta}_1^{RLS}$ is unbiased. We can represent the restricted least squares estimator $\hat{\beta}_1^{RLS}$ in terms of the regressors and errors:

$$\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2} = \frac{\sum_{i=1}^{n} X_i(\beta_1 X_i + u_i)}{\sum_{i=1}^{n} X_i^2} = \beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}.$$

Thus

$$E(\hat{\beta}_1^{RLS}) = \beta_1 + E\left[\frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}\right] = \beta_1 + E\left[\frac{\sum_{i=1}^{n} X_i E(u_i \mid X_1, \ldots, X_n)}{\sum_{i=1}^{n} X_i^2}\right] = \beta_1,$$

where the second equality follows by the law of iterated expectations, and the third equality follows from

$$\frac{\sum_{i=1}^{n} X_i E(u_i \mid X_1, \ldots, X_n)}{\sum_{i=1}^{n} X_i^2} = 0$$
because the observations are i.i.d. and $E(u_i \mid X_i) = 0$. (Note that $E(u_i \mid X_1, \ldots, X_n) = E(u_i \mid X_i)$ because the observations are i.i.d.)

Under assumptions 1-3 of Key Concept 17.1, $\hat{\beta}_1^{RLS}$ is asymptotically normally distributed. The large-sample normal approximation to the limiting distribution of $\hat{\beta}_1^{RLS}$ follows from considering

$$\hat{\beta}_1^{RLS} - \beta_1 = \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} = \frac{\frac{1}{n}\sum_{i=1}^{n} X_i u_i}{\frac{1}{n}\sum_{i=1}^{n} X_i^2}.$$
Consider first the numerator, which is the sample average of $v_i = X_i u_i$. By assumption 1 of Key Concept 17.1, $v_i$ has mean zero: $E(X_i u_i) = E[X_i E(u_i \mid X_i)] = 0$. By assumption 2, $v_i$ is i.i.d. By assumption 3, $\mathrm{var}(v_i)$ is finite. Let $\bar{v} = \frac{1}{n}\sum_{i=1}^{n} X_i u_i$; then $\sigma_{\bar{v}}^2 = \sigma_v^2/n$. Using the central limit theorem, the standardized sample average

$$\frac{\bar{v}}{\sigma_{\bar{v}}} = \frac{1}{\sigma_v}\frac{1}{\sqrt{n}}\sum_{i=1}^{n} X_i u_i \xrightarrow{d} N(0, 1),$$

or

$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n} X_i u_i \xrightarrow{d} N(0, \sigma_v^2).$$

For the denominator, $X_i^2$ is i.i.d. with finite variance (because $X$ has a finite fourth moment), so by the law of large numbers

$$\frac{1}{n}\sum_{i=1}^{n} X_i^2 \xrightarrow{p} E(X^2).$$

Combining the results on the numerator and the denominator and applying Slutsky's theorem leads to
$$\sqrt{n}\,(\hat{\beta}_1^{RLS} - \beta_1) = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} X_i u_i}{\frac{1}{n}\sum_{i=1}^{n} X_i^2} \xrightarrow{d} N\!\left(0, \frac{\mathrm{var}(X_i u_i)}{[E(X^2)]^2}\right).$$
(c) $\hat{\beta}_1^{RLS}$ is a linear estimator:

$$\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2} = \sum_{i=1}^{n} a_i Y_i, \quad \text{where } a_i = \frac{X_i}{\sum_{j=1}^{n} X_j^2}.$$

The weight $a_i$ ($i = 1, \ldots, n$) depends on $X_1, \ldots, X_n$ but not on $Y_1, \ldots, Y_n$. Thus

$$\hat{\beta}_1^{RLS} = \beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}.$$

$\hat{\beta}_1^{RLS}$ is conditionally unbiased because

$$E(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n) = E\left[\beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \ldots, X_n\right] = \beta_1 + E\left[\frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \ldots, X_n\right] = \beta_1.$$

The final equality uses the fact that

$$E\left[\frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \ldots, X_n\right] = \frac{\sum_{i=1}^{n} X_i E(u_i \mid X_1, \ldots, X_n)}{\sum_{i=1}^{n} X_i^2} = 0$$

because the observations are i.i.d. and $E(u_i \mid X_i) = 0$.
(d) The conditional variance of $\hat{\beta}_1^{RLS}$, given $X_1, \ldots, X_n$, is

$$\mathrm{var}(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n) = \mathrm{var}\left(\beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \ldots, X_n\right) = \frac{\sum_{i=1}^{n} X_i^2\, \mathrm{var}(u_i \mid X_1, \ldots, X_n)}{\left(\sum_{i=1}^{n} X_i^2\right)^2} = \frac{\sum_{i=1}^{n} X_i^2\, \sigma_u^2}{\left(\sum_{i=1}^{n} X_i^2\right)^2} = \frac{\sigma_u^2}{\sum_{i=1}^{n} X_i^2}.$$
(e) The conditional variance of the OLS estimator $\hat{\beta}_1$ is

$$\mathrm{var}(\hat{\beta}_1 \mid X_1, \ldots, X_n) = \frac{\sigma_u^2}{\sum_{i=1}^{n} (X_i - \bar{X})^2}.$$

Since

$$\sum_{i=1}^{n} (X_i - \bar{X})^2 = \sum_{i=1}^{n} X_i^2 - 2\bar{X}\sum_{i=1}^{n} X_i + n\bar{X}^2 = \sum_{i=1}^{n} X_i^2 - n\bar{X}^2 \le \sum_{i=1}^{n} X_i^2,$$

the OLS estimator has a larger conditional variance:

$$\mathrm{var}(\hat{\beta}_1 \mid X_1, \ldots, X_n) \ge \mathrm{var}(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n).$$

The restricted least squares estimator $\hat{\beta}_1^{RLS}$ is more efficient.

(f) Under assumption 5 of Key Concept 17.1, conditional on $X_1, \ldots, X_n$, $\hat{\beta}_1^{RLS}$ is normally distributed since it is a weighted average of the normally distributed variables $u_i$:

$$\hat{\beta}_1^{RLS} = \beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}.$$
Using the conditional mean and conditional variance of $\hat{\beta}_1^{RLS}$ derived in parts (c) and (d), respectively, the sampling distribution of $\hat{\beta}_1^{RLS}$, conditional on $X_1, \ldots, X_n$, is

$$\hat{\beta}_1^{RLS} \sim N\!\left(\beta_1, \frac{\sigma_u^2}{\sum_{i=1}^{n} X_i^2}\right).$$
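The variance comparison in parts (d) and (e) can be illustrated on a single simulated draw of the regressors; the error variance and the distribution of $X$ below are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_u2 = 1.0                      # assumed homoskedastic error variance
X = rng.normal(3.0, 1.0, 50)        # illustrative draw with nonzero mean

var_rls = sigma_u2 / np.sum(X ** 2)               # part (d): sigma_u^2 / sum X_i^2
var_ols = sigma_u2 / np.sum((X - X.mean()) ** 2)  # part (e): sigma_u^2 / sum (X_i - Xbar)^2

# sum (X_i - Xbar)^2 = sum X_i^2 - n*Xbar^2 <= sum X_i^2, so var_rls <= var_ols
print(var_rls, var_ols)
```

The gap between the two variances is larger the further the mean of $X$ is from zero, since the slack in the inequality is exactly $n\bar{X}^2$.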
(g) The estimator $\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} Y_i}{\sum_{i=1}^{n} X_i}$ satisfies

$$\tilde{\beta}_1 = \beta_1 + \frac{\sum_{i=1}^{n} u_i}{\sum_{i=1}^{n} X_i}.$$

The conditional variance is

$$\mathrm{var}(\tilde{\beta}_1 \mid X_1, \ldots, X_n) = \frac{n\sigma_u^2}{\left(\sum_{i=1}^{n} X_i\right)^2}.$$

In order to prove

$$\mathrm{var}(\tilde{\beta}_1 \mid X_1, \ldots, X_n) \ge \mathrm{var}(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n),$$

we need to show

$$\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)^2 \le \frac{1}{n}\sum_{i=1}^{n} X_i^2,$$

or equivalently

$$\left(\sum_{i=1}^{n} X_i\right)^2 \le n \sum_{i=1}^{n} X_i^2.$$

This inequality comes directly from the Cauchy-Schwarz inequality

$$\left(\sum_{i=1}^{n} a_i b_i\right)^2 \le \left(\sum_{i=1}^{n} a_i^2\right)\left(\sum_{i=1}^{n} b_i^2\right),$$

which, with $a_i = 1$ and $b_i = X_i$, implies

$$\left(\sum_{i=1}^{n} X_i\right)^2 = \left(\sum_{i=1}^{n} 1 \cdot X_i\right)^2 \le \left(\sum_{i=1}^{n} 1^2\right)\left(\sum_{i=1}^{n} X_i^2\right) = n \sum_{i=1}^{n} X_i^2.$$

That is, $\mathrm{var}(\tilde{\beta}_1 \mid X_1, \ldots, X_n) \ge \mathrm{var}(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n)$.

Note: because $\tilde{\beta}_1$ is linear and conditionally unbiased, the result also follows directly from the Gauss-Markov theorem.
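A brute-force numerical check of the Cauchy-Schwarz step (again, illustrative and not part of the original solution; the sample sizes and t distribution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
holds = True
for m in rng.integers(2, 30, size=1000):
    x = rng.standard_t(5, size=int(m))
    # Cauchy-Schwarz with a_i = 1, b_i = X_i: (sum X_i)^2 <= n * sum X_i^2
    holds &= x.sum() ** 2 <= len(x) * np.sum(x ** 2) + 1e-9
print(holds)
```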
17.2. The sample covariance is

$$s_{XY} = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})$$
$$= \frac{1}{n-1}\sum_{i=1}^{n} [(X_i - \mu_X) - (\bar{X} - \mu_X)][(Y_i - \mu_Y) - (\bar{Y} - \mu_Y)]$$
$$= \frac{1}{n-1}\left[\sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y) - n(\bar{X} - \mu_X)(\bar{Y} - \mu_Y)\right]$$
$$= \frac{n}{n-1}\cdot\frac{1}{n}\sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y) - \frac{n}{n-1}(\bar{X} - \mu_X)(\bar{Y} - \mu_Y),$$

where the final equality follows from the definitions of $\bar{X}$ and $\bar{Y}$, which imply $\sum_{i=1}^{n}(X_i - \mu_X) = n(\bar{X} - \mu_X)$ and $\sum_{i=1}^{n}(Y_i - \mu_Y) = n(\bar{Y} - \mu_Y)$, and by collecting terms.

We apply the law of large numbers to $s_{XY}$ to check its convergence in probability. The second term converges in probability to zero because $\bar{X} \xrightarrow{p} \mu_X$ and $\bar{Y} \xrightarrow{p} \mu_Y$, so $(\bar{X} - \mu_X)(\bar{Y} - \mu_Y) \xrightarrow{p} 0$ by Slutsky's theorem. Now consider the first term. Since $(X_i, Y_i)$ are i.i.d., the random variables $(X_i - \mu_X)(Y_i - \mu_Y)$ are i.i.d. By the definition of covariance, $E[(X_i - \mu_X)(Y_i - \mu_Y)] = \sigma_{XY}$. To apply the law of large numbers to the first term, we need

$$\mathrm{var}[(X_i - \mu_X)(Y_i - \mu_Y)] < \infty,$$

which is satisfied since

$$\mathrm{var}[(X_i - \mu_X)(Y_i - \mu_Y)] \le E[(X_i - \mu_X)^2(Y_i - \mu_Y)^2] \le \sqrt{E[(X_i - \mu_X)^4]\,E[(Y_i - \mu_Y)^4]} < \infty.$$

The second inequality follows by the Cauchy-Schwarz inequality, and the final bound follows from the finite fourth moments of $(X_i, Y_i)$. Applying the law of large numbers, we have
$$\frac{1}{n}\sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y) \xrightarrow{p} E[(X_i - \mu_X)(Y_i - \mu_Y)] = \sigma_{XY}.$$

Also, $\frac{n}{n-1} \to 1$, so the first term for $s_{XY}$ converges in probability to $\sigma_{XY}$. Combining the results on the two terms for $s_{XY}$, we have $s_{XY} \xrightarrow{p} \sigma_{XY}$.
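The consistency of $s_{XY}$ can be seen in a small simulation (not from the original solution); the bivariate normal design and the value of $\sigma_{XY}$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_xy = 0.6   # population covariance of a bivariate normal with unit variances

errs = {}
for n in (100, 100_000):
    cov = [[1.0, sigma_xy], [sigma_xy, 1.0]]
    X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    s_xy = np.sum((X - X.mean()) * (Y - Y.mean())) / (n - 1)
    errs[n] = abs(s_xy - sigma_xy)
print(errs)  # absolute sampling error at each n
```

By the law of large numbers the error converges to zero in probability; any single draw at small $n$ can of course be unlucky.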
17.3. (a) Using Equation (17.19), we have

$$\sqrt{n}(\hat{\beta}_1 - \beta_1) = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2}
= \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} [(X_i - \mu_X) - (\bar{X} - \mu_X)] u_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2}$$
$$= \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} (X_i - \mu_X) u_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} - \frac{(\bar{X} - \mu_X)\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} u_i\right)}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2}
= \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} v_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} - \frac{(\bar{X} - \mu_X)\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} u_i\right)}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2}$$

by defining $v_i = (X_i - \mu_X) u_i$.

(b) The random variables $u_1, \ldots, u_n$ are i.i.d. with mean $\mu_u = 0$ and variance $\sigma_u^2$. By the central limit theorem,

$$\frac{\sqrt{n}(\bar{u} - \mu_u)}{\sigma_u} = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} u_i}{\sigma_u} \xrightarrow{d} N(0, 1).$$

The law of large numbers implies $\bar{X} \xrightarrow{p} \mu_X$, or $\bar{X} - \mu_X \xrightarrow{p} 0$. By the consistency of the sample variance, $\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$ converges in probability to the population variance $\mathrm{var}(X_i)$, which is finite and nonzero. The result, that the second term converges in probability to zero, then follows from Slutsky's theorem.

(c) The random variable $v_i = (X_i - \mu_X) u_i$ has finite variance:

$$\mathrm{var}(v_i) \le E[(X_i - \mu_X)^2 u_i^2] \le \sqrt{E[(X_i - \mu_X)^4]\,E(u_i^4)} < \infty.$$

The second inequality follows by the Cauchy-Schwarz inequality, and the final bound follows from the finite fourth moments of $(X_i, u_i)$. The finite variance, along with the facts that $v_i$ has mean zero (by assumption 1 of Key Concept 17.1) and that $v_i$ is i.i.d. (by assumption 2), implies that the sample average $\bar{v}$ satisfies the requirements of the central limit theorem. Thus,
$$\frac{\bar{v}}{\sigma_{\bar{v}}} = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} v_i}{\sigma_v}$$

satisfies the central limit theorem.

(d) Applying the central limit theorem, we have

$$\frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} v_i}{\sigma_v} \xrightarrow{d} N(0, 1).$$

Because the sample variance is a consistent estimator of the population variance, we have

$$\frac{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2}{\mathrm{var}(X_i)} \xrightarrow{p} 1.$$

Using Slutsky's theorem,

$$\frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} v_i \big/ \sigma_v}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2 \big/ \sigma_X^2} \xrightarrow{d} N(0, 1),$$

or equivalently

$$\frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} v_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} \xrightarrow{d} N\!\left(0, \frac{\mathrm{var}(v_i)}{[\mathrm{var}(X_i)]^2}\right).$$

Thus

$$\sqrt{n}(\hat{\beta}_1 - \beta_1) = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} v_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} - \frac{(\bar{X} - \mu_X)\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} u_i\right)}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} \xrightarrow{d} N\!\left(0, \frac{\mathrm{var}(v_i)}{[\mathrm{var}(X_i)]^2}\right),$$

since the second term for $\sqrt{n}(\hat{\beta}_1 - \beta_1)$ converges in probability to zero, as shown in part (b).
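A Monte Carlo sketch of this limit result (illustrative, not from the original solution): under an assumed heteroskedastic design with known moments, the sampling variance of $\sqrt{n}(\hat{\beta}_1 - \beta_1)$ should be close to $\mathrm{var}(v_i)/[\mathrm{var}(X_i)]^2$. All parameter values are assumptions made for the simulation:

```python
import numpy as np

rng = np.random.default_rng(4)
beta0, beta1, n, reps = 1.0, 2.0, 500, 2000   # illustrative values

# Heteroskedastic design, so the robust variance var(v_i)/[var(X_i)]^2
# with v_i = (X_i - mu_X) u_i is the relevant asymptotic variance.
draws = np.empty(reps)
for r in range(reps):
    X = rng.normal(0.0, 1.0, n)                       # mu_X = 0, var(X) = 1
    u = rng.normal(0.0, 1.0, n) * np.sqrt(0.5 + 0.5 * X ** 2)
    Y = beta0 + beta1 * X + u
    xd = X - X.mean()
    b1 = np.sum(xd * (Y - Y.mean())) / np.sum(xd ** 2)   # OLS slope
    draws[r] = np.sqrt(n) * (b1 - beta1)

# var(v_i) = E[X^2 (0.5 + 0.5 X^2)] = 0.5*E[X^2] + 0.5*E[X^4] = 0.5 + 1.5 = 2.0,
# and var(X_i) = 1, so the asymptotic variance is 2.0
print(draws.var())
```

With `X` standard normal, $E(X^2) = 1$ and $E(X^4) = 3$, which gives the population value 2.0 in the comment.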
17.4. (a) Write $\hat{\beta}_1 - \beta_1 = a_n S_n$, where $a_n = \frac{1}{\sqrt{n}}$ and $S_n = \sqrt{n}(\hat{\beta}_1 - \beta_1)$. Now $a_n \to 0$ and $S_n \xrightarrow{d} S$, where $S$ is distributed $N(0, \sigma^2)$. By Slutsky's theorem, $a_n S_n \xrightarrow{d} 0 \cdot S = 0$. Convergence in distribution to the constant zero implies convergence in probability, so $\Pr(|\hat{\beta}_1 - \beta_1| \ge \varepsilon) \to 0$ for any $\varepsilon > 0$; that is, $\hat{\beta}_1 - \beta_1 \xrightarrow{p} 0$ and $\hat{\beta}_1$ is consistent.

(b) We have (i) $\frac{s_u^2}{\sigma_u^2} \xrightarrow{p} 1$, and (ii) $g(x) = \sqrt{x}$ is a continuous function; thus, from the continuous mapping theorem,

$$\sqrt{\frac{s_u^2}{\sigma_u^2}} = \frac{s_u}{\sigma_u} \xrightarrow{p} 1.$$
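Part (b) can be illustrated numerically (an illustrative sketch, not part of the original solution; $\sigma_u$ and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
sigma_u = 2.0                      # assumed population standard deviation
for n in (50, 5000, 500_000):
    u = rng.normal(0.0, sigma_u, n)
    s_u = u.std(ddof=1)            # square root of the sample variance
    print(n, s_u / sigma_u)        # ratio approaches 1 as n grows
```

Because $x \mapsto \sqrt{x}$ is continuous, consistency of $s_u^2$ carries over to $s_u$, which is what the printed ratios show.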
17.5. Because $E(W^4) = [E(W^2)]^2 + \mathrm{var}(W^2) \ge [E(W^2)]^2$, we have $[E(W^2)]^2 \le E(W^4) < \infty$. Thus $E(W^2) < \infty$.
17.6. Using the law of iterated expectations, we have

$$E(\hat{\beta}_1) = E[E(\hat{\beta}_1 \mid X_1, \ldots, X_n)] = E(\beta_1) = \beta_1.$$
17.7. (a) The joint probability distribution function of $u_i, u_j, X_i, X_j$ is $f(u_i, u_j, X_i, X_j)$. The conditional probability distribution function of $u_i$ and $X_i$ given $u_j$ and $X_j$ is $f(u_i, X_i \mid u_j, X_j)$. Since $(u_i, X_i)$, $i = 1, \ldots, n$, are i.i.d., $f(u_i, X_i \mid u_j, X_j) = f(u_i, X_i)$. By the definition of the conditional probability distribution function, we have

$$f(u_i, u_j, X_i, X_j) = f(u_i, X_i \mid u_j, X_j)\, f(u_j, X_j) = f(u_i, X_i)\, f(u_j, X_j).$$

(b) The conditional probability distribution function of $u_i$ and $u_j$ given $X_i$ and $X_j$ equals

$$f(u_i, u_j \mid X_i, X_j) = \frac{f(u_i, u_j, X_i, X_j)}{f(X_i, X_j)} = \frac{f(u_i, X_i)\, f(u_j, X_j)}{f(X_i)\, f(X_j)} = f(u_i \mid X_i)\, f(u_j \mid X_j).$$

The first and third equalities use the definition of the conditional probability distribution function. The second equality uses the conclusion from part (a) and the independence between $X_i$ and $X_j$. Substituting $f(u_i, u_j \mid X_i, X_j) = f(u_i \mid X_i)\, f(u_j \mid X_j)$ into the definition of the conditional expectation, we have

$$E(u_i u_j \mid X_i, X_j) = \iint u_i u_j\, f(u_i, u_j \mid X_i, X_j)\, du_i\, du_j = \iint u_i u_j\, f(u_i \mid X_i)\, f(u_j \mid X_j)\, du_i\, du_j$$
$$= \int u_i f(u_i \mid X_i)\, du_i \int u_j f(u_j \mid X_j)\, du_j = E(u_i \mid X_i)\, E(u_j \mid X_j).$$

(c) Let $Q = (X_1, X_2, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n)$, so that $f(u_i \mid X_1, \ldots, X_n) = f(u_i \mid X_i, Q)$. Write
$$f(u_i \mid X_i, Q) = \frac{f(u_i, X_i, Q)}{f(X_i, Q)} = \frac{f(u_i, X_i)\, f(Q)}{f(X_i)\, f(Q)} = \frac{f(u_i, X_i)}{f(X_i)} = f(u_i \mid X_i),$$

where the first equality uses the definition of the conditional density, the second uses the fact that $(u_i, X_i)$ and $Q$ are independent, and the final equality uses the definition of the conditional density again. The result then follows directly.

(d) An argument like that used in (c) implies

$$f(u_i, u_j \mid X_1, \ldots, X_n) = f(u_i, u_j \mid X_i, X_j),$$

and the result then follows from part (b).
17.8. (a) Because the errors are heteroskedastic, the Gauss-Markov theorem does not apply: the OLS estimator of $\beta_1$ is not BLUE.

(b) We obtain the BLUE estimator of $\beta_1$ by applying OLS to the transformed (weighted) regression

$$\tilde{Y}_i = \beta_0 \tilde{X}_{0i} + \beta_1 \tilde{X}_{1i} + \tilde{u}_i,$$

where

$$\tilde{Y}_i = \frac{Y_i}{\sqrt{\theta_0 + \theta_1 |X_i|}}, \quad \tilde{X}_{0i} = \frac{1}{\sqrt{\theta_0 + \theta_1 |X_i|}}, \quad \tilde{X}_{1i} = \frac{X_i}{\sqrt{\theta_0 + \theta_1 |X_i|}}, \quad \tilde{u}_i = \frac{u_i}{\sqrt{\theta_0 + \theta_1 |X_i|}},$$

so that $\mathrm{var}(\tilde{u}_i \mid X_i) = 1$ and the transformed errors are homoskedastic.

(c) Using Equations (17.2) and (17.19), the OLS estimator $\hat{\beta}_1$ is

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2} = \beta_1 + \frac{\sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2}.$$

As a weighted average of the normally distributed variables $u_i$, $\hat{\beta}_1$ is normally distributed with mean $E(\hat{\beta}_1) = \beta_1$. The conditional variance of $\hat{\beta}_1$, given $X_1, \ldots, X_n$, is

$$\mathrm{var}(\hat{\beta}_1 \mid X_1, \ldots, X_n) = \mathrm{var}\left(\beta_1 + \frac{\sum_{i=1}^{n}(X_i - \bar{X})u_i}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \,\Big|\, X_1, \ldots, X_n\right) = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2\,\mathrm{var}(u_i \mid X_i)}{\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right]^2} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2\,(\theta_0 + \theta_1 |X_i|)}{\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right]^2}.$$

Thus the exact sampling distribution of the OLS estimator $\hat{\beta}_1$, conditional on $X_1, \ldots, X_n$, is

$$\hat{\beta}_1 \sim N\!\left(\beta_1, \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2\,(\theta_0 + \theta_1 |X_i|)}{\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right]^2}\right).$$
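The weighting idea in part (b) can be sketched numerically. This is an illustrative simulation, not the manual's own code: the parameter values are assumptions, and $\theta_0, \theta_1$ are treated as known so the weights can be computed exactly:

```python
import numpy as np

rng = np.random.default_rng(5)
theta0, theta1 = 1.0, 2.0          # assumed known variance parameters
beta0, beta1, n = 0.5, 1.0, 100_000

X = rng.normal(0.0, 1.0, n)
w = np.sqrt(theta0 + theta1 * np.abs(X))   # conditional std. dev. of u_i
u = w * rng.normal(0.0, 1.0, n)            # var(u_i | X_i) = theta0 + theta1*|X_i|
Y = beta0 + beta1 * X + u

# Transform: Y_i/w_i = beta0*(1/w_i) + beta1*(X_i/w_i) + u_i/w_i,
# where u_i/w_i is homoskedastic with unit variance. OLS on the
# transformed data is the weighted least squares (BLUE) estimator.
Z = np.column_stack([1.0 / w, X / w])
b_wls, *_ = np.linalg.lstsq(Z, Y / w, rcond=None)
print(b_wls)
```

Dividing through by the conditional standard deviation is what restores the homoskedasticity needed for the Gauss-Markov theorem to apply.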