Book solution: Introduction to Econometrics by James H. Stock and Mark W. Watson, Solutions to Odd-Numbered Exercises

Course: Econometrics
Institution: Universidad Carlos III de Madrid

Summary

Solutions to the Stock and Watson textbook...


Description

For Students: Solutions to Odd-Numbered End-of-Chapter Exercises

Chapter 2 Review of Probability

2.1.

(a) Probability distribution function for Y

Outcome (number of heads)    Probability
Y = 0                        0.25
Y = 1                        0.50
Y = 2                        0.25

(b) Cumulative probability distribution function for Y

Outcome (number of heads)    Probability
Y < 0                        0
0 ≤ Y < 1                    0.25
1 ≤ Y < 2                    0.75
Y ≥ 2                        1.0

(c) μ_Y = E(Y) = (0 × 0.25) + (1 × 0.50) + (2 × 0.25) = 1.00.

Using Key Concept 2.3: var(Y) = E(Y²) − [E(Y)]², and

E(Y²) = (0² × 0.25) + (1² × 0.50) + (2² × 0.25) = 1.50,

so that

var(Y) = E(Y²) − [E(Y)]² = 1.50 − (1.00)² = 0.50.
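As a quick cross-check of (c), a short Python sketch (plain Python, no extra packages) recomputes the mean and variance directly from the probability table above:

```python
# Check E(Y), E(Y^2), and var(Y) for the two-coin-flip distribution above.
pmf = {0: 0.25, 1: 0.50, 2: 0.25}

mean = sum(y * p for y, p in pmf.items())               # E(Y)
second_moment = sum(y**2 * p for y, p in pmf.items())   # E(Y^2)
variance = second_moment - mean**2                      # var(Y) = E(Y^2) - [E(Y)]^2

print(mean, second_moment, variance)  # 1.0  1.5  0.5
```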

2.3.

For the two new random variables W = 3 + 6X and V = 20 − 7Y, we have:

(a) E(V) = E(20 − 7Y) = 20 − 7E(Y) = 20 − 7 × 0.78 = 14.54,
    E(W) = E(3 + 6X) = 3 + 6E(X) = 3 + 6 × 0.70 = 7.2.

(b) σ²_W = var(3 + 6X) = 6² σ²_X = 36 × 0.21 = 7.56,
    σ²_V = var(20 − 7Y) = (−7)² σ²_Y = 49 × 0.1716 = 8.4084.

(c) σ_WV = cov(3 + 6X, 20 − 7Y) = 6 × (−7) cov(X, Y) = −42 × 0.084 = −3.528,

    corr(W, V) = σ_WV / (σ_W σ_V) = −3.528 / √(7.56 × 8.4084) = −0.4425.
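The following Python sketch is only a numerical check of the linear-transformation rules used above, starting from the moments quoted in the solution (E(X) = 0.70, var(X) = 0.21, E(Y) = 0.78, var(Y) = 0.1716, cov(X, Y) = 0.084):

```python
# Moments of X and Y used in the solution.
mu_X, var_X = 0.70, 0.21
mu_Y, var_Y = 0.78, 0.1716
cov_XY = 0.084

# W = 3 + 6X, V = 20 - 7Y
E_W = 3 + 6 * mu_X                            # 7.2
E_V = 20 - 7 * mu_Y                           # 14.54
var_W = 6**2 * var_X                          # 7.56
var_V = (-7)**2 * var_Y                       # 8.4084
cov_WV = 6 * (-7) * cov_XY                    # -3.528
corr_WV = cov_WV / (var_W * var_V) ** 0.5     # about -0.4425

print(E_W, E_V, var_W, var_V, cov_WV, round(corr_WV, 4))
```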


Y2 1.0


2.5.

Let X denote temperature in °F and Y denote temperature in °C. Recall that Y = 0 when X = 32 and Y = 100 when X = 212; this implies Y = (100/180) × (X − 32) or Y = −17.78 + (5/9) × X. Using Key Concept 2.3, μ_X = 70°F implies that μ_Y = −17.78 + (5/9) × 70 = 21.11°C, and σ_X = 7°F implies σ_Y = (5/9) × 7 = 3.89°C.

2.7.

Using obvious notation, C = M + F; thus μ_C = μ_M + μ_F and σ²_C = σ²_M + σ²_F + 2cov(M, F). This implies

(a) μ_C = 40 + 45 = $85,000 per year.

(b) corr(M, F) = cov(M, F) / (σ_M σ_F), so that cov(M, F) = σ_M σ_F corr(M, F). Thus cov(M, F) = 12 × 18 × 0.80 = 172.80, where the units are squared thousands of dollars per year.

(c) σ²_C = σ²_M + σ²_F + 2cov(M, F), so that σ²_C = 12² + 18² + 2 × 172.80 = 813.60, and σ_C = √813.60 = 28.524 thousand dollars per year.

(d) First you need to look up the current Euro/dollar exchange rate in the Wall Street Journal, the Federal Reserve web page, or another financial data outlet. Suppose that this exchange rate is e (say e = 0.80 Euros per dollar); each dollar is therefore worth e Euros. The mean is therefore e × μ_C (in units of thousands of Euros per year), and the standard deviation is e × σ_C (in units of thousands of Euros per year). The correlation is unit-free and is unchanged.
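A short numerical check of (a)–(c) and the currency conversion in (d), using only the standard library; the exchange rate e = 0.80 is the same hypothetical value used above:

```python
import math

# Combined income C = M + F, with means and standard deviations in
# thousands of dollars per year (values from the exercise).
mu_M, mu_F = 40.0, 45.0
sd_M, sd_F = 12.0, 18.0
corr_MF = 0.80

cov_MF = sd_M * sd_F * corr_MF              # 172.8
mu_C = mu_M + mu_F                          # 85
var_C = sd_M**2 + sd_F**2 + 2 * cov_MF      # 813.6
sd_C = math.sqrt(var_C)                     # about 28.524

# Converting to Euros with a hypothetical rate e (Euros per dollar)
# rescales the mean and standard deviation but not the correlation.
e = 0.80
print(mu_C, round(sd_C, 3), e * mu_C, round(e * sd_C, 3))
```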

2.9.

                     Value of X
Value of Y           1        5        8        Probability distribution of Y
14                   0.02     0.17     0.02     0.21
22                   0.05     0.15     0.03     0.23
30                   0.10     0.05     0.15     0.30
40                   0.03     0.02     0.10     0.15
65                   0.01     0.01     0.09     0.11
Probability
distribution of X    0.21     0.40     0.39     1.00

(a) The probability distribution of Y is given in the table above.

E(Y) = 14 × 0.21 + 22 × 0.23 + 30 × 0.30 + 40 × 0.15 + 65 × 0.11 = 30.15
E(Y²) = 14² × 0.21 + 22² × 0.23 + 30² × 0.30 + 40² × 0.15 + 65² × 0.11 = 1127.23
var(Y) = E(Y²) − [E(Y)]² = 218.21
σ_Y = 14.77


(b) The conditional probability distribution of Y given X = 8 is given in the table below.

Value of Y        14           22           30           40           65
Pr(Y | X = 8)     0.02/0.39    0.03/0.39    0.15/0.39    0.10/0.39    0.09/0.39

E(Y | X = 8) = 14 × (0.02/0.39) + 22 × (0.03/0.39) + 30 × (0.15/0.39) + 40 × (0.10/0.39) + 65 × (0.09/0.39) = 39.21
E(Y² | X = 8) = 14² × (0.02/0.39) + 22² × (0.03/0.39) + 30² × (0.15/0.39) + 40² × (0.10/0.39) + 65² × (0.09/0.39) = 1778.7
var(Y | X = 8) = 1778.7 − 39.21² = 241.65
σ_(Y|X=8) = 15.54

(c) E(XY) = (1 × 14 × 0.02) + (1 × 22 × 0.05) + ⋯ + (8 × 65 × 0.09) = 171.7

cov(X, Y) = E(XY) − E(X)E(Y) = 171.7 − 5.33 × 30.15 = 11.0
corr(X, Y) = cov(X, Y) / (σ_X σ_Y) = 11.0 / (2.60 × 14.77) = 0.286
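The calculations in (a)–(c) can be verified directly from the joint probability table; the sketch below assumes NumPy is available:

```python
import numpy as np

# Joint distribution Pr(X = x, Y = y) from the table above:
# rows are Y = 14, 22, 30, 40, 65; columns are X = 1, 5, 8.
y_vals = np.array([14, 22, 30, 40, 65])
x_vals = np.array([1, 5, 8])
joint = np.array([[0.02, 0.17, 0.02],
                  [0.05, 0.15, 0.03],
                  [0.10, 0.05, 0.15],
                  [0.03, 0.02, 0.10],
                  [0.01, 0.01, 0.09]])

p_y = joint.sum(axis=1)                       # marginal of Y: 0.21, 0.23, 0.30, 0.15, 0.11
p_x = joint.sum(axis=0)                       # marginal of X: 0.21, 0.40, 0.39
E_Y = (y_vals * p_y).sum()                    # 30.15
var_Y = (y_vals**2 * p_y).sum() - E_Y**2      # about 218.2

# Conditional distribution of Y given X = 8 (last column).
p_y_given_x8 = joint[:, 2] / p_x[2]
E_Y_given_x8 = (y_vals * p_y_given_x8).sum()                          # about 39.2
var_Y_given_x8 = (y_vals**2 * p_y_given_x8).sum() - E_Y_given_x8**2   # about 241.7

# Covariance and correlation.
E_X = (x_vals * p_x).sum()                                # 5.33
E_XY = (np.outer(y_vals, x_vals) * joint).sum()           # about 171.7
cov_XY = E_XY - E_X * E_Y                                 # about 11.0
sd_X = np.sqrt((x_vals**2 * p_x).sum() - E_X**2)
corr_XY = cov_XY / (sd_X * np.sqrt(var_Y))                # about 0.286

print(round(E_Y, 2), round(var_Y, 2), round(E_Y_given_x8, 2),
      round(var_Y_given_x8, 2), round(cov_XY, 2), round(corr_XY, 3))
```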

2.11.

(a) 0.90
(b) 0.05
(c) 0.05
(d) When Y is distributed χ²₁₀, then Y/10 is distributed F₁₀,∞, so (b) and (c) give the same answer.
(e) Y = Z², where Z ~ N(0, 1); thus Pr(Y ≤ 1) = Pr(−1 ≤ Z ≤ 1) = 0.68.
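These answers can be spot-checked with SciPy's distribution functions; the sketch below verifies (d) and (e) (the check point q is arbitrary, not taken from the exercise):

```python
from scipy import stats

# (d): if Y ~ chi-squared with 10 df, then Y/10 ~ F(10, infinity); the two
# give the same tail probabilities. Check at an arbitrary point q.
q = 2.0
p_chi2 = 1 - stats.chi2.cdf(10 * q, df=10)       # Pr(Y/10 > q)
p_f = 1 - stats.f.cdf(q, dfn=10, dfd=10**7)      # F with a huge denominator df
print(round(p_chi2, 4), round(p_f, 4))           # essentially equal

# (e): Y = Z^2 with Z standard normal, so Pr(Y <= 1) = Pr(-1 <= Z <= 1).
print(round(stats.chi2.cdf(1, df=1), 4),
      round(stats.norm.cdf(1) - stats.norm.cdf(-1), 4))   # both about 0.6827
```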

2.13.

(a) E(Y²) = var(Y) + μ²_Y = 1 + 0 = 1; E(W²) = var(W) + μ²_W = 100 + 0 = 100.

(b) Y and W are symmetric around 0, thus skewness is equal to 0; because their mean is zero, this means that the third moment is zero.

(c) The kurtosis of the normal is 3, so 3 = E(Y − μ_Y)⁴ / σ⁴_Y; solving yields E(Y⁴) = 3; a similar calculation yields the results for W.

(d) First, condition on X = 0, so that S = W:

E(S | X = 0) = 0, E(S² | X = 0) = 100, E(S³ | X = 0) = 0, E(S⁴ | X = 0) = 3 × 100².

Similarly, E(S | X = 1) = 0, E(S² | X = 1) = 1, E(S³ | X = 1) = 0, E(S⁴ | X = 1) = 3.

From the law of iterated expectations,

E(S) = E(S | X = 0) Pr(X = 0) + E(S | X = 1) Pr(X = 1) = 0
E(S²) = E(S² | X = 0) Pr(X = 0) + E(S² | X = 1) Pr(X = 1) = 100 × 0.01 + 1 × 0.99 = 1.99
E(S³) = E(S³ | X = 0) Pr(X = 0) + E(S³ | X = 1) Pr(X = 1) = 0
E(S⁴) = E(S⁴ | X = 0) Pr(X = 0) + E(S⁴ | X = 1) Pr(X = 1) = 3 × 100² × 0.01 + 3 × 1 × 0.99 = 302.97.


(e) μ_S = E(S) = 0, thus E(S − μ_S)³ = E(S³) = 0 from part (d). Thus skewness = 0. Similarly, σ²_S = E(S − μ_S)² = E(S²) = 1.99, and E(S − μ_S)⁴ = E(S⁴) = 302.97. Thus, kurtosis = 302.97 / (1.99²) = 76.5.
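A short Python check of (d) and (e), computing the mixture moments of S from the conditional moments and the weights Pr(X = 0) = 0.01 and Pr(X = 1) = 0.99:

```python
# S = W with probability 0.01 and S = Y with probability 0.99,
# where Y ~ N(0, 1) and W ~ N(0, 100).
p0, p1 = 0.01, 0.99
var_Y, var_W = 1.0, 100.0

# Mixture moments (normal: odd central moments are 0, fourth moment is 3*sigma^4).
E_S2 = var_W * p0 + var_Y * p1                    # 1.99
E_S3 = 0.0
E_S4 = 3 * var_W**2 * p0 + 3 * var_Y**2 * p1      # 302.97

skewness = E_S3 / E_S2**1.5                       # 0
kurtosis = E_S4 / E_S2**2                         # about 76.5
print(E_S2, E_S4, skewness, round(kurtosis, 1))
```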

2.15.

 9.6  10 Y  10 10.4  10   (a) Pr (9.6  Y 10.4)  Pr    4/n 4/n   4/n 10.4  10  9.6  10  Pr Z  4/n   4/ n

where Z ~ N(0, 1). Thus, 10.4  10   9.6  10 (i) n  20; Pr  Z   Pr (0.89  Z  0.89)  0.63 4/n   4/n

10.4  10  9.6 10 Z (ii) n  100; Pr    Pr(2.00  Z  2.00)  0.954 4/n   4/n 10.4  10   9.6  10 Z   Pr( 6.32  Z  6.32)  1.000 4/ n   4/ n

(iii) n  1000; Pr 

 c Y  10  (b) Pr (10  c  Y 10  c)  Pr   4/n  4/ n c  c  Pr  Z  4/n  4/n

c   4/n   . 

c

gets large, and the probability converges to 1. 4/n (c) This follows from (b) and the definition of convergence in probability given in Key Concept 2.6. As n get large
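The three probabilities in (a) follow from the normal approximation and can be reproduced with SciPy (assumed available):

```python
from math import sqrt
from scipy.stats import norm

# Pr(9.6 <= Ybar <= 10.4) when E(Y) = 10 and var(Y) = 4, using the normal
# approximation to the distribution of the sample mean.
for n in (20, 100, 1000):
    se = sqrt(4 / n)
    prob = norm.cdf((10.4 - 10) / se) - norm.cdf((9.6 - 10) / se)
    print(n, round(prob, 3))   # 0.629, 0.954, 1.0
```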

2.17.

μ_Y = p = 0.4 and σ²_Y = p(1 − p) = 0.4 × 0.6 = 0.24.

(a) (i) Pr(Ȳ ≥ 0.43) = Pr( (Ȳ − 0.4)/√(0.24/n) ≥ (0.43 − 0.4)/√(0.24/n) ) = Pr(Z ≥ 0.6124) = 0.27.

    (ii) Pr(Ȳ ≤ 0.37) = Pr( (Ȳ − 0.4)/√(0.24/n) ≤ (0.37 − 0.4)/√(0.24/n) ) = Pr(Z ≤ −1.22) = 0.11.

(b) We know Pr(−1.96 ≤ Z ≤ 1.96) = 0.95, thus we want n to satisfy (0.41 − 0.40)/√(0.24/n) ≥ 1.96 and (0.39 − 0.40)/√(0.24/n) ≤ −1.96. Solving these inequalities yields n ≥ 9220.
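A numerical check using SciPy; the sample sizes used below (n = 100 for (i) and n = 400 for (ii)) are the ones implied by the z-values 0.6124 and −1.22 above, and are assumed here since the exercise statement is not reproduced in this document:

```python
from math import sqrt, ceil
from scipy.stats import norm

p, var = 0.4, 0.4 * 0.6          # mean and variance of the Bernoulli(0.4) draws

# (a) Normal approximation to the sample proportion; sample sizes are assumed
# to be 100 and 400, consistent with the z-values in the solution.
se_100 = sqrt(var / 100)
se_400 = sqrt(var / 400)
print(round(1 - norm.cdf((0.43 - p) / se_100), 2))   # Pr(Ybar >= 0.43), about 0.27
print(round(norm.cdf((0.37 - p) / se_400), 2))       # Pr(Ybar <= 0.37), about 0.11

# (b) Smallest n with Pr(0.39 <= Ybar <= 0.41) >= 0.95:
# need 0.01 / sqrt(0.24/n) >= 1.96, i.e. n >= 1.96^2 * 0.24 / 0.01^2.
n_min = 1.96**2 * var / 0.01**2
print(ceil(n_min))                                   # 9220
```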


2.19.

(a) Pr(Y = y_j) = Σ_{i=1}^{l} Pr(X = x_i, Y = y_j) = Σ_{i=1}^{l} Pr(Y = y_j | X = x_i) Pr(X = x_i).

(b) E(Y) = Σ_{j=1}^{k} y_j Pr(Y = y_j) = Σ_{j=1}^{k} y_j Σ_{i=1}^{l} Pr(Y = y_j | X = x_i) Pr(X = x_i)
         = Σ_{i=1}^{l} [ Σ_{j=1}^{k} y_j Pr(Y = y_j | X = x_i) ] Pr(X = x_i)
         = Σ_{i=1}^{l} E(Y | X = x_i) Pr(X = x_i).

(c) When X and Y are independent, Pr(X = x_i, Y = y_j) = Pr(X = x_i) Pr(Y = y_j), so

σ_XY = E[(X − μ_X)(Y − μ_Y)]
     = Σ_{i=1}^{l} Σ_{j=1}^{k} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i, Y = y_j)
     = Σ_{i=1}^{l} Σ_{j=1}^{k} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i) Pr(Y = y_j)
     = [ Σ_{i=1}^{l} (x_i − μ_X) Pr(X = x_i) ] [ Σ_{j=1}^{k} (y_j − μ_Y) Pr(Y = y_j) ]
     = E(X − μ_X) E(Y − μ_Y) = 0 × 0 = 0,

corr(X, Y) = σ_XY / (σ_X σ_Y) = 0 / (σ_X σ_Y) = 0.

2.21.

(a) E(X − μ)³ = E[(X − μ)²(X − μ)] = E[X³ − 2X²μ + Xμ² − X²μ + 2Xμ² − μ³]
             = E(X³) − 3E(X²)μ + 3E(X)μ² − μ³
             = E(X³) − 3E(X²)E(X) + 3E(X)[E(X)]² − [E(X)]³
             = E(X³) − 3E(X²)E(X) + 2[E(X)]³.

(b) E(X − μ)⁴ = E[(X³ − 3X²μ + 3Xμ² − μ³)(X − μ)]
             = E[X⁴ − 3X³μ + 3X²μ² − Xμ³ − X³μ + 3X²μ² − 3Xμ³ + μ⁴]
             = E(X⁴) − 4E(X³)μ + 6E(X²)μ² − 4E(X)μ³ + μ⁴
             = E(X⁴) − 4[E(X)][E(X³)] + 6[E(X)]²[E(X²)] − 3[E(X)]⁴.


2.23.

X and Z are two independently distributed standard normal random variables, so

μ_X = μ_Z = 0, σ²_X = σ²_Z = 1, σ_XZ = 0.

(a) Because of the independence between X and Z, Pr(Z = z | X = x) = Pr(Z = z), and E(Z | X) = E(Z) = 0. Thus E(Y | X) = E(X² + Z | X) = E(X² | X) + E(Z | X) = X² + 0 = X².

(b) E(X²) = σ²_X + μ²_X = 1, and μ_Y = E(X² + Z) = E(X²) + μ_Z = 1 + 0 = 1.

(c) E(XY) = E(X³ + ZX) = E(X³) + E(ZX). Using the fact that the odd moments of a standard normal random variable are all zero, we have E(X³) = 0. Using the independence between X and Z, we have E(ZX) = μ_Z μ_X = 0. Thus E(XY) = E(X³) + E(ZX) = 0.

(d) cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E[(X − 0)(Y − 1)] = E(XY − X) = E(XY) − E(X) = 0 − 0 = 0,

    corr(X, Y) = σ_XY / (σ_X σ_Y) = 0 / (σ_X σ_Y) = 0.

2.25.

(a) Σ_{i=1}^{n} a x_i = (a x_1 + a x_2 + a x_3 + ⋯ + a x_n) = a(x_1 + x_2 + x_3 + ⋯ + x_n) = a Σ_{i=1}^{n} x_i.

(b) Σ_{i=1}^{n} (x_i + y_i) = (x_1 + y_1 + x_2 + y_2 + ⋯ + x_n + y_n)
                            = (x_1 + x_2 + ⋯ + x_n) + (y_1 + y_2 + ⋯ + y_n)
                            = Σ_{i=1}^{n} x_i + Σ_{i=1}^{n} y_i.

(c) Σ_{i=1}^{n} a = (a + a + a + ⋯ + a) = na.

(d) Σ_{i=1}^{n} (a + b x_i + c y_i)² = Σ_{i=1}^{n} (a² + b² x_i² + c² y_i² + 2ab x_i + 2ac y_i + 2bc x_i y_i)
    = na² + b² Σ_{i=1}^{n} x_i² + c² Σ_{i=1}^{n} y_i² + 2ab Σ_{i=1}^{n} x_i + 2ac Σ_{i=1}^{n} y_i + 2bc Σ_{i=1}^{n} x_i y_i.

2.27.

(a) E(W) = E[E(W | Z)] = E[E(X − X̂ | Z)] = E[E(X | Z) − E(X | Z)] = 0.

(b) E(WZ) = E[E(WZ | Z)] = E[Z E(W | Z)] = E[Z × 0] = 0.

(c) Using the hint: V = W − h(Z), so that E(V²) = E(W²) + E[h(Z)²] − 2E[W h(Z)]. Using an argument like that in (b), E[W h(Z)] = 0. Thus E(V²) = E(W²) + E[h(Z)²], and the result follows by recognizing that E[h(Z)²] ≥ 0 because h(z)² ≥ 0 for any value of z.


Chapter 3 Review of Statistics

3.1.

The central limit theorem suggests that when the sample size (n) is large, the distribution of the sample average (Ȳ) is approximately N(μ_Y, σ²_Ȳ) with σ²_Ȳ = σ²_Y / n. Given a population with μ_Y = 100 and σ²_Y = 43.0, we have

(a) n = 100, σ²_Ȳ = σ²_Y / n = 43/100 = 0.43, and

Pr(Ȳ ≤ 101) = Pr( (Ȳ − 100)/√0.43 ≤ (101 − 100)/√0.43 ) ≈ Φ(1.525) = 0.9364.

(b) n = 64, σ²_Ȳ = σ²_Y / n = 43/64 = 0.6719, and

Pr(101 ≤ Ȳ ≤ 103) = Pr( (101 − 100)/√0.6719 ≤ (Ȳ − 100)/√0.6719 ≤ (103 − 100)/√0.6719 )
                  ≈ Φ(3.6599) − Φ(1.2200) = 0.9999 − 0.8888 = 0.1111.

(c) n = 165, σ²_Ȳ = σ²_Y / n = 43/165 = 0.2606, and

Pr(Ȳ > 98) = 1 − Pr(Ȳ ≤ 98) = 1 − Pr( (Ȳ − 100)/√0.2606 ≤ (98 − 100)/√0.2606 )
           ≈ 1 − Φ(−3.9178) = Φ(3.9178) = 1.0000 (rounded to four decimal places).
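The three normal-approximation probabilities can be reproduced with SciPy (assumed available):

```python
from math import sqrt
from scipy.stats import norm

mu, var = 100.0, 43.0   # population mean and variance of Y

def se(n):
    return sqrt(var / n)   # standard deviation of the sample mean

# (a) Pr(Ybar <= 101) with n = 100
print(round(norm.cdf((101 - mu) / se(100)), 4))                                  # about 0.9364
# (b) Pr(101 <= Ybar <= 103) with n = 64
print(round(norm.cdf((103 - mu) / se(64)) - norm.cdf((101 - mu) / se(64)), 4))   # about 0.1111
# (c) Pr(Ybar > 98) with n = 165
print(round(1 - norm.cdf((98 - mu) / se(165)), 4))                               # about 1.0
```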

3.3.

Denote each voter's preference by Y: Y = 1 if the voter prefers the incumbent and Y = 0 if the voter prefers the challenger. Y is a Bernoulli random variable with probability Pr(Y = 1) = p and Pr(Y = 0) = 1 − p. From the solution to Exercise 3.2, Y has mean p and variance p(1 − p).

(a) p̂ = 215/400 = 0.5375.

(b) The estimated variance of p̂ is var(p̂) = p̂(1 − p̂)/n = 0.5375 × (1 − 0.5375)/400 = 6.2148 × 10⁻⁴. The standard error is SE(p̂) = (var(p̂))^(1/2) = 0.0249.


(c) The computed t-statistic is

t^act = (p̂ − p₀)/SE(p̂) = (0.5375 − 0.5)/0.0249 = 1.506.

Because of the large sample size (n = 400), we can use Equation (3.14) in the text to get the p-value for the test H₀: p = 0.5 vs. H₁: p ≠ 0.5:

p-value = 2Φ(−|t^act|) = 2Φ(−1.506) = 2 × 0.066 = 0.132.

(d) Using Equation (3.17) in the text, the p-value for the test H₀: p = 0.5 vs. H₁: p > 0.5 is

p-value = 1 − Φ(t^act) = 1 − Φ(1.506) = 1 − 0.934 = 0.066.

(e) Part (c) is a two-sided test and the p-value is the area in the tails of the standard normal distribution outside ±(calculated t-statistic). Part (d) is a one-sided test and the p-value is the area under the standard normal distribution to the right of the calculated t-statistic.

(f) For the test H₀: p = 0.5 vs. H₁: p > 0.5, we cannot reject the null hypothesis at the 5% significance level. The p-value 0.066 is larger than 0.05. Equivalently, the calculated t-statistic 1.506 is less than the critical value 1.64 for a one-sided test with a 5% significance level. The test suggests that the survey did not contain statistically significant evidence that the incumbent was ahead of the challenger at the time of the survey.
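A short Python check of (a)–(d), reproducing the standard error, t-statistic, and both p-values (SciPy assumed available for the normal CDF):

```python
from math import sqrt
from scipy.stats import norm

n, successes = 400, 215
p_hat = successes / n                                   # 0.5375
se = sqrt(p_hat * (1 - p_hat) / n)                      # about 0.0249

t = (p_hat - 0.5) / se                                  # about 1.506
p_two_sided = 2 * norm.cdf(-abs(t))                     # about 0.132
p_one_sided = 1 - norm.cdf(t)                           # about 0.066

print(round(p_hat, 4), round(se, 4), round(t, 3),
      round(p_two_sided, 3), round(p_one_sided, 3))
```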

3.5.

(a) (i) The size is given by Pr(|p̂ − 0.5| > 0.02), where the probability is computed assuming that p = 0.5.

Pr(|p̂ − 0.5| > 0.02) = 1 − Pr(−0.02 ≤ p̂ − 0.5 ≤ 0.02)
                     = 1 − Pr( −0.02/√(0.5 × 0.5/1055) ≤ (p̂ − 0.5)/√(0.5 × 0.5/1055) ≤ 0.02/√(0.5 × 0.5/1055) )
                     ≈ 1 − Pr(−1.30 ≤ Z ≤ 1.30) = 0.19,

where the final equality uses the central limit theorem approximation.

(ii) The power is given by Pr(|p̂ − 0.5| > 0.02), where the probability is computed assuming that p = 0.53.

Pr(|p̂ − 0.5| > 0.02) = 1 − Pr(−0.02 ≤ p̂ − 0.5 ≤ 0.02)
                     = 1 − Pr( (−0.02 − 0.03)/√(0.53 × 0.47/1055) ≤ (p̂ − 0.53)/√(0.53 × 0.47/1055) ≤ (0.02 − 0.03)/√(0.53 × 0.47/1055) )
                     ≈ 1 − Pr(−3.25 ≤ Z ≤ −0.65) = 0.74,

where the final equality uses the central limit theorem approximation.


(b) (i) t = (0.54 − 0.50) / √(0.54 × (1 − 0.54)/1055) = 2.61, and Pr(|t| > 2.61) = 0.01, so the null is rejected at the 5% level.

(ii) Pr(t > 2.61) = 0.004, so the null is rejected at the 5% level.

(iii) 0.54 ± 1.96 × √(0.54 × 0.46/1055) = 0.54 ± 0.03, or 0.51 to 0.57.

(iv) 0.54 ± 2.58 × √(0.54 × 0.46/1055) = 0.54 ± 0.04, or 0.50 to 0.58.

(v) 0.54 ± 0.67 × √(0.54 × 0.46/1055) = 0.54 ± 0.01, or 0.53 to 0.55.

(c) (i) The probability is 0.95 in any single survey; there are 20 independent surveys, so the probability is 0.95²⁰ = 0.36.

(ii) 95% of the 20 confidence intervals, or 19.

(d) The relevant equation is 1.96 × SE(p̂) ≤ 0.01, or 1.96 × √(p(1 − p)/n) ≤ 0.01. Thus n must be chosen so that n ≥ 1.96² p(1 − p)/0.01², so the answer depends on the value of p. Note that the largest value that p(1 − p) can take on is 0.25 (that is, p = 0.5 makes p(1 − p) as large as possible). Thus if n ≥ 1.96² × 0.25/0.01² = 9604, then the margin of error is less than 0.01 for all possible values of p.
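The size, power, t-statistic, and confidence intervals above can be reproduced numerically; the sketch below assumes SciPy is available:

```python
from math import sqrt
from scipy.stats import norm

n = 1055

# (a) Size and power of the test that rejects when |p_hat - 0.5| > 0.02.
se_null = sqrt(0.5 * 0.5 / n)
size = 2 * (1 - norm.cdf(0.02 / se_null))                       # about 0.19

se_alt = sqrt(0.53 * 0.47 / n)
power = 1 - (norm.cdf((0.02 - 0.03) / se_alt)
             - norm.cdf((-0.02 - 0.03) / se_alt))               # about 0.74

# (b) t-statistic and confidence intervals when p_hat = 0.54.
p_hat = 0.54
se_hat = sqrt(p_hat * (1 - p_hat) / n)
t = (p_hat - 0.50) / se_hat                                     # about 2.61
for z in (1.96, 2.58, 0.67):   # 95%, 99%, and 50% intervals
    print(round(p_hat - z * se_hat, 2), round(p_hat + z * se_hat, 2))

print(round(size, 2), round(power, 2), round(t, 2), round(0.95**20, 2))
```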

3.7.

The null hypothesis is that the survey is a random draw from a population with p = 0.11. The t-statistic is t = (p̂ − 0.11)/SE(p̂), where SE(p̂) = √(p̂(1 − p̂)/n). (An alternative formula for SE(p̂) is √(0.11 × (1 − 0.11)/n), which is valid under the null hypothesis that p = 0.11.) The value of the t-statistic is 2.71, which has a p-value that is less than 0.01. Thus the null hypothesis p = 0.11 (the survey is unbiased) can be rejected at the 1% level.

3.9.

Denote the life of a light bulb from the new process by Y. The mean of Y is μ and the standard deviation of Y is σ_Y = 200 hours. Ȳ is the sample mean with a sample size n = 100. The standard deviation of the sampling distribution of Ȳ is σ_Ȳ = σ_Y/√n = 200/√100 = 20 hours. The hypothesis test is H₀: μ = 2000 vs. H₁: μ > 2000. The manager will accept the alternative hypothesis if Ȳ > 2100 hours.

(a) The size of a test is the probability of erroneously rejecting a null hypothesis when it is valid. The size of the manager's test is

size = Pr(Ȳ > 2100 | μ = 2000) = 1 − Pr(Ȳ ≤ 2100 | μ = 2000)
     = 1 − Pr( (Ȳ − 2000)/20 ≤ (2100 − 2000)/20 | μ = 2000 )
     = 1 − Φ(5) = 1 − 0.999999713 = 2.87 × 10⁻⁷,


where Pr(Ȳ > 2100 | μ = 2000) means the probability that the sample mean is greater than 2100 hours when the new process has a mean of 2000 hours.

(b) The power of a test is the probability of correctly rejecting a null hypothesis when it is invalid. We calculate first the probability of the manager erroneously accepting the null hypothesis when it is invalid:

β = Pr(Ȳ ≤ 2100 | μ = 2150) = Pr( (Ȳ − 2150)/20 ≤ (2100 − 2150)/20 | μ = 2150 )
  = Φ(−2.5) = 1 − Φ(2.5) = 1 − 0.9938 = 0.0062.

The power of the manager's test is 1 − β = 1 − 0.0062 = 0.9938.

(c) For a test with size 5%, the rejection region for the null hypothesis contains those values of the t-statistic exceeding 1.645:

t^act = (Ȳ^act − 2000)/20 > 1.645, so Ȳ^act > 2000 + 1.645 × 20 = 2032.9.

The manager should believe the inventor's claim if the sample mean life of the new product is greater than 2032.9 hours if she wants the size of the test to be 5%.
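A quick numerical check of the size in (a), the power in (b), and the 5% cutoff in (c), assuming SciPy is available:

```python
from scipy.stats import norm

se = 200 / 100**0.5     # standard deviation of Ybar: 200 / sqrt(100) = 20

# (a) Size: Pr(Ybar > 2100) when the true mean is 2000.
size = 1 - norm.cdf((2100 - 2000) / se)
# (b) Power: Pr(Ybar > 2100) when the true mean is 2150.
power = 1 - norm.cdf((2100 - 2150) / se)
# (c) Cutoff for a 5% test: reject when Ybar exceeds 2000 + 1.645 * se.
cutoff = 2000 + 1.645 * se

print(size, round(power, 4), cutoff)   # about 2.9e-07, 0.9938, 2032.9
```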

3.11.

Assume that n is an even number. Then Ỹ is constructed by applying a weight of 1/2 to the n/2 "odd" observations and a weight of 3/2 to the remaining n/2 observations.

E(Ỹ) = (1/n) [ (1/2)E(Y₁) + (3/2)E(Y₂) + ⋯ + (1/2)E(Y_{n−1}) + (3/2)E(Y_n) ]
     = (1/n) [ (1/2)(n/2)μ_Y + (3/2)(n/2)μ_Y ] = μ_Y

var(Ỹ) = (1/n²) [ (1/4)var(Y₁) + (9/4)var(Y₂) + ⋯ + (1/4)var(Y_{n−1}) + (9/4)var(Y_n) ]
       = (1/n²) [ (1/4)(n/2)σ²_Y + (9/4)(n/2)σ²_Y ] = 1.25 σ²_Y / n.

3.13.

(a) Sample size n = 420, sample average Ȳ = 646.2, sample standard deviation s_Y = 19.5. The standard error of Ȳ is SE(Ȳ) = s_Y/√n = 19.5/√420 = 0.9515. The 95% confidence interval for the mean test score in the population is

μ = Ȳ ± 1.96 SE(Ȳ) = 646.2 ± 1.96 × 0.9515 = (644.34, 648.06).

(b) The data are: sample size for small classes n₁ = 238, sample average Ȳ₁ = 657.4, sample standard deviation s₁ = 19.4; sample size for large classes n₂ = 182, sample average Ȳ₂ = 650.0, sample standard deviation s₂ = 17.9. The standard error of Ȳ₁ − Ȳ₂ is SE(Ȳ₁ − Ȳ₂) = √(s₁²/n₁ + s₂²/n₂) = √(19.4²/238 + 17.9²/182) = 1 ...
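A short check of the confidence interval in (a), using only the standard library:

```python
from math import sqrt

n, ybar, s = 420, 646.2, 19.5
se = s / sqrt(n)                                        # about 0.9515
lower, upper = ybar - 1.96 * se, ybar + 1.96 * se
print(round(se, 4), round(lower, 2), round(upper, 2))   # 0.9515, 644.34, 648.06
```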


Similar Free PDFs