Econometrics cheat sheet 2: All the formulas you need

Title: Econometrics cheat sheet 2: All the formulas you need
Author: Mohanad Ben Assaf
Course: Introduction to Econometrics
Institution: University of California Los Angeles

Summary

General formulas used in econometrics, for every need. Use them for exams or for solving homework.


Description

Elasticity

$\varepsilon = \frac{\text{percentage change in } y}{\text{percentage change in } x} = \frac{\Delta y / y}{\Delta x / x} = \frac{\Delta y}{\Delta x}\cdot\frac{x}{y}$

$\varepsilon = \frac{\Delta E(y)/E(y)}{\Delta x / x} = \frac{\Delta E(y)}{\Delta x}\cdot\frac{x}{E(y)} = \beta_2 \cdot \frac{x}{E(y)}$
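A short worked example (illustrative numbers, not from the source): with $\beta_2 = 0.5$, $x = 10$ and $E(y) = 25$,

$\varepsilon = 0.5 \cdot \frac{10}{25} = 0.2,$

so a 1% increase in $x$ is associated with roughly a 0.2% increase in $E(y)$ at that point.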

Least Squares Expressions Useful for Theory

$b_2 = \beta_2 + \sum w_i e_i, \qquad w_i = \frac{x_i - \bar{x}}{\sum (x_i - \bar{x})^2}$

$\sum w_i = 0, \qquad \sum w_i x_i = 1, \qquad \sum w_i^2 = \frac{1}{\sum (x_i - \bar{x})^2}$
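A minimal numpy sketch (simulated data and parameter values are illustrative, not from the source) that computes the weights $w_i$ and checks the identities above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
x = rng.uniform(1, 10, N)                  # regressor (simulated)
e = rng.normal(0, 2, N)                    # errors (simulated)
beta1, beta2 = 3.0, 1.5                    # assumed true parameters
y = beta1 + beta2 * x + e

w = (x - x.mean()) / np.sum((x - x.mean())**2)   # the weights w_i
b2 = np.sum(w * y)                               # equals the OLS slope

print(np.isclose(w.sum(), 0))                    # sum w_i = 0
print(np.isclose(np.sum(w * x), 1))              # sum w_i x_i = 1
print(np.isclose(np.sum(w**2), 1 / np.sum((x - x.mean())**2)))
print(np.isclose(b2, beta2 + np.sum(w * e)))     # b2 = beta2 + sum w_i e_i
```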

Properties of the Least Squares Estimators

$\operatorname{var}(b_1) = \sigma^2 \left[ \frac{\sum x_i^2}{N \sum (x_i - \bar{x})^2} \right], \qquad \operatorname{var}(b_2) = \frac{\sigma^2}{\sum (x_i - \bar{x})^2}, \qquad \operatorname{cov}(b_1, b_2) = \sigma^2 \left[ \frac{-\bar{x}}{\sum (x_i - \bar{x})^2} \right]$

Gauss-Markov Theorem: Under the assumptions SR1-SR5 of the linear regression model, the estimators b1 and b2 have the smallest variance of all linear and unbiased estimators of β1 and β2. They are the Best Linear Unbiased Estimators (BLUE) of β1 and β2.

Rejection rule for a two-tail test: If the value of the test statistic falls in the rejection region, either tail of the t-distribution, then we reject the null hypothesis and accept the alternative.

Type I error: The null hypothesis is true and we decide to reject it.
Type II error: The null hypothesis is false and we decide not to reject it.

p-value rejection rule: When the p-value of a hypothesis test is smaller than the chosen value of α, the test procedure leads to rejection of the null hypothesis.

Prediction

$y_0 = \beta_1 + \beta_2 x_0 + e_0, \qquad \hat{y}_0 = b_1 + b_2 x_0, \qquad f = \hat{y}_0 - y_0$

$\widehat{\operatorname{var}}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right], \qquad \operatorname{se}(f) = \sqrt{\widehat{\operatorname{var}}(f)}$

A (1  a)  100% confidence interval, or prediction interval, for y0 ^ y0  tc seð f Þ Goodness of Fit Sðyi  yÞ2 ¼ Sðy^i  yÞ2 þ S^ei2 SST ¼ SSR þ SSE SSE SSR ¼ 1 R2 ¼ ¼ ðcorrðy; ^yÞÞ2 SST SST

If we make the normality assumption, assumption SR6, about the error term, then the least squares estimators are normally distributed:

$b_1 \sim N\!\left(\beta_1,\; \frac{\sigma^2 \sum x_i^2}{N \sum (x_i - \bar{x})^2}\right), \qquad b_2 \sim N\!\left(\beta_2,\; \frac{\sigma^2}{\sum (x_i - \bar{x})^2}\right)$

Estimated Error Variance

$\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - 2}$

Log-Linear Model

$\ln(y) = \beta_1 + \beta_2 x + e, \qquad \widehat{\ln(y)} = b_1 + b_2 x$

$100 \times b_2 \approx$ % change in y given a one-unit change in x.

Natural predictor: $\hat{y}_n = \exp(b_1 + b_2 x)$

Estimator Standard Errors

$\operatorname{se}(b_1) = \sqrt{\widehat{\operatorname{var}}(b_1)}, \qquad \operatorname{se}(b_2) = \sqrt{\widehat{\operatorname{var}}(b_2)}$

t-distribution

If assumptions SR1-SR6 of the simple linear regression model hold, then

$t = \frac{b_k - \beta_k}{\operatorname{se}(b_k)} \sim t_{(N-2)}, \qquad k = 1, 2$

Interval Estimates

$P[b_2 - t_c \operatorname{se}(b_2) \le \beta_2 \le b_2 + t_c \operatorname{se}(b_2)] = 1 - \alpha$

Hypothesis Testing

Components of Hypothesis Tests
1. A null hypothesis, H0
2. An alternative hypothesis, H1
3. A test statistic
4. A rejection region
5. A conclusion

If the null hypothesis $H_0: \beta_2 = c$ is true, then

$t = \frac{b_2 - c}{\operatorname{se}(b_2)} \sim t_{(N-2)}$

Corrected predictor: $\hat{y}_c = \exp(b_1 + b_2 x)\exp(\hat{\sigma}^2 / 2)$

Prediction interval:

$\left[ \exp\!\left(\widehat{\ln(y)} - t_c \operatorname{se}(f)\right),\; \exp\!\left(\widehat{\ln(y)} + t_c \operatorname{se}(f)\right) \right]$

Generalized goodness-of-fit measure: $R_g^2 = (\operatorname{corr}(y, \hat{y}_n))^2$
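A sketch of the log-linear predictors and $R_g^2$ (the data-generating process and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 60
x = rng.uniform(1, 5, N)
y = np.exp(1.0 + 0.4 * x + rng.normal(0, 0.5, N))   # log-linear DGP (simulated)

ln_y = np.log(y)
b2 = np.sum((x - x.mean()) * (ln_y - ln_y.mean())) / np.sum((x - x.mean())**2)
b1 = ln_y.mean() - b2 * x.mean()
sigma2_hat = np.sum((ln_y - b1 - b2 * x)**2) / (N - 2)

y_n = np.exp(b1 + b2 * x)                    # natural predictor
y_c = y_n * np.exp(sigma2_hat / 2)           # corrected predictor
R2_g = np.corrcoef(y, y_n)[0, 1]**2          # generalized R^2
print(R2_g)
```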

Assumptions of the Multiple Regression Model

MR1: $y_i = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} + e_i$
MR2: $E(y_i) = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} \;\Leftrightarrow\; E(e_i) = 0$
MR3: $\operatorname{var}(y_i) = \operatorname{var}(e_i) = \sigma^2$
MR4: $\operatorname{cov}(y_i, y_j) = \operatorname{cov}(e_i, e_j) = 0$
MR5: The values of $x_{ik}$ are not random and are not exact linear functions of the other explanatory variables.
MR6: $y_i \sim N(\beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK},\; \sigma^2) \;\Leftrightarrow\; e_i \sim N(0, \sigma^2)$

Least Squares Estimates in the MR Model

Least squares estimates $b_1, b_2, \ldots, b_K$ minimize

$S(b_1, b_2, \ldots, b_K) = \sum (y_i - b_1 - b_2 x_{i2} - \cdots - b_K x_{iK})^2$

Estimated Error Variance and Estimator Standard Errors

$\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - K}, \qquad \operatorname{se}(b_k) = \sqrt{\widehat{\operatorname{var}}(b_k)}$
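A minimal matrix-form sketch of these estimates (`ols` is a hypothetical helper name; X is assumed to include a column of ones):

```python
import numpy as np

def ols(X, y):
    """OLS via the normal equations; returns estimates, standard errors,
    and sigma^2-hat. A sketch of the formulas above, not a production routine."""
    N, K = X.shape
    b = np.linalg.solve(X.T @ X, X.T @ y)        # minimizes S(b1,...,bK)
    ehat = y - X @ b
    sigma2_hat = ehat @ ehat / (N - K)           # sum(e_i^2) / (N - K)
    cov_b = sigma2_hat * np.linalg.inv(X.T @ X)  # estimated covariance matrix
    se = np.sqrt(np.diag(cov_b))                 # se(b_k)
    return b, se, sigma2_hat
```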

Hypothesis Tests and Interval Estimates for Single Parameters

Use the t-distribution: $t = \frac{b_k - \beta_k}{\operatorname{se}(b_k)} \sim t_{(N-K)}$

t-test for More than One Parameter

$H_0: \beta_2 + c\beta_3 = a$. When $H_0$ is true,

$t = \frac{b_2 + c b_3 - a}{\operatorname{se}(b_2 + c b_3)} \sim t_{(N-K)}$

$\operatorname{se}(b_2 + c b_3) = \sqrt{\widehat{\operatorname{var}}(b_2) + c^2\, \widehat{\operatorname{var}}(b_3) + 2c\, \widehat{\operatorname{cov}}(b_2, b_3)}$
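A sketch of the standard-error formula for the linear combination (`se_linear_combo` and the index defaults are hypothetical; `cov_b` is assumed to be the estimated coefficient covariance matrix, e.g. from the `ols()` sketch above):

```python
import numpy as np

def se_linear_combo(cov_b, c, i=1, j=2):
    """se(b_i + c*b_j) from an estimated covariance matrix of the OLS
    estimates; i, j are 0-based positions of b2 and b3 (illustrative)."""
    var_combo = cov_b[i, i] + c**2 * cov_b[j, j] + 2 * c * cov_b[i, j]
    return np.sqrt(var_combo)
```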



Joint F-tests

To test J joint hypotheses,

$F = \frac{(SSE_R - SSE_U)/J}{SSE_U/(N-K)}$

When $H_0$ is true, $F \sim F_{(J,\, N-K)}$.

To test the overall significance of the model, the null and alternative hypotheses and F statistic are

$H_0: \beta_2 = 0,\ \beta_3 = 0,\ \ldots,\ \beta_K = 0$
$H_1:$ at least one of the $\beta_k$ is nonzero

$F = \frac{(SST - SSE)/(K-1)}{SSE/(N-K)}$

RESET: A Specification Test

$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + e_i, \qquad \hat{y}_i = b_1 + b_2 x_{i2} + b_3 x_{i3}$

$H_0: \gamma_1 = 0$ in $y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + \gamma_1 \hat{y}_i^2 + e_i$
$H_0: \gamma_1 = \gamma_2 = 0$ in $y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + \gamma_1 \hat{y}_i^2 + \gamma_2 \hat{y}_i^3 + e_i$

Model Selection

$AIC = \ln(SSE/N) + 2K/N$
$SC = \ln(SSE/N) + K \ln(N)/N$

Collinearity and Omitted Variables

$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + e_i$

$\operatorname{var}(b_2) = \frac{\sigma^2}{(1 - r_{23}^2) \sum (x_{i2} - \bar{x}_2)^2}$

When $x_3$ is omitted, $\operatorname{bias}(b_2) = E(b_2) - \beta_2 = \beta_3\, \frac{\widehat{\operatorname{cov}}(x_2, x_3)}{\widehat{\operatorname{var}}(x_2)}$

Heteroskedasticity

$\operatorname{var}(y_i) = \operatorname{var}(e_i) = \sigma_i^2$

General variance function:

$\sigma_i^2 = \exp(\alpha_1 + \alpha_2 z_{i2} + \cdots + \alpha_S z_{iS})$

Breusch-Pagan and White tests for $H_0: \alpha_2 = \alpha_3 = \cdots = \alpha_S = 0$. When $H_0$ is true,

$\chi^2 = N \times R^2 \sim \chi^2_{(S-1)}$

Goldfeld-Quandt test for $H_0: \sigma_M^2 = \sigma_R^2$ versus $H_1: \sigma_M^2 \neq \sigma_R^2$. When $H_0$ is true,

$F = \hat{\sigma}_M^2 / \hat{\sigma}_R^2 \sim F_{(N_M - K_M,\, N_R - K_R)}$

Transformed model for $\operatorname{var}(e_i) = \sigma_i^2 = \sigma^2 x_i$:

$y_i/\sqrt{x_i} = \beta_1 (1/\sqrt{x_i}) + \beta_2 (x_i/\sqrt{x_i}) + e_i/\sqrt{x_i}$

Estimating the variance function:

$\ln(\hat{e}_i^2) = \ln(\sigma_i^2) + v_i = \alpha_1 + \alpha_2 z_{i2} + \cdots + \alpha_S z_{iS} + v_i$

Grouped data:

$\operatorname{var}(e_i) = \sigma_i^2 = \begin{cases} \sigma_M^2 & i = 1, 2, \ldots, N_M \\ \sigma_R^2 & i = 1, 2, \ldots, N_R \end{cases}$

Transformed model for feasible generalized least squares:

$y_i/\hat{\sigma}_i = \beta_1 (1/\hat{\sigma}_i) + \beta_2 (x_i/\hat{\sigma}_i) + e_i/\hat{\sigma}_i$
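A minimal sketch of the Breusch-Pagan $N \times R^2$ statistic (simulated heteroskedastic data; the single variance-function variable z = x is an illustrative assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 200
x = rng.uniform(1, 10, N)
e = rng.normal(0, 0.5 * x)            # heteroskedastic errors (illustrative DGP)
y = 1.0 + 2.0 * x + e

# Step 1: OLS residuals from the original model
X = np.column_stack([np.ones(N), x])
b = np.linalg.solve(X.T @ X, X.T @ y)
ehat = y - X @ b

# Step 2: regress ehat^2 on the variance-function variables (here z = x)
Z = np.column_stack([np.ones(N), x])
a = np.linalg.solve(Z.T @ Z, Z.T @ ehat**2)
fitted = Z @ a
R2 = 1 - np.sum((ehat**2 - fitted)**2) / np.sum((ehat**2 - np.mean(ehat**2))**2)

# Step 3: chi-square test with S - 1 = 1 degree of freedom here
chi2_stat = N * R2
p_value = 1 - stats.chi2.cdf(chi2_stat, df=1)
print(chi2_stat, p_value)
```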

Regression with Stationary Time Series Variables

Finite distributed lag model:

$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \cdots + \beta_q x_{t-q} + v_t$

Correlogram:

$r_k = \frac{\sum (y_t - \bar{y})(y_{t-k} - \bar{y})}{\sum (y_t - \bar{y})^2}$

For $H_0: \rho_k = 0$, $z = \sqrt{T}\, r_k \sim N(0, 1)$.

AR(1) error in $y_t = \beta_1 + \beta_2 x_t + e_t$:

$e_t = \rho e_{t-1} + v_t$

LM test: estimate $y_t = \beta_1 + \beta_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$ and test $H_0: \rho = 0$ with a t-test, or estimate $\hat{e}_t = \gamma_1 + \gamma_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$ and test using $LM = T \times R^2$.

Nonlinear least squares estimation:

$y_t = \beta_1 (1 - \rho) + \beta_2 x_t + \rho y_{t-1} - \beta_2 \rho x_{t-1} + v_t$
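A small numpy sketch of the correlogram formulas (the function name `correlogram` and the `max_lag` parameter are my own, illustrative choices):

```python
import numpy as np

def correlogram(y, max_lag=10):
    """Sample autocorrelations r_k and the z = sqrt(T)*r_k statistics.

    A sketch of the cheat-sheet formulas; for serious work a library
    routine (e.g. statsmodels' acf) would normally be used instead.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    denom = np.sum(d**2)
    r = np.array([np.sum(d[k:] * d[:T - k]) / denom
                  for k in range(1, max_lag + 1)])
    z = np.sqrt(T) * r        # approx N(0,1) under H0: rho_k = 0
    return r, z
```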

ARDL(p, q) model:

$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \cdots + \delta_q x_{t-q} + \theta_1 y_{t-1} + \cdots + \theta_p y_{t-p} + v_t$

AR(p) forecasting model:

$y_t = \delta + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \cdots + \theta_p y_{t-p} + v_t$

Exponential smoothing: $\hat{y}_t = \alpha y_{t-1} + (1 - \alpha)\hat{y}_{t-1}$

Multiplier analysis:

$\delta_0 + \delta_1 L + \delta_2 L^2 + \cdots + \delta_q L^q = (1 - \theta_1 L - \theta_2 L^2 - \cdots - \theta_p L^p)(\beta_0 + \beta_1 L + \beta_2 L^2 + \cdots)$

Unit Roots and Cointegration

Unit root test for stationarity, null hypothesis $H_0: \gamma = 0$:

Dickey-Fuller Test 1 (no constant and no trend): $\Delta y_t = \gamma y_{t-1} + v_t$
Dickey-Fuller Test 2 (with constant but no trend): $\Delta y_t = \alpha + \gamma y_{t-1} + v_t$
Dickey-Fuller Test 3 (with constant and with trend): $\Delta y_t = \alpha + \gamma y_{t-1} + \lambda t + v_t$

Augmented Dickey-Fuller tests:

$\Delta y_t = \alpha + \gamma y_{t-1} + \sum_{s=1}^{m} a_s \Delta y_{t-s} + v_t$

Test for cointegration: $\Delta \hat{e}_t = \gamma \hat{e}_{t-1} + v_t$

Random walk: $y_t = y_{t-1} + v_t$
Random walk with drift: $y_t = \alpha + y_{t-1} + v_t$
Random walk model with drift and time trend: $y_t = \alpha + \delta t + y_{t-1} + v_t$

Panel Data

Pooled least squares regression:

$y_{it} = \beta_1 + \beta_2 x_{2it} + \beta_3 x_{3it} + e_{it}$

Cluster-robust standard errors: $\operatorname{cov}(e_{it}, e_{is}) = \psi_{ts}$

Fixed effects model ($\beta_{1i}$ not random):

$y_{it} = \beta_{1i} + \beta_2 x_{2it} + \beta_3 x_{3it} + e_{it}$

$y_{it} - \bar{y}_i = \beta_2 (x_{2it} - \bar{x}_{2i}) + \beta_3 (x_{3it} - \bar{x}_{3i}) + (e_{it} - \bar{e}_i)$
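A minimal sketch of the within (demeaning) transformation used by the fixed effects model (`within_transform` and its arguments are hypothetical names):

```python
import numpy as np

def within_transform(values, ids):
    """Demean a variable by individual: v_it - v-bar_i.

    values: 1-D array of observations; ids: same-length array of
    entity labels. OLS on the demeaned y and x's then gives the
    fixed effects estimates.
    """
    values = np.asarray(values, dtype=float)
    ids = np.asarray(ids)
    out = np.empty_like(values)
    for i in np.unique(ids):
        mask = ids == i
        out[mask] = values[mask] - values[mask].mean()
    return out
```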

Random effects model:

$y_{it} = \beta_{1i} + \beta_2 x_{2it} + \beta_3 x_{3it} + e_{it}, \qquad \beta_{1i} = \bar{\beta}_1 + u_i \ \text{random}$

Transformed model:

$y_{it} - \alpha \bar{y}_i = \bar{\beta}_1 (1 - \alpha) + \beta_2 (x_{2it} - \alpha \bar{x}_{2i}) + \beta_3 (x_{3it} - \alpha \bar{x}_{3i}) + v_{it}$

$\alpha = 1 - \frac{\sigma_e}{\sqrt{T \sigma_u^2 + \sigma_e^2}}$

Hausman test

$t = \frac{b_{FE,k} - b_{RE,k}}{\left[ \widehat{\operatorname{var}}(b_{FE,k}) - \widehat{\operatorname{var}}(b_{RE,k}) \right]^{1/2}}$
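A sketch of this contrast statistic (`hausman_t` is a hypothetical name; the normal p-value is a large-sample assumption, and var_fe - var_re must be positive for the statistic to be well defined):

```python
import numpy as np
from scipy import stats

def hausman_t(b_fe, b_re, var_fe, var_re):
    """t statistic contrasting fixed and random effects estimates of
    one coefficient, per the formula above. Illustrative sketch."""
    t_stat = (b_fe - b_re) / np.sqrt(var_fe - var_re)
    p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))   # large-sample normal
    return t_stat, p_value
```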

