
ECON 4400 Midterm review

Things you shouldn’t worry about for the exam

 Stats review, including calculating expectations.
 Most formulas, EXCEPT the ones for calculating t statistics. Also, while you may not need to fully memorize it, you should have an intuitive sense of what goes into the standard error of a regression coefficient: specifically, how variation in the regressor of interest and correlation with other regressors affect the standard error (see the sketch just after this list). The same goes for OVB.
 The technical details of the various distributions we covered (standard normal, t, F, chi-squared), but do know which distribution to use for which type of hypothesis test.
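To build that standard-error intuition, here is a minimal Python/numpy sketch (not from the course; the model, coefficients, and sample sizes are all illustrative). The Monte Carlo spread of the OLS slope on X₁ shrinks when X₁ varies more, and grows when X₁ is correlated with another regressor:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_beta1(n=500, sd_x1=1.0, rho=0.0, reps=2000):
    """Monte Carlo standard deviation of the OLS slope on X1
    in Y = 1 + 2*X1 + 1*X2 + e, where corr(X1, X2) = rho (illustrative DGP)."""
    est = []
    for _ in range(reps):
        x1 = rng.normal(0, sd_x1, n)
        # build X2 with the requested correlation to X1
        x2 = rho * (x1 / sd_x1) + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)
        y = 1 + 2 * x1 + 1 * x2 + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), x1, x2])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        est.append(beta[1])
    return np.std(est)

print(se_beta1(sd_x1=1.0, rho=0.0))  # baseline
print(se_beta1(sd_x1=2.0, rho=0.0))  # more variation in X1 -> smaller SE
print(se_beta1(sd_x1=1.0, rho=0.9))  # X1 correlated with X2 -> larger SE
```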

Things you should worry about (not necessarily exhaustive). 



 Hypothesis testing:
o Calculating t statistics: t = (β̂ − β₀) / SE(β̂).
o Determining whether or not to reject the null given a critical value. For example, testing H₀: β = 5 with t = (β̂ − 5)/SE(β̂): since t = 1.24 > −1.699 (the critical value), we fail to reject the null.
o Determining at what levels we have significance given a p-value (here, the 0.05 significance level corresponds to the critical value −1.699).
o Rewriting a model to conduct hypotheses involving linear combinations of parameters, e.g. H₀: β₁ − β₂ = 0 vs. H₁: β₁ − β₂ ≠ 0, where β₁ − β₂ is the average difference between treatment and control groups.
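A worked version of those mechanics in Python with scipy. The coefficient, standard error, and 29 degrees of freedom are assumptions chosen so the numbers reproduce the t = 1.24 vs. −1.699 example above (t with 29 df has a one-sided 5% critical value of about −1.699):

```python
from scipy import stats

# Hypothetical numbers, chosen to match the example above:
beta_hat, beta_null, se = 5.62, 5.0, 0.50
df = 29  # residual degrees of freedom (assumed)

t_stat = (beta_hat - beta_null) / se   # t = (beta_hat - beta_0) / SE(beta_hat)
crit = stats.t.ppf(0.05, df)           # one-sided 5% critical value, ~ -1.699
p_one_sided = stats.t.cdf(t_stat, df)  # p-value for the one-sided H1: beta < 5

print(round(t_stat, 2), round(crit, 3), round(p_one_sided, 3))
if t_stat > crit:
    print("t = 1.24 > -1.699: fail to reject H0 at the 5% level")
else:
    print("reject H0 at the 5% level")
```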



 Omitted variable bias (OVB):
o When do we have it? When an omitted variable independently affects both the treatment and outcome variables.
o Given a “short” estimate and a “long” estimate, calculate the OVB. The short-regression slope satisfies

β̂_s1 = β̂_1 + π̂_2 γ̂

where γ̂ is the coefficient on the omitted variable X₂ in the long regression, and π̂_2 comes from regressing X₂ on X₁:

X₂ = b + π₂ X₁

o Given the correlation between the omitted variable and the treatment, and between the omitted variable and the outcome, what will be the sign of the OVB? The sign of the bias is the product of the two signs, e.g. (+)(+) = +:

                      γ̂ > 0    γ̂ < 0
Corr(X₁, X₂) > 0        +        −
Corr(X₁, X₂) < 0        −        +
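A small numpy sketch (the data-generating process is illustrative, not from the course) verifying both the OVB identity and the sign rule — here π₂ > 0 and γ > 0, so the bias comes out positive:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Illustrative DGP: X2 is correlated with X1 and affects Y
x1 = rng.normal(0, 1, n)
x2 = 0.6 * x1 + rng.normal(0, 1, n)                 # pi_2 = 0.6 > 0
y = 1 + 2.0 * x1 + 1.5 * x2 + rng.normal(0, 1, n)   # gamma = 1.5 > 0

def ols(regressors, y):
    """OLS with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_long = ols([x1, x2], y)   # [alpha, beta1_hat, gamma_hat]
b_short = ols([x1], y)      # [alpha_s, beta_s1]
pi2 = ols([x1], x2)[1]      # slope from regressing X2 on X1

# identity: beta_s1 = beta1_hat + pi2_hat * gamma_hat (holds exactly in-sample)
print(b_short[1], b_long[1] + pi2 * b_long[2])
print("OVB =", b_short[1] - b_long[1])  # positive: (+)(+) = +
```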



 Relationship between consistency and unbiasedness for OLS:
1. Consistency means that, as the sample size increases, the sampling distribution of the estimator becomes increasingly concentrated at the true parameter value. Under the following conditions, the OLS estimator β̂_j is a consistent estimator of β_j:
   1. Linearity: the true population model is Y = α + β₁X₁ + β₂X₂ + … + β_k X_k + e.
   2. Zero mean and zero correlation: E[e] = 0 and Cov(X_j, e) = 0 for j = 1, 2, …, k.
   3. No perfect collinearity.
   4. {(Y_i, X_1i, X_2i, …, X_ki), i = 1, …, n} is a purely random sample from the population.
2. Unbiasedness means E[β̂_j] = β_j; that is, the mean of the sampling distribution of the estimator is equal to the true parameter value.
3. Unbiasedness is a statement about the expected value of the sampling distribution of the estimator; consistency is a statement about “where the sampling distribution of the estimator is going” as the sample size increases.
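As a sanity check, a minimal Python simulation (illustrative DGP, not from the course) of what consistency looks like: the sampling distribution of the OLS slope concentrates at the true value β = 2 as n grows, even though the errors are not normal:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 2.0  # true slope (assumed for the simulation)

def slope_draws(n, reps=2000):
    """One OLS slope estimate per simulated sample of size n."""
    out = np.empty(reps)
    for r in range(reps):
        x = rng.normal(0, 1, n)
        e = rng.uniform(-1, 1, n)  # non-normal, mean zero, uncorrelated with x
        y = 1 + beta * x + e
        out[r] = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return out

for n in (25, 100, 400, 1600):
    d = slope_draws(n)
    # mean stays near 2; the spread shrinks toward 0 as n grows
    print(n, d.mean().round(3), d.std().round(3))
```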



 Homoskedasticity/heteroskedasticity:
o Given a conditional variance, determine which we have.
1. Homoskedasticity says that the variance of the error term (the unobserved components) conditional on X is a constant: Var(e | X) = σ².
2. Under heteroskedasticity, the conditional variance of the error term is a function of X; in other words, the residuals will be more or less spread out around the regression line depending on the value of X.



o If we have heteroskedasticity, how does that affect standard errors and hypothesis testing? It causes the usual ordinary least squares estimates of the variance (and thus the standard errors) of the coefficients to be biased, possibly above or below the true population variance, so hypothesis tests built from them are unreliable. Under heteroskedasticity the conditional variance is a function of X, and thus (potentially) differs for each observation.
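A numpy sketch of this point (the DGP is illustrative, and the robust formula is the standard White sandwich estimator, not something from the course notes): with errors whose variance depends on X, the usual homoskedasticity-only standard error and the heteroskedasticity-robust one disagree.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(0, 1, n)
e = rng.normal(0, 1, n) * (0.5 + np.abs(x))  # Var(e | x) depends on x
y = 1 + 2 * x + e

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# usual (homoskedasticity-only) standard error of the slope
s2 = resid @ resid / (n - 2)
se_usual = np.sqrt(s2 * XtX_inv[1, 1])

# White heteroskedasticity-robust standard error
meat = X.T @ (X * (resid**2)[:, None])   # sum of resid_i^2 * x_i x_i'
V = XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(V[1, 1])

print(se_usual, se_robust)  # the usual SE is biased here; the robust SE is valid
```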

 R-squared:
o What is it? A measure of the percentage of variation in Y that is explained by the regression; equivalently, a measure of goodness of fit of the regression line.
o What happens when we add variables to the regression? R² can only increase or stay the same, even if the added variable is irrelevant.

R² = SSE/SST = 1 − SSR/SST

How do we get OLS estimates? (Minimize the Sum of Squared Residuals)

SSR = ∑_{i=1}^{n} ê_i²
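A short numpy sketch with an illustrative DGP, tying together the OLS estimates (the minimizers of the sum of squared residuals), the sums of squares, and the two R² formulas. It follows the notation above, where SSE is the explained sum of squares and SSR the residual sum of squares (textbook conventions vary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(0, 1, n)
y = 1 + 2 * x + rng.normal(0, 1, n)

# closed-form simple OLS: the (alpha, beta) that minimize the SSR
beta_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
alpha_hat = y.mean() - beta_hat * x.mean()

y_fit = alpha_hat + beta_hat * x   # fitted values
resid = y - y_fit                  # residuals

SST = np.sum((y - y.mean())**2)       # total sum of squares
SSE = np.sum((y_fit - y.mean())**2)   # explained sum of squares
SSR = np.sum(resid**2)                # sum of squared residuals

print(SSE / SST, 1 - SSR / SST)     # the two R^2 formulas agree
print(np.allclose(SST, SSE + SSR))  # True: SST = SSE + SSR
```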



 Residuals and fitted values:
o The fitted value of the simple regression is defined as: Ŷ_i = α̂ + β̂X_i
o The residual of the simple regression is: ê_i = Y_i − Ŷ_i = Y_i − α̂ − β̂X_i
o From the formula for a residual we have: Y_i = Ŷ_i + ê_i

 Correlation and the sign of a regression coefficient:
o The OLS slope can be decomposed as

β̂ = β + [∑_{i=1}^{n} (X_i − X̄) e_i] / [∑_{i=1}^{n} (X_i − X̄)²]

o This decomposition provides a metric for how “good” an estimator is: the second term is the sampling error in β̂.
o Since β̂ = Cov(X, Y)/Var(X) and Var(X) > 0, the sign of β̂ is the sign of the sample correlation between X and Y.
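A quick numpy check of that sign claim, with illustrative numbers (here the true relationship is negative, so both the slope and the correlation come out negative):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 500)
y = 3 - 1.5 * x + rng.normal(0, 1, 500)  # negatively related by construction

# slope = Cov(x, y) / Var(x), so it must share a sign with corr(x, y)
beta_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
corr = np.corrcoef(x, y)[0, 1]
print(beta_hat, corr, np.sign(beta_hat) == np.sign(corr))  # ..., ..., True
```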

 Estimating coefficients for treatment and control variables: is there a difference?
 Scatterplots:
o Homoskedasticity/heteroskedasticity

o Non-linear scatterplots and possible violations of the linearity conditions
o OLS is solved by fitting a straight line through the data points on the scatterplot
 Asymptotic normality:
o If we don’t have normally distributed errors, we can still conduct hypothesis testing using the t and F distributions as approximations with large enough sample sizes, assuming the 5 G-M conditions are satisfied.
 This does not hold for samples that are “too small” or when the G-M assumptions are violated.
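A minimal simulation of the asymptotic-normality point, assuming a skewed (non-normal) error distribution: the rejection rate of a nominal 5% test of the true null should be close to 0.05 only when n is large.

```python
import numpy as np

rng = np.random.default_rng(6)

def t_stats(n, reps=5000):
    """t statistic for H0: beta = 2 (the truth) with skewed, non-normal errors."""
    out = np.empty(reps)
    for r in range(reps):
        x = rng.normal(0, 1, n)
        e = rng.exponential(1, n) - 1  # skewed errors, mean zero
        y = 1 + 2 * x + e
        X = np.column_stack([np.ones(n), x])
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y
        s2 = np.sum((y - X @ b)**2) / (n - 2)
        out[r] = (b[1] - 2) / np.sqrt(s2 * XtX_inv[1, 1])
    return out

for n in (10, 1000):
    share = np.mean(np.abs(t_stats(n)) > 1.96)
    print(n, share)  # close to 0.05 only when n is large
```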

