
Chapter 7  The Method of Moments

So far the only method we have studied for constructing estimators is the Maximum Likelihood Method. In this chapter we consider another analytical method, the Method of Moments. The Method of Moments is an old method (which goes back essentially to Gauss, about 200 years ago). It often gives the same estimator as Maximum Likelihood. Where the two methods differ, the Method of Moments usually gives a slightly less satisfactory estimator. Nevertheless it often gives more tractable calculations than Maximum Likelihood and can occasionally be useful in cases where MLEs cannot readily be found analytically (see the Gamma example below). (However, there are some probability models where the r.v. of interest has no finite moments at all, e.g. the Cauchy distribution, and then the Method of Moments cannot be used.)

The idea is very simple. In many cases the parameters we are trying to estimate are either the moments of the r.v. in question, or they are simple functions of those moments.

Examples

• Let $X \sim N(\mu, \sigma^2)$. In this case the mean $\mu = E(X) = \mu_1'(X)$ (the first ordinary moment) and $\sigma^2 = V(X) = \mu_2(X)$ (the second central moment).

• Let $X \sim \text{Negexp}(\lambda)$ with pdf $f_X(x) = \lambda e^{-\lambda x}$, $x > 0$. In this case $\mu_1'(X) = E(X) = \frac{1}{\lambda}$.

• Let $X \sim \text{Pois}(\lambda)$. In this case $\lambda = E(X) = \mu_1'(X)$.

If this is so, we write down the relations expressing the lowest possible order moments as functions of the parameters, and use these to express the parameters as functions of these low-order moments. We then use as the estimator(s) for our parameter(s) the corresponding function(s) of the Sample Moments rather than the theoretical ones.

Definition: If $X_1, X_2, \ldots, X_n$ is a simple random sample of a r.v. $X$, then the $r$-th ordinary Sample Moment of $X$ is the statistic
$$M_r'(X) = \frac{\sum_{i=1}^n X_i^r}{n}.$$
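In practice, sample moments are trivial to compute. As a minimal Python sketch (our illustration, not part of the original notes; the function name sample_moment is our own):

```python
import numpy as np

def sample_moment(x, r):
    """r-th ordinary sample moment: M_r'(X) = (1/n) * sum(x_i ** r)."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** r)

# The first sample moment is just the sample mean.
rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=1000)
print(sample_moment(x, 1))  # close to E(X) = 5
print(sample_moment(x, 2))  # close to E(X^2) = sigma^2 + mu^2 = 29
```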

Example: Find the Method of Moments Estimators (MMEs) for the mean and variance of a Normal distribution.

Solution: For the mean $\mu$, since $\mu = E(X) = \mu_1'(X)$, the MME for $\mu$ is
$$\hat{\mu} = M_1'(X) = \bar{X}.$$
Clearly the MME here coincides with the Maximum Likelihood Estimator. For the variance $\sigma^2$, since $\sigma^2 = \mu_2'(X) - \mu_1'(X)^2$, the MME is
$$\hat{\sigma}^2 = M_2'(X) - M_1'(X)^2 = \frac{\sum_{i=1}^n X_i^2}{n} - \bar{X}^2 = \frac{\sum_{i=1}^n (X_i - \bar{X})^2}{n}$$
by some earlier working. Again this coincides with the Maximum Likelihood Estimator, which, as shown earlier, is biased.

Example: Find the Method of Moments Estimator for the variance of $X$, where $X \sim \text{Pois}(\lambda)$.

Solution: By work done earlier on the Poisson distribution, $E(X) = V(X) = \lambda$. So we wish to estimate $\lambda$. Recalling the rule which says that to estimate a parameter by the Method of Moments we should use the moment(s) of lowest possible order to derive our estimator, we use the relation $E(X) = \lambda$. Hence the MME for $\lambda$ is
$$\hat{\Lambda} = M_1'(X) = \bar{X}.$$
This estimator of $\lambda$ again coincides with the Maximum Likelihood Estimator, and as we have already seen it is unbiased.
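A quick numerical check (ours, not from the notes) confirms that the Normal MME for the variance is the n-divisor (biased) sample variance derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=3.0, size=500)

# MME for the mean: the first sample moment, i.e. the sample mean.
mu_mme = np.mean(x)

# MME for the variance: M_2'(X) - M_1'(X)^2, the n-divisor sample variance.
sigma2_mme = np.mean(x ** 2) - np.mean(x) ** 2

print(mu_mme, sigma2_mme)
print(np.var(x))  # np.var divides by n by default, so it matches sigma2_mme
```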


Example: Let $X \sim \text{Un}(0, \theta)$ have a continuous uniform distribution on the interval $0$ to $\theta$. Find the Method of Moments Estimator for $\theta$.

Solution:
$$E(X) = \int_0^\theta x \cdot \frac{1}{\theta}\, dx = \left[\frac{x^2}{2\theta}\right]_0^\theta = \frac{\theta}{2},$$
so $\theta = 2E(X)$. Hence the MME for $\theta$ is
$$\tilde{\Theta} = 2M_1'(X) = 2\bar{X}.$$
This is quite different from the Maximum Likelihood Estimator
$$\hat{\Theta} = \max\{X_i\} = X_{(n)}.$$
The MME is unbiased here, but is far less efficient than the Maximum Likelihood Estimator (which can be adjusted to be made unbiased; see the example in the previous chapter comparing these two estimators). Nor is it based on a sufficient statistic, unlike the Maximum Likelihood Estimator. A small simulation comparing the two estimators is sketched below.
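A minimal simulation sketch (our illustration; the values θ = 10, n = 50 and the replication count are arbitrary choices) shows the efficiency gap:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 10.0, 50, 10_000

samples = rng.uniform(0.0, theta, size=(reps, n))
mme = 2 * samples.mean(axis=1)   # unbiased, but relatively high variance
mle = samples.max(axis=1)        # biased low; (n + 1)/n * max is unbiased

print("MME 2*Xbar : mean %.3f  sd %.3f" % (mme.mean(), mme.std()))
print("MLE max Xi : mean %.3f  sd %.3f" % (mle.mean(), mle.std()))
```

The MLE's standard deviation comes out several times smaller here, illustrating the efficiency remark above.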

Example: Let $X \sim \text{Gam}(\alpha, \beta)$, with pdf
$$f_X(x) = \frac{1}{\Gamma(\alpha)} \frac{1}{\beta^\alpha} e^{-x/\beta} x^{\alpha-1}, \quad x > 0.$$
Find the Method of Moments Estimators for $\alpha$ and $\beta$, and compare them with the Maximum Likelihood Estimators.

Solution: As $E(X) = \alpha\beta$ and $V(X) = \alpha\beta^2$ (see the reference sheet), the MMEs $\tilde{A}$ and $\tilde{B}$ satisfy the simultaneous equations
$$M_1'(X) = \tilde{A}\tilde{B}$$
$$M_2'(X) - M_1'(X)^2 = \tilde{A}\tilde{B}^2.$$
Dividing the second equation by the first we find
$$\tilde{B} = \frac{M_2'(X) - M_1'(X)^2}{M_1'(X)} = \frac{M_2'(X)}{M_1'(X)} - M_1'(X) = \frac{\sum X_i^2}{\sum X_i} - \frac{\sum X_i}{n},$$
and substituting this into the first equation we obtain
$$\tilde{A} = \frac{M_1'(X)}{\tilde{B}} = \frac{M_1'(X)^2}{M_2'(X) - M_1'(X)^2} = \frac{\left[\sum X_i / n\right]^2}{\sum X_i^2 / n - \left[\sum X_i / n\right]^2} = \frac{\left[\sum X_i\right]^2}{n \sum X_i^2 - \left[\sum X_i\right]^2}.$$
To study the properties of these estimators mathematically would be very challenging, but at least they are easy to calculate, as the sketch below illustrates.
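A short Python sketch of that calculation (ours, not from the notes; the helper name gamma_mme is our own):

```python
import numpy as np

def gamma_mme(x):
    """Method of Moments estimates for Gam(alpha, beta), parameterised
    so that E(X) = alpha * beta and V(X) = alpha * beta**2."""
    x = np.asarray(x, dtype=float)
    m1 = np.mean(x)            # M_1'(X)
    m2 = np.mean(x ** 2)       # M_2'(X)
    b = (m2 - m1 ** 2) / m1    # B-tilde
    a = m1 / b                 # A-tilde
    return a, b

# Quick check on simulated data (alpha = 3, beta = 2 are our choices).
rng = np.random.default_rng(2)
x = rng.gamma(shape=3.0, scale=2.0, size=2000)
print(gamma_mme(x))  # should come out near (3, 2)
```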

The Maximum Likelihood Estimators, on the other hand, lead to equations which are hard to solve:
$$L(\alpha, \beta; x) = \prod_{i=1}^n \frac{1}{\Gamma(\alpha)} \frac{1}{\beta^\alpha} e^{-x_i/\beta} x_i^{\alpha-1} = \frac{1}{(\Gamma(\alpha)\beta^\alpha)^n} \exp\left(-\frac{\sum x_i}{\beta}\right) \prod x_i^{\alpha-1}$$
$$\ell(\alpha, \beta; x) = -n \ln(\Gamma(\alpha)) - n\alpha \ln(\beta) - \frac{\sum x_i}{\beta} + (\alpha - 1) \sum \ln(x_i)$$
$$\frac{\partial \ell}{\partial \alpha} = -\frac{n\Gamma'(\alpha)}{\Gamma(\alpha)} - n \ln(\beta) + \sum \ln x_i$$
$$\frac{\partial \ell}{\partial \beta} = -\frac{n\alpha}{\beta} + \frac{\sum x_i}{\beta^2}$$
Hence the MLEs $\hat{A}$ and $\hat{B}$ satisfy the equations
$$-\frac{n\Gamma'(\hat{A})}{\Gamma(\hat{A})} - n \ln(\hat{B}) + \sum \ln X_i = 0$$
$$-\frac{n\hat{A}}{\hat{B}} + \frac{\sum X_i}{\hat{B}^2} = 0.$$
From the second equation we can find $\hat{B} = \bar{X}/\hat{A}$, but substituting this into the first equation gives a difficult non-linear equation to solve for $\hat{A}$. There is no explicit solution in closed form, and we would need to use iterative numerical methods to get an approximate solution. In this case also the Maximum Likelihood Estimators will be considerably more efficient (their sampling distributions have lower standard deviations, i.e. lower standard errors for the estimators) than the MMEs, but they are clearly much harder to find. Depending on what they were required for, we might just use the MMEs as our estimators, or we could use them as initial values in a numerical root-finding algorithm to obtain the MLEs, as sketched below.
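As an illustration of that last remark (our sketch, not part of the notes): substituting $\hat{B} = \bar{X}/\hat{A}$ into the first equation and dividing by $n$ gives $\ln(\hat{A}) - \psi(\hat{A}) = \ln(\bar{X}) - \frac{1}{n}\sum \ln X_i$, where $\psi = \Gamma'/\Gamma$ is the digamma function. One way to solve this in Python, using the MME for $\alpha$ to bracket the root:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def gamma_mle(x):
    """Gamma MLEs via the profile equation
    ln(a) - digamma(a) = ln(mean(x)) - mean(ln(x))."""
    x = np.asarray(x, dtype=float)
    s = np.log(np.mean(x)) - np.mean(np.log(x))
    # MME for alpha, used as a starting point to bracket the root.
    # (Assumes the root lies within a factor of 10 of the MME,
    # which holds for typical data.)
    a0 = np.mean(x) ** 2 / np.var(x)
    a_hat = brentq(lambda a: np.log(a) - digamma(a) - s, a0 / 10, a0 * 10)
    b_hat = np.mean(x) / a_hat
    return a_hat, b_hat

rng = np.random.default_rng(3)
x = rng.gamma(shape=3.0, scale=2.0, size=2000)
print(gamma_mle(x))  # should come out near (3, 2)
```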


