Solution Manual - Mathematical Statistics with Applications, 7th edition (Wackerly), Chapter 16
Chapter 16: Introduction to Bayesian Methods of Inference

16.1

Refer to Table 16.1.
a. beta(10, 30)
b. n = 25
c. beta(10, 30), n = 25
d. Yes
e. Posterior for the beta(1, 3) prior.

16.2

a.-d. Refer to Section 16.2

16.3

a.-e. Applet exercise, so answers vary.

16.4

a.-d. Applet exercise, so answers vary.

16.5

It should take more trials with a beta(10, 30) prior.

16.6

Here, $L(y \mid p) = p(y \mid p) = \binom{n}{y} p^y (1-p)^{n-y}$, where $y = 0, 1, \ldots, n$ and $0 < p < 1$. So,
$$f(y, p) = \binom{n}{y} p^y (1-p)^{n-y} \times \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1} (1-p)^{\beta-1},$$
so that
$$m(y) = \int_0^1 \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{y+\alpha-1} (1-p)^{n-y+\beta-1}\, dp = \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(y+\alpha)\Gamma(n-y+\beta)}{\Gamma(n+\alpha+\beta)}.$$
The posterior density of $p$ is then
$$g^*(p \mid y) = \frac{\Gamma(n+\alpha+\beta)}{\Gamma(y+\alpha)\Gamma(n-y+\beta)}\, p^{y+\alpha-1} (1-p)^{n-y+\beta-1}, \quad 0 < p < 1.$$
This is identical to the beta density in Example 16.1 (recall that the sum of $n$ i.i.d. Bernoulli random variables is binomial with $n$ trials and success probability $p$).
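As a quick numerical check (a sketch, not part of the original solution), the posterior derived above can be verified in R by normalizing likelihood × prior on a grid and comparing it with the closed-form beta density; the values of n, y, alpha, and beta below are arbitrary illustrations.

# Numerical check of the beta posterior in Ex. 16.6
n <- 25; y <- 4; alpha <- 1; beta <- 3
p <- seq(0.001, 0.999, by = 0.001)
unnorm <- dbinom(y, n, p) * dbeta(p, alpha, beta)        # likelihood x prior
posterior <- unnorm / (sum(unnorm) * 0.001)              # normalize on the grid
max(abs(posterior - dbeta(p, y + alpha, n - y + beta)))  # ~ 0, up to grid error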

16.7

a. The Bayes estimator is the mean of the posterior distribution. With the beta(1, 3) prior, the posterior is a beta density with $\alpha = y + 1$ and $\beta = n - y + 3$, so the posterior mean is
$$\hat{p}_B = \frac{Y + 1}{n + 4} = \frac{Y}{n + 4} + \frac{1}{n + 4}.$$
b. $E(\hat{p}_B) = \frac{E(Y) + 1}{n + 4} = \frac{np + 1}{n + 4} \neq p$, and $V(\hat{p}_B) = \frac{V(Y)}{(n + 4)^2} = \frac{np(1 - p)}{(n + 4)^2}$.
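For a concrete value (an illustration borrowing the y = 4, n = 25 data of Ex. 16.15), the Bayes estimate and the MLE can be compared directly in R:

y <- 4; n <- 25
(y + 1) / (n + 4)   # Bayes estimate under the beta(1, 3) prior: 5/29 = 0.1724138
y / n               # MLE: 0.16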

16.8

a. From Ex. 16.6, the Bayes estimator for $p$ is
$$\hat{p}_B = E(p \mid Y) = \frac{Y + 1}{n + 2}.$$

b. This is the uniform distribution in the interval (0, 1).

c. We know that $\hat{p} = Y/n$ is an unbiased estimator for $p$. However, for the Bayes estimator,
$$E(\hat{p}_B) = \frac{E(Y) + 1}{n + 2} = \frac{np + 1}{n + 2} \quad \text{and} \quad V(\hat{p}_B) = \frac{V(Y)}{(n + 2)^2} = \frac{np(1 - p)}{(n + 2)^2}.$$
Thus,
$$\mathrm{MSE}(\hat{p}_B) = V(\hat{p}_B) + [B(\hat{p}_B)]^2 = \frac{np(1 - p)}{(n + 2)^2} + \left( \frac{np + 1}{n + 2} - p \right)^2 = \frac{np(1 - p) + (1 - 2p)^2}{(n + 2)^2}.$$

d. For the unbiased estimator $\hat{p}$, $\mathrm{MSE}(\hat{p}) = V(\hat{p}) = p(1 - p)/n$. So, holding $n$ fixed, we must determine the values of $p$ such that
$$\frac{np(1 - p) + (1 - 2p)^2}{(n + 2)^2} < \frac{p(1 - p)}{n}.$$
The range of values of $p$ where this is satisfied is solved in Ex. 8.17(c).
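The comparison in part (d) is easy to carry out numerically. A short R sketch (not part of the original solution; the sample size is an arbitrary illustration):

# MSEs of the MLE and the Bayes estimator (uniform prior) from Ex. 16.8
n <- 25
p <- seq(0.01, 0.99, by = 0.01)
mse_mle <- p * (1 - p) / n
mse_bayes <- (n * p * (1 - p) + (1 - 2 * p)^2) / (n + 2)^2
range(p[mse_bayes < mse_mle])   # Bayes wins near p = 1/2, loses in the tails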

16.9

a. Here, $L(y \mid p) = p(y \mid p) = (1 - p)^{y - 1} p$, where $y = 1, 2, \ldots$ and $0 < p < 1$. So,
$$f(y, p) = (1 - p)^{y - 1} p \times \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha - 1} (1 - p)^{\beta - 1}$$
so that
$$m(y) = \int_0^1 \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha} (1 - p)^{\beta + y - 2}\, dp = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(\alpha + 1)\Gamma(y + \beta - 1)}{\Gamma(y + \alpha + \beta)}.$$
The posterior density of $p$ is then
$$g^*(p \mid y) = \frac{\Gamma(\alpha + \beta + y)}{\Gamma(\alpha + 1)\Gamma(\beta + y - 1)}\, p^{\alpha} (1 - p)^{\beta + y - 2}, \quad 0 < p < 1.$$
This is a beta density with shape parameters $\alpha^* = \alpha + 1$ and $\beta^* = \beta + y - 1$.

b. The Bayes estimators are
$$(1) \quad \hat{p}_B = E(p \mid Y) = \frac{\alpha + 1}{\alpha + \beta + Y},$$
$$(2) \quad \widehat{[p(1 - p)]}_B = E(p \mid Y) - E(p^2 \mid Y) = \frac{\alpha + 1}{\alpha + \beta + Y} - \frac{(\alpha + 2)(\alpha + 1)}{(\alpha + \beta + Y + 1)(\alpha + \beta + Y)} = \frac{(\alpha + 1)(\beta + Y - 1)}{(\alpha + \beta + Y + 1)(\alpha + \beta + Y)},$$
where the second expectation was solved using the result from Ex. 4.200. (Alternately, the answer could be found by solving $E[p(1 - p) \mid Y] = \int_0^1 p(1 - p)\, g^*(p \mid Y)\, dp$.)
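As with Ex. 16.6, the posterior can be checked numerically in R (a sketch, not part of the original solution). Note that R's dgeom() counts failures before the first success, so dgeom(y - 1, p) equals the likelihood p(1 - p)^(y-1) used above; the values alpha = 10, beta = 5, y = 6 are chosen to match Ex. 16.17.

alpha <- 10; beta <- 5; y <- 6
p <- seq(0.001, 0.999, by = 0.001)
unnorm <- dgeom(y - 1, p) * dbeta(p, alpha, beta)        # likelihood x prior
posterior <- unnorm / (sum(unnorm) * 0.001)              # normalize on the grid
max(abs(posterior - dbeta(p, alpha + 1, beta + y - 1)))  # ~ 0; here beta(11, 10)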

16.10 a. The joint density of the random sample and θ is given by the product of the marginal densities multiplied by the gamma prior:

$$f(y_1, \ldots, y_n, \theta) = \left[\prod_{i=1}^{n} \theta \exp(-\theta y_i)\right] \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\,\theta^{\alpha-1}\exp(-\theta/\beta)$$
$$= \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}} \exp\!\left(-\theta\sum_{i=1}^{n} y_i - \theta/\beta\right) = \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}} \exp\!\left(-\theta\Big/\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right).$$

b. $m(y_1, \ldots, y_n) = \frac{1}{\Gamma(\alpha)\beta^{\alpha}} \int_0^{\infty} \theta^{n+\alpha-1} \exp\!\left(-\theta\Big/\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right) d\theta$, but this integral resembles that of a gamma density with shape parameter $n+\alpha$ and scale parameter $\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}$. Thus, the solution is
$$m(y_1, \ldots, y_n) = \frac{\Gamma(n+\alpha)}{\Gamma(\alpha)\beta^{\alpha}} \left(\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right)^{n+\alpha}.$$

c. The solution follows from parts (a) and (b) above.

d. Using the result in Ex. 4.111, with posterior parameters $\alpha^* = n+\alpha$ and $\beta^* = \beta/(\beta\sum_{i=1}^{n} Y_i + 1)$,
$$\hat{\mu}_B = E(\mu \mid \mathbf{Y}) = E(1/\theta \mid \mathbf{Y}) = \frac{1}{\beta^*(\alpha^* - 1)} = \frac{\beta\sum_{i=1}^{n} Y_i + 1}{\beta(n+\alpha-1)} = \frac{\sum_{i=1}^{n} Y_i}{n+\alpha-1} + \frac{1}{\beta(n+\alpha-1)}.$$

e. The prior mean for $1/\theta$ is $E(1/\theta) = \frac{1}{\beta(\alpha-1)}$ (again by Ex. 4.111). Thus, $\hat{\mu}_B$ can be written as
$$\hat{\mu}_B = \bar{Y}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right),$$
which is a weighted average of the MLE and the prior mean.

f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \mu = 1/\theta$. Therefore,
$$E(\hat{\mu}_B) = E(\bar{Y})\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right) = \frac{1}{\theta}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right).$$
Therefore, $\hat{\mu}_B$ is biased. However, it is asymptotically unbiased since $E(\hat{\mu}_B) - 1/\theta \to 0$. Also,
$$V(\hat{\mu}_B) = V(\bar{Y})\left(\frac{n}{n+\alpha-1}\right)^2 = \frac{1}{\theta^2 n}\cdot\frac{n^2}{(n+\alpha-1)^2} = \frac{n}{\theta^2(n+\alpha-1)^2} \to 0,$$
so $\hat{\mu}_B \xrightarrow{p} 1/\theta$ and thus it is consistent.
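A numerical illustration (not part of the original solution) using the n = 15, sum(y) = 30.27, gamma(2.3, 0.4) setup of Ex. 16.18 confirms the weighted-average form of part (e):

n <- 15; sum_y <- 30.27; alpha <- 2.3; beta <- 0.4
shape_post <- n + alpha                      # 17.3
scale_post <- beta / (beta * sum_y + 1)      # 0.4/13.108 ~ .030516
mu_B <- 1 / (scale_post * (shape_post - 1))  # posterior mean of 1/theta, ~ 2.0104
w <- n / (n + alpha - 1)
w * (sum_y / n) + (1 - w) / (beta * (alpha - 1))   # same value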

16.11 a. The joint density of U and λ is

$$f(u, \lambda) = p(u \mid \lambda)g(\lambda) = \frac{(n\lambda)^u \exp(-n\lambda)}{u!} \times \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\,\lambda^{\alpha-1}\exp(-\lambda/\beta)$$
$$= \frac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}}\,\lambda^{u+\alpha-1}\exp(-n\lambda - \lambda/\beta) = \frac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}}\,\lambda^{u+\alpha-1}\exp\!\left(-\lambda\Big/\frac{\beta}{n\beta+1}\right).$$

b. $m(u) = \frac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}} \int_0^{\infty} \lambda^{u+\alpha-1} \exp\!\left(-\lambda\Big/\frac{\beta}{n\beta+1}\right) d\lambda$, but this integral resembles that of a gamma density with shape parameter $u+\alpha$ and scale parameter $\frac{\beta}{n\beta+1}$. Thus, the solution is
$$m(u) = \frac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}}\,\Gamma(u+\alpha)\left(\frac{\beta}{n\beta+1}\right)^{u+\alpha}.$$

c. The result follows from parts (a) and (b) above.

d. $\hat{\lambda}_B = E(\lambda \mid U) = \alpha^*\beta^* = (U+\alpha)\left(\frac{\beta}{n\beta+1}\right)$.

e. The prior mean for $\lambda$ is $E(\lambda) = \alpha\beta$. From the above,
$$\hat{\lambda}_B = \left(\sum_{i=1}^{n} Y_i + \alpha\right)\left(\frac{\beta}{n\beta+1}\right) = \bar{Y}\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right),$$
which is a weighted average of the MLE and the prior mean.

f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \lambda$. Therefore,
$$E(\hat{\lambda}_B) = E(\bar{Y})\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right) = \lambda\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right).$$
So, $\hat{\lambda}_B$ is biased but it is asymptotically unbiased since $E(\hat{\lambda}_B) - \lambda \to 0$. Also,
$$V(\hat{\lambda}_B) = V(\bar{Y})\left(\frac{n\beta}{n\beta+1}\right)^2 = \frac{\lambda}{n}\cdot\frac{n^2\beta^2}{(n\beta+1)^2} = \frac{\lambda n\beta^2}{(n\beta+1)^2} \to 0,$$
so $\hat{\lambda}_B \xrightarrow{p} \lambda$ and thus it is consistent.
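A numerical illustration (not part of the original solution) using the n = 25, sum(y) = 174, gamma(2, 3) setup of Ex. 16.19 confirms the weighted-average form of part (e):

n <- 25; sum_y <- 174; alpha <- 2; beta <- 3
lambda_B <- (sum_y + alpha) * beta / (n * beta + 1)   # posterior mean, 528/76 ~ 6.9474
w <- n * beta / (n * beta + 1)
w * (sum_y / n) + (1 - w) * alpha * beta              # same value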

16.12 First, it is given that $W = vU = v \sum_{i=1}^{n} (Y_i - \mu_0)^2$ is chi-square with $n$ degrees of freedom. Then, the density function for $U$ (conditioned on $v$) is given by
$$f_U(u \mid v) = v\, f_W(uv) = v \cdot \frac{1}{\Gamma(n/2)\, 2^{n/2}} (uv)^{n/2 - 1} e^{-uv/2} = \frac{1}{\Gamma(n/2)\, 2^{n/2}}\, u^{n/2 - 1} v^{n/2} e^{-uv/2}.$$

a. The joint density of $U$ and $v$ is then
$$f(u, v) = f_U(u \mid v) g(v) = \frac{1}{\Gamma(n/2)\, 2^{n/2}}\, u^{n/2 - 1} v^{n/2} \exp(-uv/2) \times \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, v^{\alpha - 1} \exp(-v/\beta)$$
$$= \frac{1}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}}\, u^{n/2 - 1} v^{n/2 + \alpha - 1} \exp(-uv/2 - v/\beta) = \frac{u^{n/2 - 1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}}\, v^{n/2 + \alpha - 1} \exp\!\left(-v\Big/\frac{2\beta}{u\beta + 2}\right).$$

b. $m(u) = \frac{u^{n/2 - 1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}} \int_0^{\infty} v^{n/2 + \alpha - 1} \exp\!\left(-v\Big/\frac{2\beta}{u\beta + 2}\right) dv$, but this integral resembles that of a gamma density with shape parameter $n/2 + \alpha$ and scale parameter $\frac{2\beta}{u\beta + 2}$. Thus, the solution is
$$m(u) = \frac{u^{n/2 - 1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}}\, \Gamma(n/2 + \alpha) \left(\frac{2\beta}{u\beta + 2}\right)^{n/2 + \alpha}.$$

c. The result follows from parts (a) and (b) above.

d. Using the result in Ex. 4.111(e),
$$\hat{\sigma}^2_B = E(\sigma^2 \mid U) = E(1/v \mid U) = \frac{1}{\beta^*(\alpha^* - 1)} = \frac{U\beta + 2}{2\beta} \cdot \frac{1}{n/2 + \alpha - 1} = \frac{U\beta + 2}{\beta(n + 2\alpha - 2)}.$$

e. The prior mean for $\sigma^2 = 1/v$ is $\frac{1}{\beta(\alpha - 1)}$. From the above,
$$\hat{\sigma}^2_B = \frac{U\beta + 2}{\beta(n + 2\alpha - 2)} = \frac{U}{n}\left(\frac{n}{n + 2\alpha - 2}\right) + \frac{1}{\beta(\alpha - 1)}\left(\frac{2(\alpha - 1)}{n + 2\alpha - 2}\right),$$
which is a weighted average of the MLE and the prior mean.
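A numerical illustration (not part of the original solution) using the n = 8, u = .8579, gamma(5, 2) setup of Ex. 16.20:

n <- 8; u <- 0.8579; alpha <- 5; beta <- 2
shape_post <- n / 2 + alpha                      # 9
scale_post <- 2 * beta / (u * beta + 2)          # 4/3.7158 ~ 1.0764842
1 / (scale_post * (shape_post - 1))              # Bayes estimate of sigma^2, ~ 0.1161
(u * beta + 2) / (beta * (n + 2 * alpha - 2))    # same value via part (d)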

16.13 a. (.099, .710)
b. Both probabilities are .025.
c. P(.099 < p < .710) = .95.
d.-g. Answers vary.
h. The credible intervals should decrease in width with larger sample sizes.

16.14 a.-b. Answers vary.

16.15 With y = 4, n = 25, and a beta(1, 3) prior, the posterior distribution for p is beta(5, 24). Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,24)
[1] 0.06064291
> qbeta(.975,5,24)
[1] 0.3266527

16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22). Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,22)
[1] 0.06554811
> qbeta(.975,5,22)
[1] 0.3486788

This is a wider interval than what was obtained in Ex. 16.15.

16.17 With y = 6 and a beta(10, 5) prior, the posterior distribution for p is beta(11, 10). Using R, the lower and upper endpoints of the 80% credible interval for p are given by:
> qbeta(.10,11,10)
[1] 0.3847514
> qbeta(.90,11,10)
[1] 0.6618291

16.18 With n = 15, $\sum_{i=1}^{n} y_i = 30.27$, and a gamma(2.3, 0.4) prior, the posterior distribution for θ is gamma(17.3, .030516). Using R, the lower and upper endpoints of the 80% credible interval for θ are given by:
> qgamma(.10,shape=17.3,scale=.0305167)
[1] 0.3731982
> qgamma(.90,shape=17.3,scale=.0305167)
[1] 0.6957321

The 80% credible interval for θ is (.3732, .6957). To create an 80% credible interval for 1/θ, the endpoints of the previous interval can be inverted:
.3732 < θ < .6957
1/(.6957) < 1/θ < 1/(.3732)
Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/θ is (1.4374, 2.6795).
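The same inversion can be done in a single R call (a convenience, not part of the original solution); since 1/θ is decreasing, rev() puts the swapped endpoints back in increasing order:

> rev(1 / qgamma(c(.10, .90), shape = 17.3, scale = .0305167))   # ~ (1.437, 2.680)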


16.19 With n = 25, $\sum_{i=1}^{n} y_i = 174$, and a gamma(2, 3) prior, the posterior distribution for λ is gamma(176, .0394739). Using R, the lower and upper endpoints of the 95% credible interval for λ are given by:
> qgamma(.025,shape=176,scale=.0394739)
[1] 5.958895
> qgamma(.975,shape=176,scale=.0394739)
[1] 8.010663

16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible interval for v are given by:
> qgamma(.05,shape=9,scale=1.0764842)
[1] 5.054338
> qgamma(.95,shape=9,scale=1.0764842)
[1] 15.53867

The 90% credible interval for v is (5.054, 15.539). Similar to Ex. 16.18, the 90% credible interval for σ² = 1/v is found by inverting the endpoints of the credible interval for v, giving (.0644, .1979).

16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,24)
[1] 0.9525731

Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 − .9525731 = .0474269. Since the probability associated with H0 is much larger, our decision is to not reject H0.

16.22 From Ex. 16.16, the posterior distribution of p is beta(5, 22). We can find P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,22)
[1] 0.9266975

Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 − .9266975 = .0733025. Since the probability associated with H0 is much larger, our decision is to not reject H0.

16.23 From Ex. 16.17, the posterior distribution of p is beta(11, 10). Thus, P*(p ∈ Ω0) = P*(p < .4) is given by (in R):
> pbeta(.4,11,10)
[1] 0.1275212

Therefore, P*(p ∈ Ωa) = P*(p ≥ .4) = 1 − .1275212 = .8724788. Since the probability associated with Ha is much larger, our decision is to reject H0.

16.24 From Ex. 16.18, the posterior distribution for θ is gamma(17.3, .0305). To test H0: θ > .5 vs. Ha: θ ≤ .5, we calculate P*(θ ∈ Ω0) = P*(θ > .5) as:
> 1 - pgamma(.5,shape=17.3,scale=.0305)
[1] 0.5561767

Therefore, P*(θ ∈ Ωa) = P*(θ ≤ .5) = 1 − .5561767 = .4438233. The probability associated with H0 is larger (but only marginally so), so our decision is to not reject H0.

16.25 From Ex. 16.19, the posterior distribution for λ is gamma(176, .0395). Thus, P*(λ ∈ Ω0) = P*(λ > 6) is found by:
> 1 - pgamma(6,shape=176,scale=.0395)
[1] 0.9700498

Therefore, P*(λ ∈ Ωa) = P*(λ ≤ 6) = 1 − .9700498 = .0299502. Since the probability associated with H0 is much larger, our decision is to not reject H0.

16.26 From Ex. 16.20, the posterior distribution for v is gamma(9, 1.0765). To test H0: v < 10 vs. Ha: v ≥ 10, we calculate P*(v ∈ Ω0) = P*(v < 10) as:
> pgamma(10,9, 1.0765)
[1] 0.7464786

Therefore, P*(v ∈ Ωa) = P*(v ≥ 10) = 1 − .7464786 = .2535214. Since the probability associated with H0 is larger, our decision is to not reject H0.

