Title | Solution Manual - Mathematical Statistics with Applications 7th edition, Wackerly chapter 6 |
---|---|
Author | Pham Quang Huy |
Course | Mathematical Statistics |
Institution | Đại học Hà Nội |
Chapter 6: Functions of Random Variables
6.1
The distribution function of Y is F_Y(y) = ∫_0^y 2(1 − t) dt = 2y − y², 0 ≤ y ≤ 1.
a. F_U1(u) = P(U1 ≤ u) = P(2Y − 1 ≤ u) = P(Y ≤ (u+1)/2) = F_Y((u+1)/2) = 2((u+1)/2) − ((u+1)/2)². Thus,
f_U1(u) = F′_U1(u) = (1 − u)/2, −1 ≤ u ≤ 1.
b. F_U2(u) = P(U2 ≤ u) = P(1 − 2Y ≤ u) = P(Y ≥ (1−u)/2) = 1 − F_Y((1−u)/2) = ((u+1)/2)². Thus,
f_U2(u) = F′_U2(u) = (u + 1)/2, −1 ≤ u ≤ 1.
c. F_U3(u) = P(U3 ≤ u) = P(Y² ≤ u) = P(Y ≤ √u) = F_Y(√u) = 2√u − u. Thus,
f_U3(u) = F′_U3(u) = 1/√u − 1, 0 ≤ u ≤ 1.
d. E(U1) = −1/3, E(U2) = 1/3, E(U3) = 1/6.
e. E(2Y − 1) = −1/3, E(1 − 2Y) = 1/3, E(Y²) = 1/6.
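As a numerical check on parts (d) and (e) (a supplementary sketch, not part of the original solution), Y can be simulated by inverse transform: F_Y(y) = 2y − y² = 1 − (1−y)² inverts to y = 1 − √(1−u).

```python
import random

random.seed(1)
n = 200_000
# Inverse-CDF sampling: F_Y(y) = 2y - y^2  =>  y = 1 - sqrt(1 - u)
ys = [1 - (1 - random.random()) ** 0.5 for _ in range(n)]

e_u1 = sum(2 * y - 1 for y in ys) / n   # target E(U1) = -1/3
e_u2 = sum(1 - 2 * y for y in ys) / n   # target E(U2) =  1/3
e_u3 = sum(y * y for y in ys) / n       # target E(U3) =  1/6
print(round(e_u1, 2), round(e_u2, 2), round(e_u3, 2))
```

The Monte Carlo estimates agree with the exact answers to two decimals.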
6.2
The distribution function of Y is F_Y(y) = ∫_{−1}^y (3/2)t² dt = (1/2)(y³ + 1), −1 ≤ y ≤ 1.
a. F_U1(u) = P(U1 ≤ u) = P(3Y ≤ u) = P(Y ≤ u/3) = F_Y(u/3) = (1/2)(u³/27 + 1). Thus,
f_U1(u) = F′_U1(u) = u²/18, −3 ≤ u ≤ 3.
b. F_U2(u) = P(U2 ≤ u) = P(3 − Y ≤ u) = P(Y ≥ 3 − u) = 1 − F_Y(3 − u) = (1/2)[1 − (3 − u)³]. Thus,
f_U2(u) = F′_U2(u) = (3/2)(3 − u)², 2 ≤ u ≤ 4.
c. F_U3(u) = P(U3 ≤ u) = P(Y² ≤ u) = P(−√u ≤ Y ≤ √u) = F_Y(√u) − F_Y(−√u) = u^{3/2}. Thus,
f_U3(u) = F′_U3(u) = (3/2)√u, 0 ≤ u ≤ 1.
6.3
The distribution function for Y is
F_Y(y) = y²/2 for 0 ≤ y ≤ 1, y − 1/2 for 1 < y ≤ 1.5, and 1 for y > 1.5.
a. F_U(u) = P(U ≤ u) = P(10Y − 4 ≤ u) = P(Y ≤ (u+4)/10) = F_Y((u+4)/10). So,
F_U(u) = (u+4)²/200 for −4 ≤ u ≤ 6, (u−1)/10 for 6 < u ≤ 11, and 1 for u > 11, and
f_U(u) = F′_U(u) = (u+4)/100 for −4 ≤ u ≤ 6, 1/10 for 6 < u ≤ 11, and 0 elsewhere.
b. E(U) = 5.583.
c. E(10Y − 4) = 10(23/24) − 4 = 5.583.
6.4
The distribution function of Y is F_Y(y) = 1 − e^{−y/4}, y ≥ 0.
a. F_U(u) = P(U ≤ u) = P(3Y + 1 ≤ u) = P(Y ≤ (u−1)/3) = F_Y((u−1)/3) = 1 − e^{−(u−1)/12}. Thus,
f_U(u) = F′_U(u) = (1/12)e^{−(u−1)/12}, u ≥ 1.
b. E(U) = 3E(Y) + 1 = 3(4) + 1 = 13.
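The density in part (a) can be checked by numerical integration (a supplementary sketch, not part of the manual): the total mass should be 1 and the mean should be 13, since U − 1 is exponential with mean 12.

```python
import math

# Midpoint-rule integration of f_U(u) = (1/12) e^{-(u-1)/12} on [1, 301];
# the truncated tail beyond u = 301 has mass e^{-25}, which is negligible.
du = 0.01
mass = 0.0
mean = 0.0
u = 1 + du / 2
while u < 301:
    w = math.exp(-(u - 1) / 12) / 12 * du
    mass += w
    mean += u * w
    u += du
print(round(mass, 3), round(mean, 2))  # mass ~ 1, mean ~ 13
```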
Instructor’s Solutions Manual
6.5
The distribution function of Y is F_Y(y) = (y − 1)/4, 1 ≤ y ≤ 5 (see Ex. 6.32: f_Y(y) = 1/4 on this interval).
F_U(u) = P(U ≤ u) = P(2Y² + 3 ≤ u) = P(Y ≤ √((u−3)/2)) = F_Y(√((u−3)/2)) = [√((u−3)/2) − 1]/4. Differentiating,
f_U(u) = F′_U(u) = (1/16)((u−3)/2)^{−1/2}, 5 ≤ u ≤ 53.
6.6
Refer to Ex. 5.10 and 5.78. Define F_U(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = P(Y1 ≤ Y2 + u).
a. For u ≤ 0, F_U(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = 0.
For 0 ≤ u < 1, F_U(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = ∫_0^u ∫_{2y2}^{y2+u} 1 dy1 dy2 = u²/2.
For 1 ≤ u ≤ 2, F_U(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = 1 − ∫_0^{2−u} ∫_{y2+u}^{2} 1 dy1 dy2 = 1 − (2 − u)²/2.
6.14
For 0 ≤ u ≤ 1, F_U(u) = P(U ≤ u) = P(Y1Y2 ≤ u) = 1 − P(Y1 > u/Y2)
= 1 − ∫_u^1 ∫_{u/y2}^1 18(y1 − y1²)y2² dy1 dy2 = 9u² − 8u³ + 6u³ ln u.
Thus, f_U(u) = F′_U(u) = 18u(1 − u + u ln u), 0 ≤ u ≤ 1.
6.15
Let U have a uniform distribution on (0, 1). The distribution function for U is F_U(u) = P(U ≤ u) = u, 0 ≤ u ≤ 1. For a function G, we require G(U) = Y where Y has distribution function F_Y(y) = 1 − e^{−y²}, y ≥ 0. Note that
F_Y(y) = P(Y ≤ y) = P(G(U) ≤ y) = P[U ≤ G^{−1}(y)] = F_U[G^{−1}(y)] = u.
So it must be true that G^{−1}(y) = 1 − e^{−y²} = u, so that G(u) = [−ln(1 − u)]^{1/2}. Therefore, the random variable Y = [−ln(1 − U)]^{1/2} has distribution function F_Y(y).
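The inverse transform can be verified directly (a supplementary check, not part of the manual): applying F_Y to G(u) must return u.

```python
import math

def g(u):
    # Inverse of F_Y(y) = 1 - exp(-y^2):  y = sqrt(-ln(1 - u))
    return math.sqrt(-math.log(1 - u))

def f_cap(y):
    return 1 - math.exp(-y * y)

us = (0.1, 0.25, 0.5, 0.9, 0.99)
checks = [f_cap(g(u)) for u in us]
print([round(c, 6) for c in checks])  # should reproduce the u values
```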
6.16
Similar to Ex. 6.15. The distribution function for Y is F_Y(y) = ∫_b^y b t^{−2} dt = 1 − b/y, y ≥ b.
F_Y(y) = P(Y ≤ y) = P(G(U) ≤ y) = P[U ≤ G^{−1}(y)] = F_U[G^{−1}(y)] = u. So it must be true that G^{−1}(y) = 1 − b/y = u, so that G(u) = b/(1 − u). Therefore, the random variable Y = b/(1 − U) has distribution function F_Y(y).
6.17
a. Taking the derivative of F(y), f(y) = αy^{α−1}/θ^α, 0 ≤ y ≤ θ.
b. Following Ex. 6.15 and 6.16, let u = (y/θ)^α so that y = θu^{1/α}. Thus, the random variable Y = θU^{1/α} has distribution function F_Y(y).
c. From part (b), the transformation is y = 4√u. The values are 2.0785, 3.229, 1.5036, 1.5610, 2.403.
6.18
a. Taking the derivative of the distribution function yields f(y) = αβ^α y^{−α−1}, y ≥ β.
b. Following Ex. 6.15, let u = 1 − (β/y)^α so that y = β(1 − u)^{−1/α}. Thus, Y = β(1 − U)^{−1/α}.
c. From part (b), y = 3/√(1 − u). The values are 3.0087, 3.3642, 6.2446, 3.4583, 4.7904.
6.19
The distribution function for X is:
F_X(x) = P(X ≤ x) = P(1/Y ≤ x) = P(Y ≥ 1/x) = 1 − F_Y(1/x) = 1 − [1 − (βx)^α] = (βx)^α, 0 < x < β^{−1},
which is a power distribution with θ = β^{−1}.
6.20
a. With W = Y², F_W(w) = P(W ≤ w) = P(Y² ≤ w) = P(Y ≤ √w) = F_Y(√w) = √w, 0 ≤ w ≤ 1.
b. With W = √Y, F_W(w) = P(W ≤ w) = P(√Y ≤ w) = P(Y ≤ w²) = F_Y(w²) = w², 0 ≤ w ≤ 1.
6.21
By definition, P(X = i) = P[F(i − 1) < U ≤ F(i)] = F(i) − F(i − 1), for i = 1, 2, …, since P(U ≤ a) = a for any 0 ≤ a ≤ 1. From Ex. 4.5, P(Y = i) = F(i) − F(i − 1), for i = 1, 2, … . Thus, X and Y have the same distribution.
6.22
Let U have a uniform distribution on the interval (0, 1). For a geometric distribution with parameter p and distribution function F, define the random variable X as: X = k if and only if F(k − 1) < U ≤ F(k), k = 1, 2, … . Since F(k) = 1 − q^k, we have that:
X = k if and only if 1 − q^{k−1} < U ≤ 1 − q^k, OR
X = k if and only if q^k ≤ 1 − U < q^{k−1}, OR
X = k if and only if k ln q ≤ ln(1 − U) < (k − 1) ln q, OR
X = k if and only if k − 1 < [ln(1 − U)]/ln q ≤ k.
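The last line gives a direct sampler, X = ⌈ln(1 − U)/ln q⌉. A quick simulation (supplementary, with an illustrative choice p = 0.3) checks it against the geometric mean 1/p:

```python
import math
import random

random.seed(2)
p = 0.3            # illustrative parameter, not from the exercise
q = 1 - p
n = 100_000
# X = ceil(ln(1-U)/ln q); the max() guards the measure-zero case U = 0
xs = [max(1, math.ceil(math.log(1 - random.random()) / math.log(q)))
      for _ in range(n)]
mean_x = sum(xs) / n
print(round(mean_x, 2))  # geometric(p) mean = 1/p = 3.33
```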
6.23
a. If U = 2Y − 1, then Y = (U + 1)/2. Thus, |dy/du| = 1/2 and f_U(u) = (1/2)·2(1 − (u+1)/2) = (1 − u)/2, −1 ≤ u ≤ 1.
b. If U = 1 − 2Y, then Y = (1 − U)/2. Thus, |dy/du| = 1/2 and f_U(u) = (1/2)·2(1 − (1−u)/2) = (1 + u)/2, −1 ≤ u ≤ 1.
c. If U = Y², then Y = √U. Thus, |dy/du| = 1/(2√u) and f_U(u) = (1/(2√u))·2(1 − √u) = (1 − √u)/√u = 1/√u − 1, 0 ≤ u ≤ 1.
6.24
If U = 3Y + 1, then Y = (U − 1)/3 and |dy/du| = 1/3. With f_Y(y) = (1/4)e^{−y/4}, we have that
f_U(u) = (1/3)·(1/4)e^{−(u−1)/12} = (1/12)e^{−(u−1)/12}, u ≥ 1.
6.25
Refer to Ex. 6.11. The variable of interest is U = (Y1 + Y2)/2. Fix Y2 = y2. Then, Y1 = 2u − y2 and |dy1/du| = 2. The joint density of U and Y2 is g(u, y2) = 2e^{−2u}, u ≥ 0, y2 ≥ 0, and y2 < 2u.
Thus, f_U(u) = ∫_0^{2u} 2e^{−2u} dy2 = 4ue^{−2u} for u ≥ 0.
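The resulting density 4ue^{−2u} is gamma with α = 2, β = 1/2, so U should have mean 1 and variance 1/2. A seeded simulation of U = (Y1 + Y2)/2 with unit exponentials (supplementary check, not part of the manual):

```python
import random

random.seed(3)
n = 200_000
# Y1, Y2 iid exponential with mean 1, as in Ex. 6.11
us = [(random.expovariate(1.0) + random.expovariate(1.0)) / 2
      for _ in range(n)]
mean_u = sum(us) / n
var_u = sum((u - mean_u) ** 2 for u in us) / n
print(round(mean_u, 2), round(var_u, 2))  # gamma(2, 1/2): mean 1, var 1/2
```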
6.26
a. Using the transformation approach, Y = U^{1/m} so that dy/du = (1/m)u^{−(m−1)/m}, and the density function for U is f_U(u) = (1/α)e^{−u/α}, u ≥ 0. Note that this is the exponential distribution with mean α.
b. E(Y^k) = E(U^{k/m}) = ∫_0^∞ u^{k/m} (1/α)e^{−u/α} du = Γ(k/m + 1)α^{k/m}, using the result from Ex. 4.111.
6.27
a. Let W = √Y. The random variable Y is exponential, so f_Y(y) = (1/β)e^{−y/β}. Then, Y = W² and dy/dw = 2w, so
f_W(w) = (2w/β)e^{−w²/β}, w ≥ 0, which is Weibull with m = 2.
b. It follows from Ex. 6.26 that E(Y^{k/2}) = E(W^k) = Γ(k/2 + 1)β^{k/2}.
6.28
Let Y be uniform on the interval (0, 1), so f_Y(y) = 1. If U = −2 ln Y, then Y = e^{−U/2} and dy/du = −(1/2)e^{−u/2}.
Then, f_U(u) = 1·|−(1/2)e^{−u/2}| = (1/2)e^{−u/2}, u ≥ 0, which is exponential with mean 2.
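This transformation (U = −2 ln Y for uniform Y) is a standard way to generate exponential variates, and the claim is easy to check by simulation (supplementary, not part of the manual):

```python
import math
import random

random.seed(4)
n = 200_000
# U = -2 ln Y with Y uniform(0,1); use 1 - random() to keep the log finite
us = [-2 * math.log(1 - random.random()) for _ in range(n)]
mean_u = sum(us) / n
frac = sum(u <= 2 for u in us) / n   # exponential mean 2: P(U <= 2) = 1 - e^{-1}
print(round(mean_u, 2), round(frac, 3))
```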
6.29
a. With W = mV²/2, V = √(2W/m) and |dv/dw| = 1/√(2mw). Then,
f_W(w) = a(2w/m)e^{−2bw/m}·(1/√(2mw)) = (a√2/m^{3/2}) w^{1/2} e^{−w/kT}, w ≥ 0,
where the last step uses b = m/(2kT).
The above expression is in the form of a gamma density (α = 3/2, β = kT), so the constant a must be chosen so that the density integrates to 1, or simply a√2/m^{3/2} = 1/[Γ(3/2)(kT)^{3/2}].
So, the density function for W is f_W(w) = w^{1/2}e^{−w/kT}/[Γ(3/2)(kT)^{3/2}], w ≥ 0.
b. For a gamma random variable, E(W) = αβ = (3/2)kT.
6.30
The density function for I is f_I(i) = 1/2, 9 ≤ i ≤ 11. For P = 2I², I = √(P/2) and di/dp = (1/2)^{3/2} p^{−1/2}. Then,
f_P(p) = (1/2)·(1/2)^{3/2} p^{−1/2} = 1/(4√(2p)), 162 ≤ p ≤ 242.
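The density for P in Ex. 6.30 can be checked numerically (supplementary, not part of the manual): its total mass should be 1, and its mean should match the moment computation E(P) = 2E(I²) = 2(10² + 1/3) for I uniform on (9, 11).

```python
import math

# Midpoint-rule integration of f_P(p) = 1/(4*sqrt(2p)) on [162, 242]
dp = 0.001
mass = 0.0
mean = 0.0
p = 162 + dp / 2
while p < 242:
    w = dp / (4 * math.sqrt(2 * p))
    mass += w
    mean += p * w
    p += dp
print(round(mass, 4), round(mean, 2))  # mass ~ 1, mean ~ 200.67
```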
6.31
Similar to Ex. 6.25. Fix Y1 = y1. Then, U = Y2/y1, Y2 = y1U and |dy2/du| = y1. The joint density of Y1 and U is
f(y1, u) = (1/8)y1² e^{−y1(1+u)/2}, y1 ≥ 0, u ≥ 0.
So, the marginal density for U is f_U(u) = ∫_0^∞ (1/8)y1² e^{−y1(1+u)/2} dy1 = 2/(1 + u)³, u ≥ 0.
6.32
Now f_Y(y) = 1/4, 1 ≤ y ≤ 5. If U = 2Y² + 3, then Y = ((U − 3)/2)^{1/2} and |dy/du| = (1/4)(2/(u−3))^{1/2}. Thus,
f_U(u) = (1/4)·(1/4)√(2/(u−3)) = 1/(8√(2(u−3))), 5 ≤ u ≤ 53.
6.33
If U = 5 − (Y/2), then Y = 2(5 − U). Thus, |dy/du| = 2 and f_U(u) = 4(80 − 31u + 3u²), 4.5 ≤ u ≤ 5.
6.34
a. If U = Y², then Y = √U. Thus, |dy/du| = 1/(2√u) and f_U(u) = (1/θ)e^{−u/θ}, u ≥ 0. This is the exponential density with mean θ.
b. From part a, E(Y) = E(U^{1/2}) = √(πθ)/2. Also, E(Y²) = E(U) = θ, so V(Y) = θ[1 − π/4].
6.35
By independence, f(y1, y2) = 1, 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1. Let U = Y1Y2. For a fixed value of Y1 at y1, y2 = u/y1 so that |dy2/du| = 1/y1. So, the joint density of Y1 and U is
g(y1, u) = 1/y1, 0 ≤ u ≤ y1 ≤ 1.
Thus, f_U(u) = ∫_u^1 (1/y1) dy1 = −ln(u), 0 ≤ u ≤ 1.
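Integrating the density gives F_U(u) = u − u ln u, which a product-of-uniforms simulation reproduces (supplementary check, with the illustrative cutoff u = 0.5):

```python
import math
import random

random.seed(5)
n = 200_000
# Fraction of products Y1*Y2 at or below 0.5 vs F_U(0.5) = 0.5 - 0.5*ln(0.5)
hits = sum(random.random() * random.random() <= 0.5 for _ in range(n))
est = hits / n
exact = 0.5 - 0.5 * math.log(0.5)
print(round(est, 3), round(exact, 3))
```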
6.36
By independence, f(y1, y2) = (4y1y2/θ²)e^{−(y1² + y2²)/θ}, y1 > 0, y2 > 0. Let U = Y1² + Y2². For a fixed value of Y1 at y1, U = y1² + Y2², so we can write y2 = √(u − y1²). Then, dy2/du = 1/(2√(u − y1²)), so that the joint density of Y1 and U is
g(y1, u) = (4y1√(u − y1²)/θ²)e^{−u/θ}·1/(2√(u − y1²)) = (2/θ²)y1 e^{−u/θ}, for 0 < y1 < √u.
Then, f_U(u) = ∫_0^{√u} (2/θ²)y1 e^{−u/θ} dy1 = (1/θ²)ue^{−u/θ}, u ≥ 0. Thus, U has a gamma distribution with α = 2.
6.37
The mass function for the Bernoulli distribution is p(y) = p^y(1 − p)^{1−y}, y = 0, 1.
a. m_Y1(t) = E(e^{tY1}) = Σ_{y=0}^{1} e^{ty} p(y) = 1 − p + pe^t.
b. m_W(t) = E(e^{tW}) = Π_{i=1}^{n} m_Yi(t) = [1 − p + pe^t]^n.
c. Since the mgf for W is in the form of a binomial mgf with n trials and success probability p, this is the distribution for W.
6.38
Let Y1 and Y2 have mgfs as given, and let U = a1Y1 + a2Y2. By independence, the mgf for U is
m_U(t) = E(e^{Ut}) = E(e^{(a1Y1 + a2Y2)t}) = E(e^{(a1t)Y1})E(e^{(a2t)Y2}) = m_Y1(a1t)·m_Y2(a2t).
6.39
The mgf for the exponential distribution with β = 1 is m(t) = (1 − t)^{−1}, t < 1. Let Y1 and Y2 each have this distribution, and let U = (Y1 + Y2)/2. Using the result from Ex. 6.38 with a1 = a2 = 1/2, the mgf for U is m_U(t) = m(t/2)m(t/2) = (1 − t/2)^{−2}. Note that this is the mgf for a gamma random variable with α = 2, β = 1/2, so the density function for U is f_U(u) = 4ue^{−2u}, u ≥ 0.
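The mgf identity itself can be checked empirically (supplementary sketch): the sample average of e^{tU} should approach (1 − t/2)^{−2} for t inside the region of convergence (the illustrative values t = −1 and t = 0.5 are used here to keep the estimator's variance finite).

```python
import math
import random

random.seed(6)
n = 200_000
us = [(random.expovariate(1.0) + random.expovariate(1.0)) / 2
      for _ in range(n)]
ts = (-1.0, 0.5)
emp = {t: sum(math.exp(t * u) for u in us) / n for t in ts}   # empirical mgf
thy = {t: (1 - t / 2) ** -2 for t in ts}                      # (1 - t/2)^{-2}
print({t: (round(emp[t], 3), round(thy[t], 3)) for t in ts})
```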
6.40
It has been shown that the distribution of both Y1² and Y2² is chi-square with ν = 1. Thus, both have mgf m(t) = (1 − 2t)^{−1/2}, t < 1/2. With U = Y1² + Y2², use the result from Ex. 6.38 with a1 = a2 = 1 so that m_U(t) = m(t)m(t) = (1 − 2t)^{−1}. Note that this is the mgf for an exponential random variable with β = 2, so the density function for U is f_U(u) = (1/2)e^{−u/2}, u ≥ 0 (this is also the chi-square distribution with ν = 2).
6.41
(Special case of Theorem 6.3) The mgf for the normal distribution with parameters μ and σ² is m(t) = e^{μt + σ²t²/2}. Since the Yi's are independent, the mgf for U is given by
m_U(t) = E(e^{Ut}) = Π_{i=1}^{n} E(e^{a_i t Y_i}) = Π_{i=1}^{n} m(a_i t) = exp[μt Σ_i a_i + (t²σ²/2) Σ_i a_i²].
This is the mgf for a normal variable with mean μ Σ_i a_i and variance σ² Σ_i a_i².
6.42
The probability of interest is P(Y2 > Y1) = P(Y2 − Y1 > 0). By Theorem 6.3, the distribution of Y2 − Y1 is normal with μ = 4000 − 5000 = −1000 and σ² = 400² + 300² = 250,000. Thus, P(Y2 − Y1 > 0) = P(Z > (0 − (−1000))/√250,000) = P(Z > 2) = .0228.
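The normal tail probability can be reproduced with the standard identity P(Z > z) = ½·erfc(z/√2) (supplementary check, not part of the manual):

```python
import math

sigma = math.sqrt(400 ** 2 + 300 ** 2)   # sqrt(250,000) = 500
z = (0 - (-1000)) / sigma                # standardized value, 2.0
p = 0.5 * math.erfc(z / math.sqrt(2))    # P(Z > 2)
print(round(p, 4))  # 0.0228
```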
6.43
a. From Ex. 6.41, Ȳ has a normal distribution with mean μ and variance σ²/n.
b. For the given values, Ȳ has a normal distribution with variance σ²/n = 16/25. Thus, the standard deviation is 4/5, so that P(|Ȳ − μ| ≤ 1) = P(−1 ≤ Ȳ − μ ≤ 1) = P(−1.25 ≤ Z ≤ 1.25) = .7888.
c. Similar to the above, the probabilities are .8664, .9544, .9756. So, as the sample size increases, so does the probability that |Ȳ − μ| ≤ 1.
6.44
The total weight of the watermelons in the packing container is given by U = Σ_{i=1}^{n} Yi, so by Theorem 6.3, U has a normal distribution with mean 15n and variance 4n. We require that
.05 = P(U > 140) = P(Z > (140 − 15n)/(2√n)).
Thus, (140 − 15n)/(2√n) = z_.05 = 1.645. Solving this nonlinear expression for n, we see that n ≈ 8.687. Therefore, the maximum number of watermelons that should be put in the container is 8 (note that with this value of n, we have P(U > 140) = .0002).
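The nonlinear equation (140 − 15n)/(2√n) = 1.645 can be solved by bisection, since the left side decreases in n over the relevant range (supplementary sketch, not part of the manual):

```python
import math

def z_of(n):
    # standardized threshold (140 - 15n) / (2 sqrt(n)); decreasing in n here
    return (140 - 15 * n) / (2 * math.sqrt(n))

lo, hi = 1.0, 9.0        # z_of(1) > 1.645 > z_of(9)
for _ in range(60):
    mid = (lo + hi) / 2
    if z_of(mid) > 1.645:
        lo = mid
    else:
        hi = mid
n_star = (lo + hi) / 2
print(round(n_star, 3))  # ~ 8.687, so at most 8 whole watermelons
```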
6.45
By Theorem 6.3 we have that U = 100 + 7Y1 + 3Y2 is a normal random variable with mean μ = 100 + 7(10) + 3(4) = 182 and variance σ² = 49(.5)² + 9(.2)² = 12.61. We require a value c such that P(U > c) = P(Z > (c − 182)/√12.61) = .01. So, (c − 182)/√12.61 = 2.33 and c = $190.27.
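The arithmetic is direct: c = μ + 2.33σ (supplementary check, not part of the manual).

```python
import math

mu = 100 + 7 * 10 + 3 * 4              # 182
var = 49 * 0.5 ** 2 + 9 * 0.2 ** 2     # 12.61
c = mu + 2.33 * math.sqrt(var)
print(round(c, 2))  # 190.27
```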
6.46
The mgf for W is m_W(t) = E(e^{Wt}) = E(e^{(2Y/β)t}) = m_Y(2t/β) = (1 − 2t)^{−n/2}. This is the mgf for a chi-square variable with n degrees of freedom.
6.47
By Ex. 6.46, U = 2Y/4.2 has a chi–square distribution with ν = 7. So, by Table III, P(Y > 33.627) = P(U > 2(33.627)/4.2) = P(U > 16.0128) = .025.
6.48
From Ex. 6.40, we know that V = Y1² + Y2² has a chi-square distribution with ν = 2. The density function for V is f_V(v) = (1/2)e^{−v/2}, v ≥ 0. The distribution function of U = √V is F_U(u) = P(U ≤ u) = P(V ≤ u²) = F_V(u²), so that f_U(u) = F′_U(u) = ue^{−u²/2}, u ≥ 0. A sharp observer would note that this is a Weibull density with shape parameter 2 and scale 2.
6.49
The mgfs for Y1 and Y2 are, respectively, m_Y1(t) = [1 − p + pe^t]^{n1} and m_Y2(t) = [1 − p + pe^t]^{n2}. Since Y1 and Y2 are independent, the mgf for Y1 + Y2 is m_Y1(t)·m_Y2(t) = [1 − p + pe^t]^{n1+n2}. This is the mgf of a binomial with n1 + n2 trials and success probability p.
6.50
The mgf for Y is m_Y(t) = [1 − p + pe^t]^n. Now, define X = n − Y. The mgf for X is
m_X(t) = E(e^{tX}) = E(e^{t(n−Y)}) = e^{tn}m_Y(−t) = [p + (1 − p)e^t]^n.
This is the mgf for a binomial with n trials and "success" probability 1 − p. Note that the random variable X = # of failures observed in the experiment.
6.51
From Ex. 6.50, the distribution of n2 – Y2 is binomial with n2 trials and “success” probability 1 – .8 = .2. Thus, by Ex. 6.49, the distribution of Y1 + (n2 – Y2) is binomial with n1 + n2 trials and success probability p = .2.
6.52
The mgfs for Y1 and Y2 are, respectively, m_Y1(t) = e^{λ1(e^t − 1)} and m_Y2(t) = e^{λ2(e^t − 1)}.
a. Since Y1 and Y2 are independent, the mgf for Y1 + Y2 is m_Y1(t)·m_Y2(t) = e^{(λ1 + λ2)(e^t − 1)}. This is the mgf of a Poisson with mean λ1 + λ2.
b. From Ex. 5.39, the distribution is binomial with m trials and p = λ1/(λ1 + λ2).
6.53
.
The mgf for a binomial variable Yi with ni trials and success probability pi is given by n mY i ( t ) = [1 − pi + pi e t ] ni . Thus, the mgf for U = ∑i=1 Yi is mU ( t ) = ∏i [1 − pi + pi e t ]n i . a. Let pi = p and ni = m for all i. Here, U is binomial with m(n) trials and success probability p. n b. Let pi = p. Here, U is binomial with ∑i =1 ni trials and success probability p. c. (Similar to Ex. 5.40) The cond. distribution is hypergeometric w/ r = ni, N = d. By definition,
∑n
i
.
Chapter 6: Functions of Random Variables
129 Instructor’s Solutions Manual
P (Y1 +Y 2 = k | ∑i = 1Y i ) = n
P (Y1 +Y2 =k , ∑Yi =m ) P ( ∑Yi =m )
n ⎞ ⎛ n 1+ n 2 ⎞⎛⎜ n⎟ ⎟⎟ ⎜⎜ i= 3 i ⎜ ⎝ k ⎠⎝ m− k ⎟⎠ ⎛ n n⎞ ⎜ i =1 i ⎟ ⎟ ⎜ ⎝ m ⎠
=
∑
n
P( Y1 +Y2 =k, Y =m −k) i=3 i P (∑ Yi =m )
=
∑ i= 3 Yi= m− k ) n
P( Y1+ Y2= k ) P(
P( ∑ Yi =m)
∑
=
, which is hypergeometric with r = n1 + n2.
∑
e. No, the mgf for U does not simplify into a recognizable form. 6.54
a. The mgf for U = Σ_{i=1}^{n} Yi is m_U(t) = e^{(e^t − 1)Σ_i λi}, which is recognized as the mgf for a Poisson with mean Σ_i λi.
b. This is similar to Ex. 6.52. The distribution is binomial with m trials and p = λ1/Σ_i λi.
c. Following the same steps as in part d of Ex. 6.53, it is easily shown that the conditional distribution is binomial with m trials and success probability (λ1 + λ2)/Σ_i λi.
6.55
Let Y = Y1 + Y2. Then, by Ex. 6.52, Y is Poisson with mean 7 + 7 = 14. Thus, P(Y ≥ 20) = 1 – P(Y ≤ 19) = .077.
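The tail probability follows from the exact Poisson(14) cumulative sum (supplementary check, not part of the manual):

```python
import math

lam = 14.0
term = math.exp(-lam)        # P(Y = 0)
cdf = term
for k in range(1, 20):       # accumulate P(Y <= 19)
    term *= lam / k          # recursive Poisson term p_k = p_{k-1} * lam / k
    cdf += term
tail = 1 - cdf               # P(Y >= 20)
print(round(tail, 3))  # 0.077
```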
6.56
Let U = total service time for two cars. Similar to Ex. 6.13, U has a gamma distribution with α = 2, β = 1/2. Then,
P(U > 1.5) = ∫_{1.5}^∞ 4ue^{−2u} du = .1991.
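The integral has a closed form, since an antiderivative of 4ue^{−2u} is −(2u + 1)e^{−2u}, giving P(U > a) = (2a + 1)e^{−2a} (supplementary check, not part of the manual):

```python
import math

# P(U > a) for the gamma(2, 1/2) density f_U(u) = 4u e^{-2u}
a = 1.5
tail = (2 * a + 1) * math.exp(-2 * a)   # = 4 e^{-3}
print(round(tail, 4))  # 0.1991
```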
6.57
...