S&DS 241: Probability Theory with Applications
Homework 5, due Oct 7, 2018 in class
Prof. Yihong Wu
Solutions prepared by Ganlin Song and Yihong Wu.

1. (a) Label the players as 1, 2, . . . , 100, and let Ij be the indicator of player j having the same opponent twice. We have P(I1 = 1) = 99/99² = 1/99. Then by symmetry, linearity, and the fundamental bridge, E(X) = 100 E(I1) = 100 P(I1 = 1) = 100/99.

(b) The possible values of X are 0, 2, 4, . . . , 100; X must be even since if Alice plays the same opponent twice, say Bob, then Bob also plays the same opponent twice. A Poisson distribution has possible values 0, 1, 2, 3, . . . , which does not make sense as an approximation for an r.v. that must be even.

(c) Let G be the number of games in the second round such that the same pair of opponents played each other in the first round; note that G = X/2. Let J1, . . . , J50 be indicator r.v.s, where Ji is the indicator of the ith game in round 2 having the same pair as a round 1 game, with respect to a pre-determined way to order games (e.g., in increasing order of the smaller of the two player IDs). A Poisson approximation for G does make sense since the Ji are weakly dependent and P(Ji = 1) = 1/99 is small. So G is approximately Poi(50/99). This gives P(X = 0) = P(G = 0) ≈ e^{−50/99} ≈ 0.6035 and P(X = 2) = P(G = 1) ≈ e^{−50/99} (50/99) ≈ 0.3048.
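As an optional sanity check (not part of the original solution), the short Python sketch below simulates two rounds of uniformly random pairings of 100 players and estimates E(X) and P(X = 0); the pairing model and all names are illustrative assumptions. The estimates should land near 100/99 ≈ 1.01 and e^{−50/99} ≈ 0.60.

```python
import random

def random_pairing(n=100):
    """One round of play: a uniformly random perfect matching of n players."""
    players = list(range(n))
    random.shuffle(players)
    match = {}
    for a, b in zip(players[0::2], players[1::2]):
        match[a], match[b] = b, a
    return match

def simulate(trials=20_000, n=100):
    total_x, zeros = 0, 0
    for _ in range(trials):
        r1, r2 = random_pairing(n), random_pairing(n)
        x = sum(r1[p] == r2[p] for p in range(n))  # players meeting the same opponent twice
        total_x += x
        zeros += (x == 0)
    return total_x / trials, zeros / trials

mean_x, p_zero = simulate()
print("E(X)   estimate:", mean_x, " exact:", 100 / 99)
print("P(X=0) estimate:", p_zero, " Poisson approx:", 0.6035)
```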

2. (a) Each box has a chance of 1/n of containing the first coupon, and 1 − 1/n of containing some other coupon. So by independence,

P{first coupon is absent in all k boxes} = (1 − 1/n)^k.

(b) By the union bound,

P{some coupon is absent in all k boxes} ≤ P{first coupon is absent in all k boxes} + · · · + P{nth coupon is absent in all k boxes} = n(1 − 1/n)^k.

Consequently,

P{all coupons are collected} = 1 − P{some coupon is absent} ≥ 1 − n(1 − 1/n)^k.

(c) Using the above bound, we want

1 − n(1 − 1/n)^k ≥ 0.99,

that is,

k ≥ log(100n) / log(n/(n − 1)),

which, when n = 20, evaluates to 148.2. Hence for 20 coupons, buying 149 boxes suffices to guarantee that the probability of completing the collection is at least 99%.
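A quick numeric check (an addition, not part of the original solution): the sketch below evaluates the bound k ≥ log(100n)/log(n/(n − 1)) for n = 20 and then estimates by simulation the probability of completing the collection with 149 boxes; since the union bound is conservative, the estimate should come out at least 0.99.

```python
import math
import random

n = 20
k_bound = math.log(100 * n) / math.log(n / (n - 1))  # = 148.2... for n = 20
k = math.ceil(k_bound)
print("bound:", round(k_bound, 1), "-> buy", k, "boxes")

# Monte Carlo estimate of P(all n coupons collected) when buying k boxes
trials = 50_000
complete = sum(len({random.randrange(n) for _ in range(k)}) == n for _ in range(trials))
print("estimated completion probability:", complete / trials)
```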

3. (a) Using the law of total probability,

\[
\begin{aligned}
P(Y = k) &= \sum_{i=k}^{\infty} P(Y = k \mid X = i)\, P(X = i)
= \sum_{i=k}^{\infty} \binom{i}{k} 2^{-i}\, e^{-\lambda} \frac{\lambda^i}{i!} \\
&= \sum_{i=k}^{\infty} \frac{i!}{k!\,(i-k)!}\, \frac{(\lambda/2)^i}{i!}\, e^{-\lambda}
= \frac{(\lambda/2)^k e^{-\lambda}}{k!} \sum_{i=k}^{\infty} \frac{(\lambda/2)^{i-k}}{(i-k)!} \\
&= \frac{(\lambda/2)^k e^{-\lambda/2}}{k!} \underbrace{\sum_{j=0}^{\infty} e^{-\lambda/2}\, \frac{(\lambda/2)^j}{j!}}_{=1}
= \frac{e^{-\lambda/2} (\lambda/2)^k}{k!},
\end{aligned}
\]

where we substituted j = i − k. Therefore Y ∼ Poi(λ/2). Similarly, Z ∼ Poi(λ/2).
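The identity derived above can also be checked numerically. The sketch below (an illustration, not part of the original solution) truncates the sum from the law of total probability and compares it with the Poi(λ/2) pmf; the value λ = 3 is an arbitrary test choice.

```python
import math

lam = 3.0  # arbitrary test value of lambda

def poisson_pmf(k, mu):
    # computed via logs to avoid large factorials
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

for k in range(6):
    # sum_{i >= k} P(Y = k | X = i) P(X = i), truncated at i = 200
    lhs = sum(math.comb(i, k) * 0.5**i * poisson_pmf(i, lam) for i in range(k, 200))
    rhs = poisson_pmf(k, lam / 2)  # claimed Poi(lambda/2) pmf
    print(k, f"{lhs:.10f}", f"{rhs:.10f}")
```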

(b) To show that Y and Z are independent, we need to prove that for any k, ℓ ≥ 0, P(Y = k, Z = ℓ) = P(Y = k) P(Z = ℓ). From the previous part we already know that P(Y = k) = e^{−λ/2}(λ/2)^k/k! and P(Z = ℓ) = e^{−λ/2}(λ/2)^ℓ/ℓ!. For the LHS,

\[
\begin{aligned}
P(Y = k, Z = \ell) &= P(X = k+\ell)\, \binom{k+\ell}{k}\, 2^{-(k+\ell)}
= \frac{\lambda^{k+\ell} e^{-\lambda}}{(k+\ell)!}\, \frac{(k+\ell)!}{k!\,\ell!}\, 2^{-(k+\ell)} \\
&= \frac{\lambda^{k+\ell} e^{-\lambda}}{k!\,\ell!}\, 2^{-(k+\ell)}
= \frac{e^{-\lambda/2}(\lambda/2)^{k}}{k!} \cdot \frac{e^{-\lambda/2}(\lambda/2)^{\ell}}{\ell!}
= P(Y = k)\, P(Z = \ell),
\end{aligned}
\]

and we are done.
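Independence can also be eyeballed by simulation. The sketch below (an optional illustration; the sampler and λ = 2 are assumptions) draws X ∼ Poi(λ), sends each of the X counts to Y or Z by a fair coin, matching the conditional distribution P(Y = k | X = i) = C(i, k) 2^{−i} used above, and compares the empirical joint pmf with the product of the two Poi(λ/2) pmfs.

```python
import math
import random

lam, trials = 2.0, 200_000  # arbitrary test values

def poisson_sample(mu):
    """Knuth's multiplication method; fine for small mu."""
    threshold, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

counts = {}
for _ in range(trials):
    x = poisson_sample(lam)
    y = sum(random.random() < 0.5 for _ in range(x))  # "heads" go to Y
    z = x - y                                         # "tails" go to Z
    counts[(y, z)] = counts.get((y, z), 0) + 1

def pois(k, mu):
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

for k in range(3):
    for l in range(3):
        emp = counts.get((k, l), 0) / trials
        prod = pois(k, lam / 2) * pois(l, lam / 2)
        print((k, l), f"empirical {emp:.4f}", f"product {prod:.4f}")
```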

4. For the following, let S, P, and H be the events that Debbie's luggage was lost at SFO, PHL, and HVN, respectively. Let L be the event that Debbie's luggage was lost.

(a) The probability that her luggage was not lost is the probability that it was not lost at SFO, not lost at PHL, and not lost at HVN. Since what happens at the three airports is mutually independent, this probability is

P(L^c) = P((S ∪ P ∪ H)^c) = P(S^c ∩ P^c ∩ H^c) = P(S^c) P(P^c) P(H^c) = (1 − 1/2)(1 − 1/3)(1 − 1/4) = (1/2)(2/3)(3/4).

Thus, the probability that her luggage was lost is P(L) = P(S ∪ P ∪ H) = 1 − P((S ∪ P ∪ H)^c) = 1 − (1/2)(2/3)(3/4) = 3/4.

(b) For Debbie's luggage to be lost at PHL, her luggage must not have been lost at SFO. Thus, the probability that her luggage was lost at PHL is P(L ∩ P) = P(S^c ∩ P) = (1 − 1/2)(1/3) = 1/6. By part (a) and Bayes' rule,

P(P | L) = P(L ∩ P)/P(L) = (1/6)/(3/4) = 2/9.

(c) In a similar fashion to part (b),

P(S | L) = P(L ∩ S)/P(L) = P(S)/P(L) = (1/2)/(3/4) = 2/3,

and

P(H | L) = P(L ∩ H)/P(L) = P((S ∪ P)^c ∩ H)/P(L) = (1/2)(2/3)(1/4)/(3/4) = 1/9.

Let N be the number of days until she recovers her lost luggage. Then P(N = 5) = P(S | L) = 2/3, P(N = 3) = P(P | L) = 2/9, and P(N = 1) = P(H | L) = 1/9. Thus,

E(N) = 5 · P(N = 5) + 3 · P(N = 3) + 1 · P(N = 1) = 30/9 + 6/9 + 1/9 = 37/9.
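The answers to parts (a)–(c) can be cross-checked with a small simulation, sketched below. This is not part of the original solution; it assumes the loss model implied above (independent loss events with probabilities 1/2, 1/3, 1/4 at SFO, PHL, HVN) and the recovery times of 5, 3, and 1 days used in part (c).

```python
import random

trials = 200_000
lost_at = []  # airport at which the luggage was lost, for the trials where it was lost
for _ in range(trials):
    if random.random() < 1 / 2:        # lost at SFO
        lost_at.append("SFO")
    elif random.random() < 1 / 3:      # made it to PHL, lost there
        lost_at.append("PHL")
    elif random.random() < 1 / 4:      # made it to HVN, lost there
        lost_at.append("HVN")

n_lost = len(lost_at)
print("P(L)      estimate:", n_lost / trials, " exact:", 3 / 4)
for airport, exact in [("SFO", 2 / 3), ("PHL", 2 / 9), ("HVN", 1 / 9)]:
    print(f"P({airport}|L) estimate:", lost_at.count(airport) / n_lost, " exact:", round(exact, 4))

days = {"SFO": 5, "PHL": 3, "HVN": 1}  # recovery times from part (c)
print("E(N)      estimate:", sum(days[a] for a in lost_at) / n_lost, " exact:", 37 / 9)
```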

5. (a) x = 0 means he doesn't have any money, so it is impossible for him to win any money, i.e. F(0) = 0. x = 1 means he already has $1, so he doesn't need to play any games to win more money, i.e. F(1) = 1.

(b) When x < 1/2, he bets all of his $x. By the law of total probability,

F(x) = P{reach $1 with $x}
     = P{reach $1 with $x | win 1st bet} P{win 1st bet} + P{reach $1 with $x | lose 1st bet} P{lose 1st bet}
     = P{reach $1 with $2x} P{win 1st bet} + P{reach $1 with $0} P{lose 1st bet}
     = F(2x) · p + F(0) · q = F(2x) · p.

When x ≥ 1/2, he bets just $(1 − x). By the law of total probability,

F(x) = P{reach $1 with $x}
     = P{reach $1 with $x | win 1st bet} P{win 1st bet} + P{reach $1 with $x | lose 1st bet} P{lose 1st bet}
     = P{reach $1 with $1} P{win 1st bet} + P{reach $1 with $(2x − 1)} P{lose 1st bet}
     = F(1) · p + F(2x − 1) · q = p + q · F(2x − 1).
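The recursion in part (b) is easy to turn into code. The following sketch (an illustration; p = 0.4 is an arbitrary test value) evaluates F at dyadic rationals, where the doubling map terminates, and can be used to confirm the values computed in part (c) below.

```python
from fractions import Fraction

def F(x, p):
    """Bold-play success probability from the recursion in part (b).
    Use a Fraction with a power-of-two denominator so the recursion terminates."""
    q = 1 - p
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if x < Fraction(1, 2):
        return p * F(2 * x, p)          # bet everything: win -> 2x, lose -> 0
    return p + q * F(2 * x - 1, p)      # bet 1 - x: win -> 1, lose -> 2x - 1

p = 0.4  # arbitrary test value
q = 1 - p
print(F(Fraction(1, 2), p), "should equal p       =", p)
print(F(Fraction(3, 4), p), "should equal p + qp  =", p + q * p)
print(F(Fraction(5, 8), p), "should equal p + p^2 q =", p + p**2 * q)
```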


(c) Using the equations in (a) and (b), we have

F(1/2) = p F(1) = p,
F(3/4) = p + q F(2 · 3/4 − 1) = p + q F(1/2) = p + qp,
F(5/8) = p + q F(1/4) = p + q (p · F(1/2)) = p + p²q.

(d) In a similar fashion, F(1/3) = p F(2/3) = p(p + q F(1/3)), from which we solve that F(1/3) = p²/(1 − pq).

(e) Timid play corresponds to the usual gambler's ruin problem discussed in Lecture 10. Here x = 3/4 and each bet is worth 1/4, so it is equivalent to playing with initial wealth k = 3 with the goal of ruining an opponent with initial wealth n − k = 1 (so n = 4). Thus, the winning probability is

\[
P(\text{win}) =
\begin{cases}
\dfrac{1 - (q/p)^3}{1 - (q/p)^4}, & p \neq \tfrac{1}{2}, \\[2ex]
\dfrac{3}{4}, & p = \tfrac{1}{2}.
\end{cases}
\]

For bold play, the winning probability is F(3/4) = p + qp, computed above. The graph is shown below. Indeed, for unfavorable games (p < 1/2), bold play is strictly better.

Figure 1: Winning probabilities of timid play (red) and bold play (blue) as a function of p.
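The plot itself is not reproduced here, but the comparison it shows can be tabulated with the short sketch below (an addition for illustration): timid play uses the gambler's ruin formula with k = 3 and n = 4, and bold play uses F(3/4) = p + qp.

```python
def timid(p):
    """Gambler's ruin with initial wealth k = 3 out of n = 4 (stakes of 1/4)."""
    if abs(p - 0.5) < 1e-12:
        return 3 / 4
    r = (1 - p) / p
    return (1 - r**3) / (1 - r**4)

def bold(p):
    """Bold play: F(3/4) = p + qp."""
    return p + p * (1 - p)

for p in [0.1, 0.25, 0.4, 0.5, 0.6, 0.75, 0.9]:
    print(f"p = {p:0.2f}   timid = {timid(p):0.4f}   bold = {bold(p):0.4f}")
```

For p < 1/2 the bold column dominates, matching the figure; at p = 1/2 the two agree (both 3/4), and for p > 1/2 the ordering flips.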
