
Math 302, Assignment 9

Due April 3 (Wednesday!)

1. Let X and Y be two independent uniform random variables on (0, 1). (a) Using the convolution formula, find the p.d.f. f_Z(z) of the random variable Z = X + Y, and graph it. (b) What is the moment generating function of Z?

Solutions: (a) The convolution formula gives us

f_Z(z) = \int_{-\infty}^{\infty} f(x) f(z − x) dx,

and note that the interval on which the integrand is nonzero depends on z, since we must have 0 < x < 1 and 0 < z − x < 1, or equivalently: 0 < x < 1 and z − 1 < x < z. Thus we integrate over the intersection interval I = (0, 1) ∩ (z − 1, z), that is, we have

f_Z(z) = \int_I 1 dx.

First case: If z ≤ 0 or z ≥ 2, then I = ∅, so f_Z(z) = 0.

Second case: If 0 < z < 1, then I = (0, z), and f_Z(z) = \int_0^z 1 dx = z.

Third case: If 1 ≤ z < 2, then I = (z − 1, 1), and f_Z(z) = \int_{z−1}^1 1 dx = 2 − z.

The graph of f_Z looks like a triangle whose three vertices have coordinates (0, 0), (1, 1) and (2, 0).

(b) Let M(t) be the moment-generating function of Z; then, by independence of X and Y, M(t) = M_X(t) M_Y(t) = M_X(t)^2, since X and Y are both uniform on (0, 1). On the other hand,

M_X(t) = \int_0^1 e^{tx} dx = e^{tx}/t \big|_0^1 = (e^t − 1)/t.

Thus,

M_Z(t) = (e^t − 1)^2 / t^2.
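
As an optional sanity check (not part of the original assignment), the short Python/NumPy sketch below compares a Monte Carlo estimate of the density of Z = X + Y with the triangular p.d.f. found above, and evaluates the m.g.f. formula at one value of t. The sample size and evaluation points are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.uniform(0, 1, 200_000) + rng.uniform(0, 1, 200_000)  # samples of Z = X + Y

    # Histogram-style density estimate vs. the triangular p.d.f. derived above
    for z0 in (0.25, 0.75, 1.0, 1.5):
        h = 0.01
        empirical = np.mean(np.abs(z - z0) < h) / (2 * h)  # fraction of samples near z0, per unit length
        exact = z0 if z0 < 1 else 2 - z0                   # f_Z(z) = z on (0, 1), 2 - z on [1, 2)
        print(f"f_Z({z0}): simulated ~ {empirical:.3f}, exact = {exact:.3f}")

    # M.g.f. check at t = 1: E[e^{tZ}] should be close to ((e^t - 1)/t)^2
    t = 1.0
    print(np.mean(np.exp(t * z)), ((np.exp(t) - 1) / t) ** 2)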

2. Suppose that X has moment generating function M_X(t) = (1/4) e^{−3t} + 1/2 + (1/4) e^t.

(a) Find the mean and variance of X by differentiating the m.g.f. above.
(b) Find the p.m.f. of X. Use your expression for the p.m.f. to check your answers from part (a).

Solutions: (a) We have

µ = M_X'(0) = −3/4 + 1/4 = −1/2,
σ² = M_X''(0) − µ² = 9/4 + 1/4 − 1/4 = 9/4.

(b) By looking at the m.g.f., we recognize that X takes the values 0, −3, 1, with P(X = 0) = 1/2, P(X = −3) = 1/4, P(X = 1) = 1/4. Therefore, we can calculate again

µ = (1/2)·0 + (1/4)·(−3) + (1/4)·1 = −1/2,
σ² = (1/2)·0² + (1/4)·(−3)² + (1/4)·1² − 1/4 = 9/4.
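
A quick numerical cross-check, added here for illustration and assuming NumPy is available: compute the mean and variance directly from the p.m.f. found in part (b), and compare with finite-difference approximations of M_X'(0) and M_X''(0).

    import numpy as np

    # P.m.f. read off the m.g.f.: values 0, -3, 1 with probabilities 1/2, 1/4, 1/4
    vals = np.array([0.0, -3.0, 1.0])
    probs = np.array([0.5, 0.25, 0.25])

    mean = np.sum(vals * probs)                      # -> -0.5
    var = np.sum(vals**2 * probs) - mean**2          # -> 2.25 = 9/4

    # Numerical derivatives of M_X(t) = (1/4)e^{-3t} + 1/2 + (1/4)e^t at t = 0
    M = lambda t: 0.25 * np.exp(-3 * t) + 0.5 + 0.25 * np.exp(t)
    h = 1e-5
    M1 = (M(h) - M(-h)) / (2 * h)                    # ~ M_X'(0)
    M2 = (M(h) - 2 * M(0) + M(-h)) / h**2            # ~ M_X''(0)
    print(mean, var, M1, M2 - M1**2)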

3. Let X be a discrete random variable taking values 0, 1, 2, and let Xn, n ≥ 1, be a sequence of discrete random variables taking values 0, 1/2, 1, 3/2, 2, 5/2. The joint p.m.f. of Xn and X is

Table 1: The p.m.f. of (Xn, X)

Xn ↓  X →       0              1              2
0               (n − 1)/(2n)   0              1/(8n)
1/2             1/(3n)         0              0
1               0              (n − 1)/(4n)   0
3/2             1/(6n)         1/(4n)         0
2               0              0              (n − 1)/(4n)
5/2             0              0              1/(8n)

(a) Calculate the p.m.f. of X.
(b) Show that Xn → X in probability. Remark: This illustrates the fact that convergence in probability is associated to concentration of the joint p.m.f. on the diagonal.
(c) Show that Xn → X in distribution. Remark: Note that you only need to know the marginal p.m.f.'s of Xn and X to prove this fact, and the dependence between Xn and X (as quantified by the joint p.m.f.) is not needed.

Solutions: (a) Clearly, P(X = 0) = 1/2, P(X = 1) = P(X = 2) = 1/4.

(b) We have to show that P(|Xn − X| ≥ ε) → 0 as n → ∞ for any ε > 0. Indeed,

P(|Xn − X| ≥ ε) ≤ P(Xn ≠ X) = 1/(8n) + 1/(4n) + 1/(8n) + 1/(3n) + 1/(6n) = 1/n → 0

as n → ∞.

(c) The p.m.f. of Xn is

P(Xn = 0) = 1/2 − 3/(8n), P(Xn = 1/2) = 1/(3n), P(Xn = 1) = 1/4 − 1/(4n),
P(Xn = 3/2) = 5/(12n), P(Xn = 2) = 1/4 − 1/(4n), P(Xn = 5/2) = 1/(8n).

As n → ∞, this goes to

P(Xn = 0) → 1/2, P(Xn = 1/2) → 0, P(Xn = 1) → 1/4, P(Xn = 3/2) → 0, P(Xn = 2) → 1/4, P(Xn = 5/2) → 0,

which is the same as the p.m.f. of X. Therefore, Xn → X in distribution. Note: For discrete RV's, convergence of the p.m.f.'s is equivalent to convergence of the c.d.f.'s (we never used c.d.f.'s for discrete RV's). Therefore, convergence in distribution for discrete RV's may be defined as convergence of the p.m.f.'s.
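
The sketch below, not part of the original solution, rebuilds Table 1 with exact fractions and verifies the two facts used above: the column sums give the p.m.f. of X, and the off-diagonal mass P(Xn ≠ X) equals 1/n. The helper name joint_pmf is just an illustrative choice.

    from fractions import Fraction as F

    def joint_pmf(n):
        # Joint p.m.f. of (Xn, X) from Table 1: rows Xn in {0, 1/2, 1, 3/2, 2, 5/2}, columns X in {0, 1, 2}
        n = F(n)
        return {
            F(0):    {0: (n - 1) / (2 * n), 1: F(0),              2: F(1, 8) / n},
            F(1, 2): {0: F(1, 3) / n,       1: F(0),              2: F(0)},
            F(1):    {0: F(0),              1: (n - 1) / (4 * n), 2: F(0)},
            F(3, 2): {0: F(1, 6) / n,       1: F(1, 4) / n,       2: F(0)},
            F(2):    {0: F(0),              1: F(0),              2: (n - 1) / (4 * n)},
            F(5, 2): {0: F(0),              1: F(0),              2: F(1, 8) / n},
        }

    n = 10
    pmf = joint_pmf(n)
    # Marginal of X (column sums) should be 1/2, 1/4, 1/4 for every n
    print([sum(row[x] for row in pmf.values()) for x in (0, 1, 2)])
    # Off-diagonal mass P(Xn != X) should equal 1/n
    print(sum(p for xn, row in pmf.items() for x, p in row.items() if xn != x))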


4. Consider the sample space S = {1, 2, 3, . . .}, and assume that outcomes have the probabilities P({i}) = 2^{−i}. For any n ≥ 0, define the discrete random variable Xn : S → {0, . . . , n} by Xn(i) = i mod (n + 1), where mod means "modulo".

(a) Show that Xn converges in probability to the "identity" random variable X, defined by X(i) = i.
(b) Show that Xn converges in distribution to the Geom(1/2) random variable (e.g. to the time of the first Head in a sequence of fair coin flips).

Solutions: (a) The random variable Yn = Xn − X satisfies Yn(i) = 0 if i ≤ n. Therefore, for any ε > 0,

P(|Yn| > ε) ≤ P(i > n) = \sum_{i > n} 2^{−i} = 2^{−n},

where in the last step we used the geometric series. Since 2^{−n} goes to zero as n → ∞, Xn → X in probability.

(b) We compute the p.m.f. of Xn as follows: For j ∈ {0, . . . , n}, Xn(i) = j means

i ∈ {j, j + (n + 1), j + 2(n + 1), . . .}   if j ≠ 0,
i ∈ {n + 1, 2(n + 1), . . .}               if j = 0.

Therefore, for j ≠ 0,

P(Xn = j) = \sum_{i ∈ {j, j+(n+1), j+2(n+1), . . .}} 2^{−i} = 2^{−j} \sum_{k ≥ 0} 2^{−k(n+1)} = 2^{−j} · 1/(1 − 2^{−(n+1)}),

and

P(Xn = 0) = \sum_{i ∈ {n+1, 2(n+1), . . .}} 2^{−i} = \sum_{k ≥ 1} 2^{−k(n+1)} = 2^{−(n+1)} · 1/(1 − 2^{−(n+1)}).

Since 1/(1 − 2^{−(n+1)}) → 1 as n → ∞, we have that

P(Xn = j) → 2^{−j}   (j ≠ 0),   P(Xn = 0) → 0,

and this is the p.m.f. of a Geom(1/2) RV. Thus, Xn → Geom(1/2) in distribution.
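
For illustration only (assuming NumPy; the truncation level i_max of the sample space is an arbitrary choice), one can tabulate the p.m.f. of Xn for a few values of n and watch it approach the Geom(1/2) probabilities 2^{−j}:

    import numpy as np

    def pmf_Xn(n, i_max=200):
        # P.m.f. of Xn(i) = i mod (n+1) under P({i}) = 2^{-i}, truncating the sample space at i_max
        p = np.zeros(n + 1)
        for i in range(1, i_max + 1):
            p[i % (n + 1)] += 2.0 ** (-i)
        return p

    for n in (4, 10, 30):
        p = pmf_Xn(n)
        geom = [2.0 ** (-j) for j in range(1, 4)]   # Geom(1/2): P(j) = 2^{-j} for j = 1, 2, 3
        print(n, p[0], p[1:4], geom)
    # As n grows, P(Xn = 0) -> 0 and P(Xn = j) -> 2^{-j}, matching the limit derived above.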

5. Let Xn ∼ Bin(n, λ/n) for some λ > 0. Show that the moment generating function M_{Xn}(t) converges as n → ∞, and show that the limit is the m.g.f. of the Poisson(λ) random variable. Remark: Since we have proven earlier in the course that the p.m.f. of Xn converges to the p.m.f. of Poisson(λ), this illustrates the fact that convergence of the m.g.f. is equivalent to convergence of the distribution.

Solutions: According to the lecture, the m.g.f. of Xn is

M_{Xn}(t) = (1 + (λ/n)(e^t − 1))^n.

Since (1 + x/n)^n → e^x, this converges to e^{λ(e^t − 1)}, and so we must show that the m.g.f. of Poisson(λ) is given by this expression. By definition, and by the series for the exponential,

M_X(t) = \sum_{k ≥ 0} e^{tk} (λ^k / k!) e^{−λ} = e^{−λ} \sum_{k ≥ 0} (λ e^t)^k / k! = e^{λ(e^t − 1)},

as was to be shown.
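
As a numerical illustration (not part of the original solution; the values of λ and t below are arbitrary), the binomial m.g.f. can be tabulated for increasing n and compared with the Poisson m.g.f.:

    import numpy as np

    lam, t = 2.0, 0.5
    poisson_mgf = np.exp(lam * (np.exp(t) - 1))

    for n in (10, 100, 1000, 10000):
        binom_mgf = (1 + (lam / n) * (np.exp(t) - 1)) ** n   # m.g.f. of Bin(n, lam/n)
        print(n, binom_mgf, poisson_mgf)
    # The binomial m.g.f. approaches exp(lam * (e^t - 1)) as n grows.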

6. You are given two coins that are optically indistinguishable, but one of them is fair, while the other will flip "Head" 51% of the time. To find out which is the fair one, you choose the following strategy: Pick one of the coins randomly, flip it n times, and record µ̄_n = (1/n) × (the number of heads flipped). If µ̄_n is closer to 50% than to 51%, you decide that the coin is the fair coin; otherwise, you decide that it is the biased coin.

(a) Use Chebyshev's inequality to find a value for n such that you can be 90% sure that this procedure will identify the fair coin correctly.
(b) Now use the central limit theorem and the Φ table to find a better (smaller) value for n. How much smaller is your new n?

Solutions: (a) If you happen to pick the fair coin, which has variance p(1 − p) = 1/4, then, by Chebyshev,

P(false classification | pick fair coin) = P(µ̄_n ≥ 0.505) ≤ P(|µ̄_n − 0.5| ≥ 0.005) ≤ (1/4) / (n · 0.005²).

Similarly, if you happen to pick the biased coin, which has variance p(1 − p) = 0.2499, we get

P(false classification | pick biased coin) = P(µ̄_n ≤ 0.505) ≤ P(|µ̄_n − 0.51| ≥ 0.005) ≤ 0.2499 / (n · 0.005²).

Therefore, by the law of total probability,

P(falsely decide which coin is fair) = P(false classification | pick fair coin) · 1/2 + P(false classification | pick biased coin) · 1/2 ≤ 0.24995 / (n · 0.005²).

For this to be at most 10%, we need to choose n ≥ 0.24995 / (0.1 · 0.005²) = 99980.

(b) Using the same reasoning, we instead use the CLT to approximate

P(false classification | pick fair coin) ≈ 1 − Φ(0.005 √n / √0.25),
P(false classification | pick biased coin) ≈ Φ(−0.005 √n / √0.24995) = 1 − Φ(0.005 √n / √0.24995).

Therefore,

P(falsely decide which coin is fair) ≈ 1 − (1/2) Φ(0.005 √n / √0.25) − (1/2) Φ(0.005 √n / √0.24995) ≤ 1 − Φ(0.005 √n / √0.25).

For this to be less than 10%, we need Φ(0.005 √n / √0.25) ≥ 90%, or 0.005 √n / √0.25 ≥ 1.29, which gives n ≥ 16641. So, if we assume that the CLT is accurate, we will be able to reliably guess the fair coin after far fewer flips than what Chebyshev suggests to us. The difference between the two methods will become even larger if we demand higher reliability (say 99% correct classification).
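
The two sample sizes can also be reproduced numerically. The sketch below is not part of the original solution and assumes SciPy is installed; it recomputes the Chebyshev and CLT values of n and then checks the actual misclassification probability at n = 16641 with exact binomial c.d.f.'s.

    from math import ceil, sqrt
    from scipy.stats import binom, norm   # assumes SciPy is available

    # Chebyshev bound from part (a): n >= 0.24995 / (0.1 * 0.005^2)
    n_cheb = ceil(0.24995 / (0.1 * 0.005**2))

    # CLT requirement from part (b): Phi(0.005 sqrt(n) / sqrt(0.25)) >= 0.90
    z90 = norm.ppf(0.90)                               # ~ 1.2816; the table value 1.29 gives n = 16641
    n_clt = ceil((z90 * sqrt(0.25) / 0.005) ** 2)

    print(n_cheb, n_clt, n_cheb / n_clt)               # Chebyshev asks for roughly 6 times more flips

    # Sanity check of the error probability at the CLT sample size, using exact binomial c.d.f.'s
    n = 16641
    err_fair = 1 - binom.cdf(0.505 * n, n, 0.50)       # fair coin classified as biased
    err_bias = binom.cdf(0.505 * n, n, 0.51)           # biased coin classified as fair
    print(0.5 * err_fair + 0.5 * err_bias)             # roughly at the 10% target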


7.* Let X and Y be independent, standard normal random variables. What is the p.d.f. of X² + Y²?

Solutions: By independence, the joint p.d.f. of X and Y is

f(x, y) = (1/2π) e^{−(x² + y²)/2}.

For z ≥ 0 we obtain the c.d.f. by integrating using polar coordinates:

F_{X²+Y²}(z) = P(X² + Y² ≤ z)
            = \iint_{x² + y² ≤ z} (1/2π) e^{−(x² + y²)/2} dx dy
            = \int_0^{√z} \int_0^{2π} (r/2π) e^{−r²/2} dθ dr
            = \int_0^{√z} r e^{−r²/2} dr
            = −e^{−r²/2} \big|_0^{√z}
            = 1 − e^{−z/2}.

We clearly have F_{X²+Y²}(z) = 0 if z < 0, so

f_{X²+Y²}(z) = (1/2) e^{−z/2}  if z ≥ 0,   and   f_{X²+Y²}(z) = 0  if z < 0.

Therefore, X² + Y² ∼ Exp(1/2).

Solution 2: First we calculate the cumulative distribution function of X². Clearly F_{X²}(x) = 0 if x < 0; for x ≥ 0 we have

F_{X²}(x) = P(X² ≤ x) = P(−√x ≤ X ≤ √x) = Φ(√x) − Φ(−√x) = 2Φ(√x) − 1,

where Φ(x) = \int_{−∞}^x (1/√(2π)) e^{−y²/2} dy is the c.d.f. of X. Taking the derivative yields that the probability density function of X² is

f(x) = (1/√(2πx)) e^{−x/2}  if x > 0,   and   f(x) = 0  if x < 0.

Clearly the p.d.f. of Y² is the same function f. As X² and Y² are also independent, the p.d.f. of the sum is the convolution f ∗ f. Using the substitution y = x/z, for all z > 0 we have

f_{X²+Y²}(z) = (f ∗ f)(z) = \int_{−∞}^{∞} f(x) f(z − x) dx
            = \int_0^z (1/√(2πx)) e^{−x/2} · (1/√(2π(z − x))) e^{−(z−x)/2} dx
            = (e^{−z/2} / 2π) \int_0^z 1/√(x(z − x)) dx
            = (e^{−z/2} / 2π) \int_0^1 1/√(y(1 − y)) dy
            = C e^{−z/2},

where C > 0 is an absolute constant. Clearly f_{X²+Y²}(z) = 0 if z ≤ 0, so \int_0^∞ C e^{−z/2} dz = 1 yields C = 1/2. Thus

f_{X²+Y²}(z) = (1/2) e^{−z/2}  if z > 0,   and   f_{X²+Y²}(z) = 0  if z ≤ 0.

Observe that X² + Y² ∼ Exp(1/2).

Note that the antiderivative of 1/√(x(1 − x)) is 2 arcsin(√x) for 0 < x < 1, but we do not need this for our calculations.
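
As an optional numerical check (assuming NumPy), simulating X² + Y² and comparing its empirical c.d.f. with 1 − e^{−z/2} confirms the Exp(1/2) conclusion; the sample size and evaluation points are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(500_000), rng.standard_normal(500_000)
    s = x**2 + y**2

    # Empirical c.d.f. of X^2 + Y^2 vs. 1 - e^{-z/2} at a few points
    for z in (0.5, 1.0, 2.0, 4.0):
        print(z, np.mean(s <= z), 1 - np.exp(-z / 2))

    print(s.mean())   # an Exp(1/2) random variable has mean 1/(1/2) = 2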
