Hw10 Solutions

Fall 2018 STAT 241: Probability Theory with Applications
Due: Dec 5, 2018 in class
Prof. Yihong Wu
Solutions prepared by Dylan O'Connell and Yihong Wu.

1. (a) We prove this by induction.

   i. Base case: k = 2. Because $X_1$ and $X_2$ are continuous random variables, we have $P(X_1 = X_2) = 0$. Then, because $X_1$ and $X_2$ are identically distributed, by symmetry we have
   $$P(X_1 < X_2) = P(X_1 > X_2) = \frac{1}{2}.$$

   ii. Inductive hypothesis: k = n - 1. Assume
   $$P(X_1 < X_2 < \cdots < X_{n-1}) = \frac{1}{(n-1)!}.$$

   iii. Inductive step: k = n. Because the $X_i$ are iid, each is equally likely to be the largest of the $n$ variables, so
   $$P(X_n > \max\{X_1, \ldots, X_{n-1}\}) = \frac{1}{n},$$
   and this event depends only on which variable is largest, not on the relative order of $X_1, \ldots, X_{n-1}$. Hence
   $$P(X_1 < X_2 < \cdots < X_n) = P(X_1 < X_2 < \cdots < X_{n-1}) \cdot P(X_n > \max\{X_1, \ldots, X_{n-1}\}) = \frac{1}{(n-1)!} \cdot \frac{1}{n} = \frac{1}{n!}.$$

   (b) Note that the only possible way this event can occur is if the rolls are exactly $\{1, 2, 3, 4, 5, 6\}$ in that order (and that the outcomes of the rolls are independent). Hence
   $$P(X_1 < X_2 < \cdots < X_6) = P(X_1 = 1, X_2 = 2, \ldots, X_6 = 6) = \left(\frac{1}{6}\right)^6 \neq \frac{1}{6!}.$$
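As a quick sanity check (not part of the original solution), the continuous case in part (a) can be verified by Monte Carlo. The sketch below assumes iid Uniform(0,1) draws, but any continuous distribution works; the function name and trial count are illustrative.

import random

def prob_increasing(n, trials=200_000):
    # Estimate P(X1 < X2 < ... < Xn) for iid continuous draws.
    hits = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]  # Uniform(0,1) samples
        if all(xs[i] < xs[i + 1] for i in range(n - 1)):
            hits += 1
    return hits / trials

# For n = 4 the estimate should be close to 1/4! = 1/24, about 0.0417.
print(prob_increasing(4))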

2. (a) We compute the conditional probability by definition: for any $x, y \ge 0$ such that $x + y \le n$,
   $$P(X = x, Y = y) = \frac{\binom{n}{x}\binom{n-x}{y}\,4^{n-x-y}}{6^n}, \qquad P(X = x) = \frac{\binom{n}{x}\,5^{n-x}}{6^n}.$$
   Hence
   $$P(Y = y \mid X = x) = \frac{P(X = x, Y = y)}{P(X = x)} = \binom{n-x}{y}\frac{4^{n-x-y}}{5^{n-x}} = \binom{n-x}{y}\left(\frac{1}{5}\right)^y\left(\frac{4}{5}\right)^{n-x-y}.$$
   Thus, conditioned on $X = x$, $Y \sim \mathrm{Binom}(n - x, \frac{1}{5})$.

   The intuitive explanation is the following: if $X = x$ of the $n$ outcomes are 1's, then each of the remaining $n - x$ outcomes is uniformly distributed over $\{2, 3, 4, 5, 6\}$. Hence, given $X = x$, $Y$ is distributed as the number of successes in $n - x$ Bernoulli trials with success probability $1/5$. Thus, $Y \mid X = x \sim \mathrm{Binom}(n - x, \frac{1}{5})$.

   (b) From part (a), $E(Y \mid X) = \frac{n - X}{5}$, which is linear in $X$. In fact, this coincides with the best linear estimate we derived in class. Thus, in this case the best estimate turns out to be linear.
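The conditional law in part (a) is also easy to spot-check by simulation. This is a minimal sketch; the parameter choices n = 12, x = 3 are illustrative, not from the problem.

import random
from math import comb

def check_conditional(n=12, x=3, trials=300_000):
    # Empirical P(Y = y | X = x) from die rolls vs. the Binom(n-x, 1/5) pmf.
    counts, kept = {}, 0
    for _ in range(trials):
        rolls = [random.randint(1, 6) for _ in range(n)]
        if rolls.count(1) == x:  # keep only trials with exactly x ones
            y = rolls.count(2)
            counts[y] = counts.get(y, 0) + 1
            kept += 1
    for y in sorted(counts):
        empirical = counts[y] / kept
        theory = comb(n - x, y) * (1 / 5) ** y * (4 / 5) ** (n - x - y)
        print(y, round(empirical, 4), round(theory, 4))

check_conditional()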

3. (a) Because $X_1, X_2, \ldots, X_n$ are independent (each step has unit variance), we have
   $$\mathrm{Var}(S_m) = \sum_{i=1}^{m} \mathrm{Var}(X_i) = m, \qquad \mathrm{Var}(S_n) = \sum_{i=1}^{n} \mathrm{Var}(X_i) = n,$$
   and, since $m \le n$,
   $$\mathrm{Cov}(S_m, S_n) = \mathrm{Cov}\left(\sum_{i=1}^{m} X_i,\ \sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{m} \mathrm{Var}(X_i) = m.$$
   Therefore
   $$\rho(S_m, S_n) = \frac{\mathrm{Cov}(S_m, S_n)}{\sqrt{\mathrm{Var}(S_m)}\sqrt{\mathrm{Var}(S_n)}} = \frac{m}{\sqrt{mn}} = \sqrt{\frac{m}{n}}.$$
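A short simulation confirms the $\sqrt{m/n}$ correlation. The sketch below assumes a simple $\pm 1$ random walk (so each step indeed has unit variance); m = 25 and n = 100 are illustrative choices.

import random
from statistics import correlation  # available in Python 3.10+

def corr_Sm_Sn(m=25, n=100, trials=50_000):
    # Estimate rho(S_m, S_n); theory predicts sqrt(m/n) = 0.5 for these values.
    sm_samples, sn_samples = [], []
    for _ in range(trials):
        s, sm = 0, 0
        for i in range(1, n + 1):
            s += random.choice((-1, 1))
            if i == m:
                sm = s  # record the walk's position after m steps
        sm_samples.append(sm)
        sn_samples.append(s)
    return correlation(sm_samples, sn_samples)

print(corr_Sm_Sn())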

   (b) As $n$ goes to infinity, there are ever more steps of the random walk between $S_m$ and $S_n$, so the position after the first $m$ steps becomes less and less correlated with $S_n$; accordingly, $\rho(S_m, S_n) = \sqrt{m/n} \to 0$.

   (c) For the correlation coefficient of $S_m$ and $S_n - S_m$, note that
   $$\mathrm{Cov}(S_m, S_n - S_m) = \mathrm{Cov}\left(\sum_{i=1}^{m} X_i,\ \sum_{i=m+1}^{n} X_i\right) = 0.$$

   This final equality holds because the two sums share no terms of the $X_i$, hence are independent and thus uncorrelated. Therefore $\rho(S_m, S_n - S_m) = 0$.

4. (a) Let $E$ denote the desired expectation. The number of rolls to obtain the first six is distributed according to Geometric(1/6), with mean 6. Once the first six appears, the expected number of additional tosses is $\frac{1}{6} + \frac{5}{6}(1 + E)$, which arises from further conditioning on the outcome of the next roll: with probability $\frac{1}{6}$ it is a six and we are done; with probability $\frac{5}{6}$ we have spent one roll and must start over. Thus
   $$E = 6 + \frac{1}{6} + \frac{5}{6}(1 + E) = 7 + \frac{5}{6}E,$$
   and solving for $E$ yields $E = 42$.

   (b) Let $F$ denote the desired expectation. Conditioning on the outcome of the second roll, the expected number of additional steps after the first roll is $\frac{1}{6} + \frac{5}{6}F$ (according to whether the second roll matches the first). Thus
   $$F = 1 + \frac{1}{6} + \frac{5}{6}F,$$
   and solving for $F$ yields $F = 7$. (A simulation check of both answers appears after Problem 5 below.)

5. (a) Because the area of the parallelogram is 1, we have
   $$f_{X,Y}(x, y) = \begin{cases} 1 & 0 < y < 1,\ y < x < y + 1 \\ 0 & \text{otherwise} \end{cases}$$
   $$\int_0^x \ldots$$
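Returning to Problem 4, both waiting-time expectations are easy to confirm empirically. This is a minimal sketch, not part of the original solution; the trial count is illustrative.

import random

def avg_rolls_until_double_six(trials=100_000):
    # Expected number of rolls until two consecutive sixes; part (a) gives 42.
    total = 0
    for _ in range(trials):
        prev, count = None, 0
        while True:
            roll = random.randint(1, 6)
            count += 1
            if roll == 6 and prev == 6:
                break
            prev = roll
        total += count
    return total / trials

def avg_rolls_until_repeat(trials=100_000):
    # Expected number of rolls until a roll matches the one before it; part (b) gives 7.
    total = 0
    for _ in range(trials):
        prev, count = None, 0
        while True:
            roll = random.randint(1, 6)
            count += 1
            if roll == prev:
                break
            prev = roll
        total += count
    return total / trials

print(avg_rolls_until_double_six(), avg_rolls_until_repeat())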

