Homework Solution

Title: Homework Solution
Author: 国嵩 许
Course: Intro to Stochastic Processes
Institution: City University of Hong Kong



Description

Solution Manual of MA4546

February 25, 2019

This solution manual includes more exercises than the assigned homework.

Chapter 1

Ex. 1 — A pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Consider the following example. Toss a fair coin two times and define the following events:
•A: a head appears on the first toss;
•B: a head appears on the second toss;
•C: both tosses yield the same outcome.
Are the events A, B and C mutually independent? Are they pairwise independent?

Answer (Ex. 1) — It is easy to check that

P(A) = P(B) = P(C) = 1/2;  P(A ∩ B) = P(A ∩ C) = P(B ∩ C) = 1/4;  and P(A ∩ B ∩ C) = 1/4.

So A, B and C are pairwise independent because P(A ∩ B) = P(A)P(B), P(A ∩ C) = P(A)P(C), and P(B ∩ C) = P(B)P(C). But they are not mutually independent because P(A ∩ B ∩ C) = 1/4 ≠ 1/8 = P(A)P(B)P(C). (Refer to page 15 in the slides for the definition.)

**** Ex. 2 — If p and q are two pdfs defined on ℝ, prove that the Kullback–Leibler divergence (or "relative entropy") of q from p, which is defined as

D(p‖q) = ∫ p(x) log(p(x)/q(x)) dx,

is always non-negative.

Answer (Ex. 2) — φ(x) = -log(x) is a convex function. According to Jensen's inequality, E[φ(q(X)/p(X))] ≥ φ(E[q(X)/p(X)]). Thus we have

D(p‖q) = ∫ p(x) log(p(x)/q(x)) dx
       = ∫ p(x) φ(q(x)/p(x)) dx
       = E_p[φ(q/p)]
       ≥ φ(E_p[q/p])
       = -log(∫ (q(x)/p(x)) p(x) dx)
       = -log(∫ q(x) dx)
       = -log 1 = 0.
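The inequality can be spot-checked numerically. The following sketch (Python, with arbitrary discrete pmfs for illustration; the manual's own snippets use MATLAB) computes D(p‖q) for random pmf pairs and confirms non-negativity, with equality when p = q:

```python
import math
import random

def kl_divergence(p, q):
    """D(p||q) = sum_i p_i * log(p_i / q_i) for discrete pmfs."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def random_pmf(n, rng):
    w = [rng.random() + 1e-9 for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

rng = random.Random(0)
for _ in range(1000):
    p = random_pmf(5, rng)
    q = random_pmf(5, rng)
    assert kl_divergence(p, q) >= -1e-12   # Jensen: D(p||q) >= 0
assert abs(kl_divergence(p, p)) < 1e-12    # D(p||p) = 0
```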

Ex. 3 — If two r.v.s X and Y are independent, then p_{X|Y}(x|y) = p_X(x) and E(X|Y) = E(X).

Answer (Ex. 3) —

p_{X|Y}(x|y) = p_{X,Y}(x, y) / p_Y(y) = p_X(x) p_Y(y) / p_Y(y) = p_X(x),

E[X|Y] = ∫ x p_{X|Y}(x|y) dx = ∫ x p_X(x) dx = E(X).


Ex. 4 — If {X_1, X_2, ..., X_n} is mutually independent, then var(Σ_i X_i) = Σ_i var(X_i).

Answer (Ex. 4) — If {X_1, X_2, ..., X_n} is mutually independent, then E[X_i X_j] = E[X_i]E[X_j] for all i ≠ j.

var(Σ_i X_i) = E[(Σ_i X_i)²] - (E[Σ_i X_i])²
 = E[Σ_i X_i² + 2 Σ_{i<j} X_i X_j] - (Σ_i E[X_i])²
 = Σ_i E[X_i²] + 2 Σ_{i<j} E[X_i]E[X_j] - Σ_i (E[X_i])² - 2 Σ_{i<j} E[X_i]E[X_j]
 = Σ_i (E[X_i²] - (E[X_i])²)
 = Σ_i var(X_i).
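The identity can be verified exactly by enumeration for small discrete variables. A minimal sketch (Python; the three distributions are arbitrary illustrative choices):

```python
import itertools

# Three small independent discrete r.v.s, each given as {value: probability}.
# The particular distributions are arbitrary illustrative choices.
dists = [
    {0: 0.5, 1: 0.5},
    {-1: 0.3, 2: 0.7},
    {0: 0.2, 1: 0.3, 5: 0.5},
]

def var(pmf):
    m = sum(v * p for v, p in pmf.items())
    return sum(p * (v - m) ** 2 for v, p in pmf.items())

# Joint pmf of the sum, using independence: P(x1,x2,x3) = P(x1)P(x2)P(x3).
sum_pmf = {}
for combo in itertools.product(*(d.items() for d in dists)):
    s = sum(v for v, _ in combo)
    p = 1.0
    for _, pi in combo:
        p *= pi
    sum_pmf[s] = sum_pmf.get(s, 0.0) + p

lhs = var(sum_pmf)                   # var(X1 + X2 + X3)
rhs = sum(var(d) for d in dists)     # var(X1) + var(X2) + var(X3)
assert abs(lhs - rhs) < 1e-12
```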

…= 0 if j = 2. Similarly, P(Y_{n+1} = j | Y_n = 2, Y_{n-1} = i_{n-1}, ..., Y_1 = i_1) = P(X_{n+1} = j) if j ≠ 2, and 0 if j = 2.

•{Z_n} can be treated as 2-D points.

With the information of Z_n, the value of X_n is known, and thus the first component of Z_{n+1} = (X_n, X_{n+1})^T is determined as well. Since the {X_n} are iid,

P(Z_{n+1} = (j_1, j_2)^T | Z_n = (i_1, i_2)^T, Z_{n-1}, ..., Z_1) = P(X_{n+1} = j_2) δ(j_1, i_2),

where δ(x, y) is a delta function with value 1 if x = y and value 0 if x ≠ y. What's more, P(Z_{n+1} = (j_1, j_2)^T | Z_n = (i_1, i_2)^T) = P(X_{n+1} = j_2) δ(j_1, i_2) as well. Hence {Z_n} is indeed a Markov chain.

The state space is S = {s_1 = (1, 1)^T, s_2 = (1, -1)^T, s_3 = (-1, 1)^T, s_4 = (-1, -1)^T}. The transition matrix is therefore

P = [ 1-p   p    0    0
       0    0   1-p   p
      1-p   p    0    0
       0    0   1-p   p ]

where P_ij (i, j ∈ {1, 2, 3, 4}) means the probability of finding Z_{k+1} = s_j provided that Z_k = s_i.

Ex. 3 — (textbook p51, 2.13) Consider the following weather model. The weather normally behaves as in Example 2.3. However, when the cloudy spell lasts for two or more days, it continues to be cloudy for another day with probability 0.8 or turns rainy with probability 0.2. Develop a four-state DTMC model to describe this behaviour.

Answer (Ex. 3) — Denote the weather on day n by W_n, taking values in {s, c, r}. The extra condition is Pr(W_{n+1} = c | W_n = W_{n-1} = c) = 0.8, Pr(W_{n+1} = r | W_n = W_{n-1} = c) = 0.2, and Pr(W_{n+1} = s | W_n = W_{n-1} = c) = 0. However, the probability Pr(W_{n+1} | W_n = c, W_{n-1} ≠ c) follows the original transition matrix, i.e., Pr(W_{n+1} = s | W_n = c, W_{n-1} ≠ c) = 0.5, Pr(W_{n+1} = c | W_n = c, W_{n-1} ≠ c) = 0.2, Pr(W_{n+1} = r | W_n = c, W_{n-1} ≠ c) = 0.3. Apparently, the process {W_n} is no longer a DTMC, since the information W_{n-1} affects the distribution of W_{n+1}. We handle the special case of two successive cloudy days by splitting the original "cloudy" state of Example 2.3 into two states. Let X_n be the weather condition in Heavenly, defined as follows:

X_n = 1 if it is sunny on day n;
      2 if it is cloudy on day n but not on day n-1;
      3 if it is rainy on day n;
      4 if it is cloudy on both day n and day n-1.

Then X_n = 4 means W_n = W_{n-1} = c, and X_n = 2 means W_n = c, W_{n-1} ≠ c. It is easy to verify that Pr(X_n = i | X_{n-1}, X_{n-2}, ..., X_0) = Pr(X_n = i | X_{n-1}) holds for all i. We model X_n as a DTMC with transition matrix

P = [ 0.5  0.3  0.2   0
      0.5   0   0.3  0.2
      0.4  0.5  0.1   0
       0    0   0.2  0.8 ]

Note that if today is the first cloudy day, tomorrow has probability 0.2 of being cloudy again, in which case X_{n+1} goes to state 4.

Remark: If we do not impose the extra condition and simply split the one state c into the two states 2 and 4, we get the following transition matrix for {X_n}:

    [ 0.5  0.3  0.2   0
      0.5   0   0.3  0.2
      0.4  0.5  0.1   0
      0.5   0   0.3  0.2 ]

Note that the second row equals the fourth row in this matrix (a similar conclusion holds in general).

Ex. 4 — (p51, 2.16) A total of N balls are put in two urns, so that initially urn A has i balls and urn B has N - i balls. At each step, one ball is chosen at random from the N balls. If it is from urn A, it is moved to urn B, and vice versa. Let X_n be the number of balls in urn A after n steps. Show that {X_n, n ≥ 0} is a DTMC, assuming that the successive random drawings of the balls are independent. Display the transition probability matrix of the DTMC.

Answer (Ex. 4) —

X_n = X_{n-1} - 1 if the ball is chosen from urn A;
X_n = X_{n-1} + 1 if the ball is chosen from urn B,

with

P(X_n = X_{n-1} - 1) = X_{n-1}/N,  P(X_n = X_{n-1} + 1) = (N - X_{n-1})/N.

Since each draw of a ball is independent, P(X_n = i | X_{n-1}, ..., X_1) = P(X_n = i | X_{n-1}); thus {X_n, n ≥ 0} is a DTMC.

There are N + 1 states for X_n, S = {0, 1, 2, ..., N}, and

P = [  0      1        0          0     ...     0        0     0
      1/N     0    (N-1)/N        0     ...     0        0     0
       0     2/N       0      (N-2)/N   ...     0        0     0
       ⋮      ⋮        ⋮          ⋮      ⋱      ⋮        ⋮     ⋮
       0      0        0          0     ... (N-1)/N      0    1/N
       0      0        0          0     ...     0        1     0  ]

Note that here the boundary condition is of the type "reflection".
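The tridiagonal structure is easy to generate programmatically. A minimal sketch (Python; the function name is ours):

```python
def urn_transition_matrix(N):
    """Ehrenfest urn: from state i, move to i-1 w.p. i/N, to i+1 w.p. (N-i)/N."""
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        if i > 0:
            P[i][i - 1] = i / N          # a ball from urn A is drawn
        if i < N:
            P[i][i + 1] = (N - i) / N    # a ball from urn B is drawn
    return P

P = urn_transition_matrix(4)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # stochastic matrix
assert P[0][1] == 1.0 and P[4][3] == 1.0              # reflecting boundaries
```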

Ex. 5 — (p53, 2.10) Consider the weather model of Example 2.3. Compute the probability that once the weather becomes sunny, the sunny spell lasts for at least 3 days.

Answer (Ex. 5) — Define the event A: once the weather becomes sunny, the sunny spell lasts for at least 3 days. Then

P(A) = P(X_{n+2} = 1, X_{n+1} = 1 | X_n = 1)
     = P(X_{n+2} = 1 | X_{n+1} = 1, X_n = 1) P(X_{n+1} = 1 | X_n = 1)
     = P(X_{n+2} = 1 | X_{n+1} = 1) P(X_{n+1} = 1 | X_n = 1)
     = 0.5² = 0.25.
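A Monte Carlo check of this answer (Python sketch; it only uses the sunny self-transition probability 0.5 from the answer above, and the seed and trial count are arbitrary):

```python
import random

# Estimate P(sunny spell >= 3 days), which should equal 0.5**2 = 0.25.
rng = random.Random(42)
p_ss = 0.5                          # P(sunny tomorrow | sunny today)
trials = 200_000
long_spells = 0
for _ in range(trials):
    length = 1                      # the day the weather becomes sunny
    while rng.random() < p_ss:      # stays sunny one more day
        length += 1
    if length >= 3:
        long_spells += 1
estimate = long_spells / trials
assert abs(estimate - 0.25) < 0.01
```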


Ex. 6 — (p53, 2.11) Compute the expected length of a rainy spell in the weather model of Example 2.3.

Answer (Ex. 6) — Define the event A_n: the rainy spell lasts exactly n days. As in the previous problem, P(A_n) = (1 - p_33)(p_33)^{n-1} = 0.9 (0.1)^{n-1}. So the expected length of a rainy spell is

Σ_{n=1}^∞ n P(A_n) = Σ_{n≥1} n × 0.9 × (0.1)^{n-1}
                  = 0.9 (1 + 2 × 0.1 + 3 × (0.1)² + 4 × (0.1)³ + ...)
                  = 0.9 / (1 - 0.1)² = 10/9.

Note: for x ∈ (0, 1), we have Σ_{n≥1} n x^{n-1} = (Σ_{n≥1} x^n)' = (x/(1-x))' = 1/(1-x)².
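The series value can be confirmed numerically (a quick Python sketch):

```python
# Partial sums of sum_{n>=1} n * 0.9 * 0.1**(n-1) converge to 10/9 ~ 1.111.
expected_length = sum(n * 0.9 * 0.1 ** (n - 1) for n in range(1, 200))
assert abs(expected_length - 10 / 9) < 1e-12
```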

Ex. 7 — (p53-54, 2.15b) Compute the occupancy matrix M(10) for the DTMC with the transition matrix given below:

P = [ 0 1 0 0
      0 0 1 0
      0 0 0 1
      1 0 0 0 ]

Answer (Ex. 7) — M(10) = Σ_{t=0}^{10} P^t. Since P is a cyclic permutation matrix, P⁴ = I, so

P⁰ = P⁴ = P⁸ = I,  P¹ = P⁵ = P⁹ = P,

P² = P⁶ = P¹⁰ = [ 0 0 1 0        P³ = P⁷ = [ 0 0 0 1
                  0 0 0 1                    1 0 0 0
                  1 0 0 0                    0 1 0 0
                  0 1 0 0 ]                  0 0 1 0 ]

Hence

M(10) = Σ_{t=0}^{10} P^t = 3I + 3P + 3P² + 2P³ = [ 3 3 3 2
                                                   2 3 3 3
                                                   3 2 3 3
                                                   3 3 2 3 ]

MATLAB code is as follows:

EDU>> P = [0 1 0 0; 0 0 1 0; 0 0 0 1; 1 0 0 0];
EDU>> M = zeros(4,4);
EDU>> for i = 0:10
        M = M + P^i;
      end
EDU>> disp(M)
     3     3     3     2
     2     3     3     3
     3     2     3     3
     3     3     2     3

Chapter 2 (part ii, part iii)

Ex. 1 — Consider the two-state DTMC with the transition matrix

P = [ 1-a   a
       b   1-b ]

where a, b ∈ [0, 1]. Discuss the reducibility, periodicity and limiting/stationary/occupancy distributions to classify the DTMC for different a, b.

Answer (Ex. 1) —

1. a = b = 0 (both a and b are zero).

P = [ 1 0
      0 1 ]

In this DTMC, state 1 can only communicate with state 1 and state 2 can only communicate with state 2, but there is no communication between the two states. Thus it is reducible. Its limiting distribution is not unique, since lim_{n→∞} P^n = P does not have identical rows; in fact a^(n) = a^(0), which depends entirely on the initial law a^(0). For any stochastic vector μ, μ = μP holds, so any initial distribution μ is a stationary distribution of this DTMC. Its occupancy distribution is also not unique, for the same reason: M(n)/(n+1) ≡ P, since P^n ≡ P^{n+1} for any n.

2. a + b > 0, ab = 0 (i.e., either a or b is zero, but not both).

For a = 0, b ≠ 0:

P = [ 1  0
      b  1-b ]

Because state 1 cannot communicate with state 2, the DTMC is reducible. Its unique limiting distribution is [1 0], from the calculation of P^n as n → ∞:

lim_{n→∞} P^n = [ 1 0
                  1 0 ]

For b = 0, a ≠ 0:

P = [ 1-a  a
       0   1 ]

Because state 2 cannot communicate with state 1, the DTMC is reducible. Its unique limiting distribution is [0 1], from the calculation of P^n as n → ∞:

lim_{n→∞} P^n = [ 0 1
                  0 1 ]

Combining the above two situations, this DTMC is reducible when 0 < a + b < 2, ab = 0. Its stationary distribution satisfies μ = μP; solving this equation gives the unique stationary distribution μ = (b/(a+b), a/(a+b)). For a = 0, b ≠ 0, the occupancy distribution is ν = (1, 0); for a ≠ 0, b = 0, ν = (0, 1). Equivalently, ν = (b/(a+b), a/(a+b)).

Remark: this case shows that the theorems about the unique existence of the limiting/stationary/occupancy measures give sufficient conditions, not necessary conditions.

3. a > 0, b > 0, but a + b < 2.

The DTMC is irreducible and aperiodic. (Check the special cases a = 1, b < 1 and a < 1, b = 1: still irreducible!) The limiting distribution, stationary distribution and occupancy distribution uniquely exist by the theorems given in class for this irreducible and aperiodic two-state DTMC. The measure satisfies π = πP, which has the unique solution

μ = ν = π = (b/(a+b), a/(a+b)).

4. a = b = 1 (period-2 case).

P = [ 0 1
      1 0 ]

State 1 goes to state 2 with probability 1, and state 2 goes to state 1 with probability 1; thus the DTMC is irreducible but not aperiodic, with period d = 2. The limiting distribution does not exist. The stationary distribution and occupancy distribution uniquely exist by the theorem, since the DTMC is irreducible. A simple calculation for μ = μP gives ν = μ = (1/2, 1/2).

Remark: We can directly calculate P^n for this two-state DTMC when 0 < a + b < 2 (i.e., cases 2 and 3) as follows. Since

P = [ 1-a   a
       b   1-b ],

diagonalizing P gives P = M D M⁻¹, where the diagonal matrix is

D = [ 1  0
      0  λ ]

with eigenvalue λ = 1 - a - b, which lies strictly in (-1, 1) since 0 < a + b < 2, and the invertible matrix is

M = [ 1   a         M⁻¹ = (1/(a+b)) [ b   a
      1  -b ],                        1  -1 ].

Then

P^n = M D^n M⁻¹
    = (1/(a+b)) [ b + aλ^n    a(1 - λ^n)
                  b(1 - λ^n)  a + bλ^n  ]
    → (1/(a+b)) [ b  a
                  b  a ]   as n → ∞.

So the limiting distribution is the unique π = (b/(a+b), a/(a+b)). For the occupancy measure,

lim_{n→∞} (1/(n+1)) Σ_{t=0}^n (P^t)_11 = lim_{n→∞} (1/(n+1)) Σ_{t=0}^n (b + aλ^t)/(a+b) = b/(a+b),
lim_{n→∞} (1/(n+1)) Σ_{t=0}^n (P^t)_12 = lim_{n→∞} (1/(n+1)) Σ_{t=0}^n a(1 - λ^t)/(a+b) = a/(a+b).

Similar calculations hold for the 21 and 22 components. These limits give ν = π = (b/(a+b), a/(a+b)).
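The closed form for P^n can be checked against direct matrix powering (Python sketch; the values of a, b and n are arbitrary test choices):

```python
# Verify P^n = (1/(a+b)) [[b + a*l**n, a*(1 - l**n)],
#                         [b*(1 - l**n), a + b*l**n]] with l = 1 - a - b.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, n = 0.3, 0.6, 25            # arbitrary, with 0 < a + b < 2
P = [[1 - a, a], [b, 1 - b]]
Pn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    Pn = matmul(Pn, P)

l = 1 - a - b
closed = [[(b + a * l**n) / (a + b), a * (1 - l**n) / (a + b)],
          [b * (1 - l**n) / (a + b), (a + b * l**n) / (a + b)]]
for i in range(2):
    for j in range(2):
        assert abs(Pn[i][j] - closed[i][j]) < 1e-12

# Rows of P^n approach the stationary distribution (b, a)/(a+b).
pi = (b / (a + b), a / (a + b))
assert abs(Pn[0][0] - pi[0]) < 1e-9 and abs(Pn[1][1] - pi[1]) < 1e-9
```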

Ex. 2 — Prove that the transition matrix [p_ij] used by Metropolis satisfies the detailed balance condition with π_i.

Answer (Ex. 2) — One needs to prove that π_i p_ij = π_j p_ji for an arbitrary choice of i and j. When i = j, this equation is immediate. When i ≠ j, let p̂_ij be the proposal transition matrix.

•If π_j p̂_ji < π_i p̂_ij, then p_ij = p̂_ji π_j / π_i and p_ji = p̂_ji, so π_i p_ij = π_j p_ji.
•If π_j p̂_ji ≥ π_i p̂_ij, then p_ij = p̂_ij and p_ji = p̂_ij π_i / π_j, so π_i p_ij = π_j p_ji.
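A numerical check (Python sketch; the target π and the symmetric uniform proposal are arbitrary illustrative choices): build the Metropolis chain from a proposal p̂ and target π, then verify detailed balance, and also the symmetry of D^{1/2} P D^{-1/2} discussed in the next exercise.

```python
import math

pi = [0.5, 0.3, 0.2]                    # target distribution (illustrative)
n = len(pi)
prop = [[1 / n] * n for _ in range(n)]  # symmetric uniform proposal p_hat

# Metropolis transition matrix:
# p_ij = p_hat_ij * min(1, pi_j * p_hat_ji / (pi_i * p_hat_ij)) for i != j.
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            P[i][j] = prop[i][j] * min(1.0, pi[j] * prop[j][i] / (pi[i] * prop[i][j]))
    P[i][i] = 1.0 - sum(P[i][j] for j in range(n) if j != i)  # rejection mass

# Detailed balance: pi_i p_ij = pi_j p_ji for all i, j.
for i in range(n):
    for j in range(n):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12

# Equivalent symmetry of D^{1/2} P D^{-1/2} (cf. the next exercise).
Pt = [[math.sqrt(pi[i]) * P[i][j] / math.sqrt(pi[j]) for j in range(n)]
      for i in range(n)]
for i in range(n):
    for j in range(n):
        assert abs(Pt[i][j] - Pt[j][i]) < 1e-12
```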

Ex. 3 — Define the matrix P̃ whose (i, j) entry is π_i^{1/2} p_ij π_j^{-1/2}, i.e.,

P̃ := D^{1/2} P D^{-1/2},

where D := diag{π_1, π_2, ..., π_N} is the diagonal matrix with diagonal entries {π_i}. Verify that the detailed balance condition is equivalent to the symmetry of P̃: P̃^T = P̃.

Answer (Ex. 3) — The symmetry of P̃ is equivalent to

π_i^{1/2} p_ij π_j^{-1/2} = π_j^{1/2} p_ji π_i^{-1/2} for all i, j,

which is equivalent to π_i p_ij = π_j p_ji. This is exactly the detailed balance condition.

Ex. 4 — (p55, 2.19) Classify the DTMCs with the transition matrices given in Computational Problem 2.15 as irreducible or reducible.

Answer (Ex. 4) — For convenience, rename the transition matrices in (a), (b), (c), (d) as P_a, P_b, P_c, P_d respectively.

(a)

P_a = [ 0.1   0.3   0.2   0.4
        0.1   0.3   0.4   0.2
        0.3   0.3   0.1   0.3
        0.15  0.25  0.35  0.25 ]

Every entry of P_a is positive, so every state is reachable from every other state in one step; the DTMC (a) with transition matrix P_a is therefore irreducible.

(b) The transition diagram is a single cycle through the four states, each arc with probability 1.0. From the graph, any two nodes communicate, and thus (b) is irreducible.

(c)

From the transition diagram, the four states are categorized into two communicating classes, {1, 3} and {2, 4}. So (c) is reducible.

(d) From the transition diagram, state 3 is not accessible from any other state, and thus (d) is reducible.

Ex. 5 — (p55, 2.20) Classify the irreducible DTMCs with the transition matrices given below as periodic or aperiodic.

Answer (Ex. 5) — (a) From 2.19, (a) is aperiodic. (b) d = 4. (c) d = 2. (d) d = 3. PS: try drawing the graphs yourself as before; you can easily read off the period from the graph.

Ex. 6 — (p56, 2.21) Compute a normalized solution to the balance equations for the DTMC in Computational Problem 2.20(a). When possible, compute:
1. the limiting distribution;
2. the stationary distribution;
3. the occupancy distribution.


Answer (Ex. 6) — (a) is a finite-state irreducible aperiodic DTMC, which has a unique limiting distribution, stationary distribution and occupancy distribution, denoted π, μ, ν. Solving the balance equation π = πP gives

π = μ = ν = [0.1703 0.2297 0.2687 0.3313].

Ex. 7 — (p57, 2.28) Consider the weather model of Conceptual Problem 2.13. Compute the long-run fraction of days that are sunny.

Answer (Ex. 7) — Compute the occupancy distribution ν from ν = νP. Its solution is

ν = [0.3736 0.2126 0.2011 0.2126].

The long-run fraction of days that are sunny is 37.36%.
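This can be reproduced with the four-state matrix from Ex. 3 of this chapter. A Python sketch using power iteration (any initial distribution converges since the chain is irreducible and aperiodic):

```python
# Occupancy distribution for the four-state weather chain:
# states (sunny, first cloudy day, rainy, second-or-later cloudy day).
P = [[0.5, 0.3, 0.2, 0.0],
     [0.5, 0.0, 0.3, 0.2],
     [0.4, 0.5, 0.1, 0.0],
     [0.0, 0.0, 0.2, 0.8]]

nu = [0.25] * 4                      # any initial distribution works
for _ in range(2000):                # power iteration: nu <- nu P
    nu = [sum(nu[i] * P[i][j] for i in range(4)) for j in range(4)]

expected = [0.3736, 0.2126, 0.2011, 0.2126]
assert all(abs(nu[j] - expected[j]) < 5e-4 for j in range(4))
assert abs(sum(nu) - 1.0) < 1e-9     # nu_1 ~ 0.3736: the sunny fraction
```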

** Ex. 8 — We consider the annual movement of a stock market as a three-state DTMC (its transition probabilities are given in the matrix in the answer below). Suppose that in a year of "Bull market", "Stagnant market" and "Bear market", the return rates are 15%, 0% and -12% respectively. Suppose that investment in the money market is risk-free, with a compound annualised interest rate of 5%. As a long-term value investor, would you prefer to invest in the stock market or the money market?


Answer (Ex. 8) — Denote "Bull market", "Stagnant market", "Bear market" by states 1, 2 and 3 correspondingly. The Markov chain X_n has the transition probability matrix

P = [ 0.9   0.025  0.075
      0.25  0.5    0.25
      0.15  0.05   0.8  ]

This DTMC is irreducible and aperiodic and thus has a unique occupancy distribution. We find this distribution ν = [0.6250 0.0625 0.3125] by solving the balance equation (see the MATLAB code below). The return rate is r(x) = [0.15, 0, -0.12]. Suppose that initially one unit dollar is invested in this stock market. Then at the end of year n, the asset price is s_n = Π_{i=1}^n (1 + r(X_i)), and the annualised return rate is R_n = (s_n)^{1/n} - 1. We want to compare lim_{n→∞} R_n with the money market rate r_mm = 5%. After taking logs, we need to compare lim_{n→∞} (1/n) log(s_n) with log(1 + r_mm) = 0.04879. Since lim_{n→∞} (1/n) log(s_n) = lim_{n→∞} (1/n) Σ_{i=1}^n log(1 + r(X_i)), the cost function is c(x) := log(1 + r(x)) = [0.13976, 0, -0.12783]. In the long run, lim_{n→∞} (1/n) log(s_n) = (c, ν) = 0.047403 < 0.04879. So investing in the money market is preferred.

Remark: (1) Here we used strong ergodicity. (2) This idealised model is not real investment advice; in particular, the variance is not considered here.

EDU>> P = [0.9 0.025 0.075; 0.25 0.5 0.25; 0.15 0.05 0.8]
P =
    0.9000    0.0250    0.0750
    0.2500    0.5000    0.2500
    0.1500    0.0500    0.8000
EDU>> [V,D] = eigs(P')
V =
    0.8909    0.7366    0.2757
    0.0891   -0.0632   -0.8034
    0.4454   -0.6734    0.5277
D =
    1.0000         0         0
         0    0.7414         0
         0         0    0.4586
EDU>> Pi = V(:,1)/sum(V(:,1))
Pi =
    0.6250
    0.0625
    0.3125
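The long-run comparison can also be checked by simulating the chain directly (Python sketch; the seed and horizon are arbitrary choices):

```python
import math
import random

P = [[0.9, 0.025, 0.075],
     [0.25, 0.5, 0.25],
     [0.15, 0.05, 0.8]]
log_growth = [math.log(1.15), math.log(1.0), math.log(0.88)]  # c(x) per state

rng = random.Random(1)
state, total, years = 0, 0.0, 500_000
for _ in range(years):
    total += log_growth[state]
    u, acc = rng.random(), 0.0
    for j, pj in enumerate(P[state]):     # sample the next state
        acc += pj
        if u < acc:
            state = j
            break

long_run = total / years        # -> (c, nu) ~ 0.047403 by ergodicity
assert abs(long_run - 0.047403) < 0.002
# Since 0.047403 < log(1.05) = 0.04879, the money market is preferred.
```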


Chapter 2 (part iv)

Ex. 1 — For the two-state DTMC with S = {1, 2} and

P = [ 1-a   a
       b   1-b ]

where a, b ∈ [0, 1], let T_j(i) be the first passage time to state j given X_0 = i, and h_j(i) = E[T_j(i) | X_0 = i].
(i): Calculate h_1(1), h_2(2), h_2(1) and h_1(2).
(ii): What is the distribution of T_2(1)? Calculate its expectation from this distribution.
(iii): Let T_j^r(i) be the first return time and h_j^r(i) its expectation. Show that if b > 0, then h_1^r(1) = 1 + a/b and h_1^r(2) = 1/b. What is the distribution of T_1^r(1)? What is the distribution of T_2^r(1)? What is the return probability ρ_11 = Pr(T_1^r(1) < ∞ | X_0 = 1)? When do we have ρ_11 = 1, i.e., when is state 1 recurrent?

Answer (Ex. 1) —

(i): h_1(1) = h_2(2) = 0 since T_i(i) = 0 almost surely. Since h_2(1) = p_11(1 + h_2(1)) + p_12(1 + h_2(2)) = (1 - a)(1 + h_2(1)) + a, we get h_2(1) = 1/a if a > 0; otherwise h_2(1) = ∞. For the same reason, h_1(2) = 1/b if b > 0, and h_1(2) = ∞ if b = 0.

(ii): It is clear that T_j(i) ≥ 1 for any i ≠ j. Pr(T_2(1) = 1) = Pr(X_1 = 2 | X_0 = 1) = p_12 = a. For any k ≥ 2,

Pr(T_2(1) = k) = Pr(X_k = 2, X_{k-1} ≠ 2, ..., X_1 ≠ 2 | X_0 = 1)
             = Pr(X_k = 2, X_{k-1} = 1, ..., X_1 = 1 | X_0 = 1)
             = p_11 p_11 ... p_11 × p_12 = (1 - a)^{k-1} a.

So Pr(T_2(1) < ∞) = Σ_{k=1}^∞ (1 - a)^{k-1} a = a · (1/a) = 1 if a > 0. The expectation is

h_2(1) = E[T_2(1)] = Σ_{k=1}^∞ k (1 - a)^{k-1} a = a × (-d/da Σ_{k=1}^∞ (1 - a)^k) = a × (1/a²) = 1/a

if a > 0. This direct calculation matches our previous result from the "one-more-step" analysis.

(iii): T_j^r(i) = T_j(i) if i ≠ j. So h_1^r(2) = h_1(2) = 1/b, and T_1^r(2) has the same (geometric) form of distribution as T_2(1), with b in place of a. Now we work on T_1^r(1) = inf{n ≥ 1 : X_n = 1 | X_0 = 1}, which takes values in {1, 2, ...}. Pr(T_1^r(1) = 1) = Pr(X_1 = 1 | X_0 = 1) = 1 - a. For k ≥ 2,

Pr(T_1^r(1) = k) = Pr(X_k = 1, X_{k-1} = 2, ..., X_1 = 2 | X_0 = 1) = p_12 p_22 ... p_22 p_21 = (1 - b)^{k-2} ab.

Then

ρ_11 = Σ_{k=1}^∞ Pr(T_1^r(1) = k) = 1 - a + Σ_{k=2}^∞ (1 - b)^{k-2} ab = 1 - a + ab/b = 1

if b > 0. If b = 0, then ρ_11 = Pr(T_1^r(1) = 1) + 0 = 1 - a, and Pr(T_1^r(1) = ∞) = 1 - ρ_11 = a. So state 1 is recurrent (i.e. ρ_11 = 1) if and only if b > 0 or a = b = 0.

Now we calculate h_1^r(1). By the "one-more-step" analysis,

h_1^r(1) = p_11 × 1 + p_12(1 + h_1^r(2)) = (1 - a) × 1 + a(1 + h_1(2)) = 1 + a/b,

since h_1^r(2) = h_1(2) = 1/b.
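These formulas can be checked by simulation (Python sketch; the values of a, b and the seed are arbitrary choices):

```python
import random

a, b = 0.4, 0.25                       # arbitrary a, b in (0, 1)
P = {1: {1: 1 - a, 2: a}, 2: {1: b, 2: 1 - b}}
rng = random.Random(7)

def step(state):
    return 1 if rng.random() < P[state][1] else 2

def first_return_time_1():
    """T_1^r(1): steps until the chain started at 1 revisits 1."""
    state, t = step(1), 1
    while state != 1:
        state, t = step(state), t + 1
    return t

def first_passage_1_to_2():
    """T_2(1): steps until the chain started at 1 first hits 2."""
    state, t = step(1), 1
    while state != 2:
        state, t = step(state), t + 1
    return t

n = 200_000
est_hr11 = sum(first_return_time_1() for _ in range(n)) / n
est_h21 = sum(first_passage_1_to_2() for _ in range(n)) / n
assert abs(est_hr11 - (1 + a / b)) < 0.05   # h_1^r(1) = 1 + a/b = 2.6
assert abs(est_h21 - 1 / a) < 0.05          # h_2(1) = 1/a = 2.5
```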


Similar Free PDFs