
MA4F7 Brownian Motion

March 16, 2013

Contents

1 Brownian Sample Paths
1.1 Brownian Motion as a Gaussian Process
1.2 Growth rate of paths
1.3 Regularity
2 Brownian motion as a Markov Process
2.1 Markov transition functions
2.2 Strong Markov Process
2.3 Arcsine Laws for Brownian motion
3 Brownian Martingales
4 Donsker's theorem
5 Up Periscope

These notes are based on the 2013 MA4F7 Brownian Motion course, taught by Roger Tribe, typeset by Matthew Egginton. No guarantee is given that they are accurate or applicable, but hopefully they will assist your study. Please report any errors, factual or typographical, to [email protected]


The key aim is to show that scaled random walks converge to a limit called Brownian motion. In 1D, $P\{t \mapsto B_t \text{ is nowhere differentiable}\} = 1$. We have $E(B_t) = 0$ and $E(B_t^2) = t$, so $t \mapsto B_t$ is not differentiable at $0$; shifting gives the same conclusion at any $t$. We also have the arcsine law
$$P\Big(\int_0^1 \chi(B_s > 0)\,ds \in dx\Big) = \frac{1}{\pi\sqrt{x(1-x)}}\,dx.$$
For a set $A \subseteq \partial D$, $P(x + B \text{ exits } D \text{ in } A) = U(x)$, where $\Delta U(x) = 0$ in $D$ and
$$U(x) = \begin{cases} 1 & \text{on } A \\ 0 & \text{on } \partial D \setminus A. \end{cases}$$
For an annulus with inner radius $a$ and outer radius $b$ (taking $A$ to be the inner circle), $U(x) = \frac{\log b - \log|x|}{\log b - \log a}$, and this converges to $1$ as $b \to \infty$. Thus (in two dimensions) the probability that Brownian motion hits any ball is $1$. For random walks, $P(x + \text{walk exits at } y) = U(x)$ where $U(x) = \frac14\big(U(x+e_1) + U(x-e_1) + U(x+e_2) + U(x-e_2)\big)$, which can be thought of as a discrete Laplacian. Thus we have a nice equation for Brownian motion, but a not so nice one for random walks.
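The arcsine density quoted above is easy to probe by simulation. Below is a minimal Monte Carlo sketch (not part of the notes, assuming NumPy is available): it builds discrete Brownian paths from Gaussian increments, records the fraction of time each path spends positive, and compares empirical probabilities with the arcsine CDF $F(x) = \frac{2}{\pi}\arcsin\sqrt{x}$, which is the distribution function of the stated density.

```python
import numpy as np

# Monte Carlo check of the arcsine law for the time Brownian motion spends
# positive on [0, 1], against the CDF F(x) = (2/pi) arcsin(sqrt(x)).
rng = np.random.default_rng(0)
n_paths, n_steps = 2_000, 1_000
dt = 1.0 / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)             # discrete approximations of B on [0, 1]
frac_positive = (paths > 0).mean(axis=1)          # time each path spends positive

for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    empirical = (frac_positive <= x).mean()
    exact = 2 / np.pi * np.arcsin(np.sqrt(x))
    print(f"P(time positive <= {x:.2f}): empirical {empirical:.3f}, arcsine {exact:.3f}")
```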

1 Brownian Sample Paths

Our standard space is a probability space $(\Omega, \mathcal{F}, P)$.

Definition 1.1 A stochastic process $(B_t, t \ge 0)$ is called a Brownian Motion on $\mathbb{R}$ if
1. $t \mapsto B_t$ is continuous for a.s. $\omega$;
2. for $0 \le t_1 < t_2 < \dots < t_n$, the increments $B_{t_2} - B_{t_1}, \dots, B_{t_n} - B_{t_{n-1}}$ are independent;
3. for $0 \le s < t$, $B_t - B_s$ is Gaussian with distribution $N(0, t-s)$.

But does this even exist, and if it does, do the above properties characterise $B$? The answer to both is yes, and we will show these later. We now define the terms used in the above definition, to avoid any confusion.

Definition 1.2 A random variable $Z$ is a measurable function $Z : \Omega \to \mathbb{R}$. In full, $\mathbb{R}$ has the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$, and measurable means that if $A \in \mathcal{B}(\mathbb{R})$ then $Z^{-1}(A) \in \mathcal{F}$.

Definition 1.3 A stochastic process is a family of random variables $(X_t, t \ge 0)$ all defined on $\Omega$.

We do not worry what $\Omega$ is; we are only interested in the law/distribution of $Z$, i.e. $P(Z \in A)$ or $E(f(Z))$, where $P(Z \in A) = P\{\omega : Z(\omega) \in A\}$.

If we fix $\omega$, the function $t \mapsto B_t(\omega)$ is called the sample path for $\omega$. The first property above means that the sample path is continuous for almost all $\omega$. Sadly some books say that $P\{\omega : t \mapsto B_t(\omega) \text{ is continuous}\} = 1$, but how do we know this set is measurable?

Definition 1.4 A real random variable $Z$ is Gaussian $N(\mu, \sigma^2)$ if it has density
$$P(Z \in dz) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(z-\mu)^2}{2\sigma^2}}\,dz$$
for $\sigma^2 > 0$, meaning one integrates both sides over a set $A$ to get the probability of $A$. If $\sigma = 0$ then $P(Z = \mu) = 1$.
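Properties 2 and 3 suggest a direct way to simulate approximate sample paths on a time grid (taking $B_0 = 0$, as in the introduction): add up independent $N(0, \Delta t)$ increments. This is a minimal sketch, not part of the notes, assuming NumPy is available; the grid size and path count are arbitrary choices.

```python
import numpy as np

# Simulate approximate Brownian sample paths on a uniform grid of [0, 1], taking
# B_0 = 0 and using independent N(0, dt) increments (properties 2 and 3).
rng = np.random.default_rng(1)
n_paths, n_steps = 5_000, 500
dt = 1.0 / n_steps
t = np.linspace(dt, 1.0, n_steps)

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)         # B[:, k] approximates the path at time t[k]

# Sanity check against E(B_t) = 0 and E(B_t^2) = t at a few grid times.
for k in (49, 249, 499):
    print(f"t = {t[k]:.2f}: sample mean {B[:, k].mean():+.3f}, sample variance {B[:, k].var():.3f}")
```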



1.0.1 Related Animals

The Brownian Bridge is $X_t = B_t - tB_1$ for $t \in [0,1]$.

An Ornstein-Uhlenbeck process is one, for $C > 0$, of the form $X_t = e^{-Ct} B_{e^{2Ct}}$, and is defined for $t \in \mathbb{R}$. We will check that $X$ here is stationary; also $(X_{t+T} : t \ge 0)$ is still an O-U process. This arises as the solution of the simplest SDE, $\frac{dX_t}{dt} = -CX_t + \sqrt{2C}\,\frac{d\hat B_t}{dt}$, or in other form, $X_t = X_0 - C\int_0^t X_s\,ds + \int_0^t \sqrt{2C}\,d\hat B_s$.

A Brownian motion on $\mathbb{R}^d$ is a process $(B_t : t \ge 0)$ such that $B_t = (B_t^1, \dots, B_t^d)$, where each $t \mapsto B_t^k$ is a Brownian motion on $\mathbb{R}$ and they are independent.
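The claimed stationarity of $X_t = e^{-Ct}B_{e^{2Ct}}$ can be checked numerically: from $E(B_sB_t) = s \wedge t$ one gets $E(X_t^2) = 1$ and $E(X_t X_{t+s}) = e^{-Cs}$ for $s \ge 0$, independent of $t$ (this covariance formula is a derived consequence, not spelled out above). A rough sketch assuming NumPy, with an arbitrary choice of $C$:

```python
import numpy as np

# Check stationarity of the time-changed construction X_t = exp(-C t) B_{exp(2 C t)}:
# E(X_t^2) should be 1 for every t, and E(X_t X_{t+s}) should be exp(-C s).
rng = np.random.default_rng(2)
C = 0.7                                   # arbitrary choice of the parameter C > 0
ts = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
times = np.exp(2 * C * ts)                # the times at which B is needed

n_paths = 200_000
gaps = np.diff(np.concatenate(([0.0], times)))
B = np.cumsum(rng.normal(0.0, np.sqrt(gaps), size=(n_paths, len(times))), axis=1)
X = np.exp(-C * ts) * B

print("variances (target 1):", np.round(X.var(axis=0), 3))
lag_covs = [np.mean(X[:, i] * X[:, i + 1]) for i in range(len(ts) - 1)]
print("lag-0.5 covariances (target exp(-0.5*C) =", round(np.exp(-0.5 * C), 3), "):",
      np.round(lag_covs, 3))
```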

1.1 Brownian Motion as a Gaussian Process

Proposition 1.5 (Facts about Gaussians)
1. If $Z \sim N(\mu, \sigma^2)$ then for $c \ge 0$ we have $cZ \sim N(c\mu, c^2\sigma^2)$.
2. If $Z_1 \sim N(\mu_1, \sigma_1^2)$ and $Z_2 \sim N(\mu_2, \sigma_2^2)$ are independent, then $Z_1 + Z_2 \sim N(\mu_1+\mu_2, \sigma_1^2+\sigma_2^2)$.
3. If $Z_k \sim N(\mu_k, \sigma_k^2)$ and $Z_k \xrightarrow{D} Z$, then $\lim_{k\to\infty}\mu_k = \mu$ and $\lim_{k\to\infty}\sigma_k^2 = \sigma^2$ exist and $Z \sim N(\mu, \sigma^2)$.

The convergence above can be any one of the following.
1. Almost sure convergence: $Z_k \xrightarrow{a.s.} Z$ means $P(\{\omega : Z_k(\omega) \to Z(\omega)\}) = 1$.
2. In probability: $Z_k \xrightarrow{prob} Z$ means $P(|Z_k - Z| > \varepsilon) \to 0$ as $k \to \infty$ for all $\varepsilon > 0$.
3. In distribution: $Z_k \xrightarrow{D} Z$ means $E(f(Z_k)) \to E(f(Z))$ for any continuous and bounded $f$.

Example 1.1 $I = \int_0^1 B_t\,dt$ is a Gaussian variable:
$$I = \lim_{N\to\infty} \frac1N\big(B_{1/N} + B_{2/N} + \dots + B_{N/N}\big) = \lim_{N\to\infty} \frac1N\Big((B_{N/N} - B_{(N-1)/N}) + 2(B_{(N-1)/N} - B_{(N-2)/N}) + \dots + N(B_{1/N} - B_0)\Big),$$
and all these increments are independent, so each approximating sum is Gaussian.

1.1.1 Transforms

Definition 1.6 We define the Fourier transform, or characteristic function, of $Z$ to be $\varphi_Z(\theta) = E(e^{i\theta Z})$. For example, if $Z \sim N(\mu, \sigma^2)$ then $\varphi_Z(\theta) = e^{i\theta\mu - \sigma^2\theta^2/2}$.

Proposition 1.7 (More facts about Gaussians)
4. $\varphi_Z(\theta)$ determines the law of $Z$, i.e. if $\varphi_Z(\theta) = \varphi_Y(\theta)$ for all $\theta$ then $P(Z \in A) = P(Y \in A)$.
5. $Z_1, Z_2$ are independent if and only if $E(e^{i\theta_1 Z_1} e^{i\theta_2 Z_2}) = E(e^{i\theta_1 Z_1})E(e^{i\theta_2 Z_2})$ for all $\theta_1, \theta_2$.



6. $\varphi_{Z_k}(\theta) \to \varphi_Z(\theta)$ for all $\theta$ if and only if $Z_k \xrightarrow{D} Z$.

These all hold true for $Z = (Z_1, \dots, Z_d)$ with $\varphi_Z(\theta_1, \dots, \theta_d) = E(e^{i\theta_1 Z_1 + \dots + i\theta_d Z_d})$.

Definition 1.8 $Z = (Z_1, \dots, Z_d) \in \mathbb{R}^d$ is Gaussian if $\sum_{k=1}^d \lambda_k Z_k$ is Gaussian in $\mathbb{R}$ for all $\lambda_1, \dots, \lambda_d$. $(X_t, t \ge 0)$ is a Gaussian process if $(X_{t_1}, \dots, X_{t_N})$ is a Gaussian vector on $\mathbb{R}^N$ for any $t_1, \dots, t_N$ and $N \ge 1$.

Check that Brownian motion is a Gaussian process, i.e. is $(B_{t_1}, \dots, B_{t_N})$ a Gaussian vector, or is $\sum \lambda_k B_{t_k}$ Gaussian on $\mathbb{R}$? We can massage this into $\mu_1(B_{t_1} - B_0) + \dots + \mu_N(B_{t_N} - B_{t_{N-1}})$, a sum of independent Gaussians, and so it is Gaussian. As an exercise, check this for Brownian bridges and O-U processes.

Proposition 1.9 (Even more facts about Gaussians)
7. The law of the Gaussian $Z = (Z_1, \dots, Z_d)$ is determined by $E(Z_k)$ and $E(Z_j Z_k)$ for $j, k = 1, \dots, d$.
8. Suppose $Z = (Z_1, \dots, Z_d)$ is Gaussian. Then $Z_1, \dots, Z_d$ are independent if and only if $E(Z_j Z_k) = E(Z_j)E(Z_k)$ for all $j \ne k$.

For 7, it is enough to calculate $\varphi_Z(\theta)$ and see that it is determined by these quantities. For 8, one need only check that the transforms factor.

Example 1.2 Let $(B_t)$ be a Brownian motion on $\mathbb{R}$. Then $E(B_t) = 0$ and, for $0 \le s < t$,
$$E(B_s B_t) = E\big((B_t - B_s)(B_s - B_0) + (B_s - B_0)^2\big) = E(B_t - B_s)E(B_s - B_0) + E(B_s - B_0)^2 = s,$$
and similarly it equals $t$ if $0 \le t < s$; so $E(B_sB_t) = s \wedge t$. Do the same for Brownian bridges and O-U processes.

Theorem 1.10 (Gaussian characterisation of Brownian motion) If $(X_t, t \ge 0)$ is a Gaussian process with continuous paths, $E(X_t) = 0$ and $E(X_s X_t) = s \wedge t$, then $(X_t)$ is a Brownian motion on $\mathbb{R}$.

Proof We simply check properties 1, 2, 3 in the definition of Brownian motion. Property 1 is immediate. For 2, we need only check that $E((X_{t_{j+1}} - X_{t_j})(X_{t_{k+1}} - X_{t_k}))$ splits. Suppose $t_j \le t_{j+1} \le t_k \le t_{k+1}$; then
$$E((X_{t_{j+1}} - X_{t_j})(X_{t_{k+1}} - X_{t_k})) = t_{j+1} - t_{j+1} - t_j + t_j = 0,$$
as required. For 3, $X_t - X_s$ is a linear combination of the coordinates of a Gaussian vector and so is Gaussian. It has mean zero and
$$E(X_t - X_s)^2 = E(X_s^2 - 2X_s X_t + X_t^2) = s - 2\,s\wedge t + t = t - s.$$
Q.E.D.

Suppose $I = \int_0^1 B_s\,ds$. Then $E(I) = \int_0^1 E(B_s)\,ds = 0$ and also
$$E(I^2) = E\Big(\int_0^1 B_s\,ds \int_0^1 B_r\,dr\Big) = \int_0^1\int_0^1 E(B_s B_r)\,ds\,dr = \int_0^1\int_0^1 s\wedge r\,ds\,dr = \frac13,$$
but we need to check that we can use Fubini, so we need $K = E\big(\int_0^1\int_0^1 |B_r||B_s|\,dr\,ds\big) < \infty$. Now
$$K = \int_0^1\int_0^1 E(|B_r||B_s|)\,dr\,ds \le \int_0^1\int_0^1 \sqrt{E(B_r^2)E(B_s^2)}\,dr\,ds = \int_0^1\int_0^1 \sqrt{rs}\,dr\,ds < 1,$$
as we wanted.
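The value $E(I^2) = \frac13$ is easy to corroborate by simulation. A minimal sketch (not part of the notes; assumes NumPy and approximates the time integral by a Riemann sum):

```python
import numpy as np

# Monte Carlo check that I = int_0^1 B_s ds has mean 0 and variance 1/3.
rng = np.random.default_rng(3)
n_paths, n_steps = 10_000, 500
dt = 1.0 / n_steps

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
I = B.mean(axis=1)                        # Riemann-sum approximation of the integral

print("mean of I    :", round(I.mean(), 4), "(target 0)")
print("variance of I:", round(I.var(), 4), "(target 1/3 = 0.3333...)")
```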



Lemma 1.11 (Scaling Lemma) Suppose that $B$ is a Brownian motion on $\mathbb{R}$ and $c > 0$. Define $X_t = \frac1c B_{c^2 t}$ for $t \ge 0$. Then $X$ is a Brownian motion on $\mathbb{R}$.

Proof Clearly it has continuous paths and $E(X_t) = 0$. Now
$$E(X_s X_t) = E\Big(\frac1c B_{c^2 s}\,\frac1c B_{c^2 t}\Big) = \frac{1}{c^2}\,(c^2 s \wedge c^2 t) = s \wedge t,$$
and also
$$\sum_{k=1}^N \lambda_k X_{t_k} = \sum_{k=1}^N \frac{\lambda_k}{c} B_{c^2 t_k},$$
and this is Gaussian since $B$ is a Gaussian process. Q.E.D.
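A quick numerical sanity check of the scaling lemma (a sketch assuming NumPy, with arbitrary $c$ and test times): the empirical covariance matrix of $X_t = \frac1c B_{c^2 t}$ at a few times should match $s \wedge t$.

```python
import numpy as np

# Empirical check of the scaling lemma: X_t = (1/c) B_{c^2 t} should satisfy
# E(X_s X_t) = s ^ t (the minimum), just like Brownian motion itself.
rng = np.random.default_rng(4)
c = 3.0                                   # arbitrary scaling constant
s_times = np.array([0.3, 0.7, 1.0])       # test times

n_paths = 200_000
gaps = np.diff(np.concatenate(([0.0], c**2 * s_times)))   # increments of B at times c^2 t
B = np.cumsum(rng.normal(0.0, np.sqrt(gaps), size=(n_paths, len(s_times))), axis=1)
X = B / c

empirical = (X.T @ X) / n_paths           # empirical covariance matrix (mean is zero)
print(np.round(empirical, 3))
print(np.minimum.outer(s_times, s_times)) # target: s ^ t
```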

Lemma 1.12 (Inversion lemma) Suppose that $B$ is a Brownian motion on $\mathbb{R}$. Define
$$X_t = \begin{cases} t B_{1/t} & t > 0 \\ 0 & t = 0. \end{cases}$$
Then $X$ is a Brownian motion.

Proof
$$\sum_{k=1}^N \lambda_k X_{t_k} = \sum_{k=1}^N \lambda_k t_k B_{1/t_k},$$
which is still Gaussian when all $t_k > 0$; if any $t_k = 0$ then that term contributes zero to the sum, so we are fine. Clearly $E(X_t) = 0$ and
$$E(X_s X_t) = E\big(s B_{1/s}\; t B_{1/t}\big) = st\Big(\frac1s \wedge \frac1t\Big) = st\cdot\frac1t = s$$
for $s < t$. We also have no problem with the continuity of paths for $t > 0$. However we need to check continuity at $t = 0$, i.e. that $tB_{1/t} \to 0$ as $t \to 0$, or equivalently that $\frac1s B_s \to 0$ as $s \to \infty$. We expect that $B_t \approx \pm\sqrt t$, and so $B_t/t \to 0$ should be clear.

However, we know that $(X_{t_1}, \dots, X_{t_N}) \stackrel{D}{=} (\hat B_{t_1}, \dots, \hat B_{t_N})$ providing $t_i > 0$, for a Brownian motion $\hat B$, and since $\hat B_t \to 0$ as $t \to 0$ surely $X_t \to 0$ as well. We pin this down precisely:
$$\{X_t \to 0 \text{ as } t \to 0\} = \{X_q \to 0 \text{ as } q \to 0,\ q \in \mathbb{Q}\} = \{\forall \varepsilon > 0\ \exists \delta > 0:\ q \in \mathbb{Q}\cap(0,\delta] \implies |X_q| < \varepsilon\} = \bigcap_{N=1}^\infty \bigcup_{M=1}^\infty \bigcap_{q \in \mathbb{Q}\cap(0,1/M]} \Big\{|X_q| < \frac1N\Big\},$$
and so
$$P[X_t \to 0 \text{ as } t \to 0] = \lim_{N\to\infty}\lim_{M\to\infty}\lim_{k\to\infty} P\Big\{|X_{q_1}| < \frac1N, \dots, |X_{q_k}| < \frac1N\Big\} = P[\hat B_t \to 0 \text{ as } t \to 0] = 1,$$
where $q_1, q_2, \dots$ lists $\mathbb{Q} \cap (0, \frac1M]$. Q.E.D.

We used in the above that if $A_1 \supseteq A_2 \supseteq \dots$ then $P(\bigcap_N A_N) = \lim_{N\to\infty} P(A_N)$, and if $A_1 \subseteq A_2 \subseteq \dots$ then $P(\bigcup_N A_N) = \lim_{N\to\infty} P(A_N)$.

Corollary 1.13 $B_t/t \to 0$ as $t \to \infty$.

In fact, $B_t/t^\alpha \to 0$ for $\alpha > \frac12$, but $\limsup_{t\to\infty} \frac{B_t}{\sqrt t} = +\infty$ and $\liminf_{t\to\infty} \frac{B_t}{\sqrt t} = -\infty$, and so $B_t$ visits every $x \in \mathbb{R}$ infinitely many times. This brings us nicely into the next subsection.
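Corollary 1.13 is easy to see on a simulated path: $B_t/t$ shrinks as $t$ grows, while $B_t/\sqrt t$ typically stays of order one. A rough sketch, not from the notes, assuming NumPy:

```python
import numpy as np

# Illustrate Corollary 1.13 on one simulated path: B_t / t -> 0 as t grows,
# while B_t / sqrt(t) typically stays of order one.
rng = np.random.default_rng(5)
checkpoints = np.array([10.0 ** k for k in range(1, 7)])   # t = 10, 100, ..., 10^6

gaps = np.diff(np.concatenate(([0.0], checkpoints)))
B = np.cumsum(rng.normal(0.0, np.sqrt(gaps)))              # the path at the checkpoints

for t, b in zip(checkpoints, B):
    print(f"t = {t:>9.0f}   B_t/t = {b / t:+.5f}   B_t/sqrt(t) = {b / np.sqrt(t):+.3f}")
```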


1.2 Growth rate of paths

Theorem 1.14 (Law of the Iterated Logarithm) Suppose that $B_t$ is a Brownian motion on $\mathbb{R}$. Then
$$\limsup_{t\to\infty} \frac{B_t}{\psi(t)} = +1 \quad\text{and}\quad \liminf_{t\to\infty} \frac{B_t}{\psi(t)} = -1, \qquad\text{where } \psi(t) = \sqrt{2t\ln(\ln t)}.$$

Recall $\limsup_{t\to\infty} X_t = \lim_{t\to\infty}\sup_{s\ge t} X_s$. Thus $\limsup X_t \le 1$ means that for all $\varepsilon > 0$ we have $\sup_{s\ge t} X_s \le 1+\varepsilon$ for large $t$, which is the same as saying that for all $\varepsilon > 0$, $X_t$ is eventually less than $1+\varepsilon$. Similarly, $\limsup X_t \ge 1$ if and only if for all $\varepsilon > 0$ we have $\sup_{s\ge t} X_s \ge 1-\varepsilon$ for large $t$, which is the same as saying that for all $\varepsilon > 0$ there exists a sequence $s_N \to \infty$ with $X_{s_N} \ge 1-\varepsilon$. It is on an example sheet that if $X_t = e^{-t} B_{e^{2t}}$ then the Law of the Iterated Logarithm can be converted to give $\limsup_{t\to\infty} \frac{X_t}{\sqrt{2\ln t}} = 1$.
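The theorem can be illustrated, very roughly, by tracking $B_t/\psi(t)$ along a geometric grid of times on a simulated path. This is only a hedged sketch (assuming NumPy, with arbitrary grid parameters): the $\ln\ln$ normalisation converges far too slowly for a finite simulation to exhibit the limit $1$ sharply, so the output only indicates the order of magnitude.

```python
import numpy as np

# Rough illustration of the LIL: track B_t / psi(t), psi(t) = sqrt(2 t ln ln t),
# along the geometric grid t = theta^N used in the proof below. The limsup over
# all t is exactly 1, but the ln ln rate is far too slow for a short simulation
# to pin this down; the running maximum should merely be of order one.
rng = np.random.default_rng(6)
theta = 1.5
N = np.arange(3, 60)                      # need theta^N > e so that ln ln t > 0
times = theta ** N

gaps = np.diff(np.concatenate(([0.0], times)))
B = np.cumsum(rng.normal(0.0, np.sqrt(gaps)))     # one path sampled at the grid times
psi = np.sqrt(2 * times * np.log(np.log(times)))

running_max = np.maximum.accumulate(B / psi)
print(np.round(running_max[::10], 3))
```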

We can also compare this to $Z_N$ i.i.d. $N(0,1)$, for which $\limsup_{N\to\infty} \frac{Z_N}{\sqrt{2\ln N}} = 1$.

Proof We first show that
$$P\Big(\limsup_{t\to\infty} \frac{B_t}{\psi(t)} \le 1\Big) = 1,$$
and this is the case if and only if $P(B_t \le (1+\varepsilon)\psi(t) \text{ for all large } t) = 1$ for every $\varepsilon > 0$. We first perform a calculation:
$$P\big(B_t > (1+\varepsilon)\psi(t)\big) = P\big(N(0,t) > (1+\varepsilon)\sqrt{2t\ln(\ln t)}\big) = P\big(N(0,1) > (1+\varepsilon)\sqrt{2\ln(\ln t)}\big) = \int_{(1+\varepsilon)\sqrt{2\ln(\ln t)}}^{\infty} \frac{e^{-z^2/2}}{\sqrt{2\pi}}\,dz.$$

Lemma 1.15 (Gaussian Tails)
$$\frac1a\Big(1 - \frac{1}{a^2}\Big)\, e^{-a^2/2} \le \int_a^\infty e^{-z^2/2}\,dz \le \frac1a\, e^{-a^2/2}.$$
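Lemma 1.15 is stated without proof; a quick numerical check of both inequalities, using the identity $\int_a^\infty e^{-z^2/2}\,dz = \sqrt{\pi/2}\,\operatorname{erfc}(a/\sqrt2)$ and only the Python standard library (a sketch, not part of the notes):

```python
import math

# Numerical check of the Gaussian tail bounds in Lemma 1.15:
#   (1/a)(1 - 1/a^2) e^{-a^2/2}  <=  int_a^inf e^{-z^2/2} dz  <=  (1/a) e^{-a^2/2},
# using int_a^inf e^{-z^2/2} dz = sqrt(pi/2) * erfc(a / sqrt(2)).
for a in (1.0, 2.0, 3.0, 5.0):
    lower = (1 / a) * (1 - 1 / a**2) * math.exp(-a**2 / 2)
    tail = math.sqrt(math.pi / 2) * math.erfc(a / math.sqrt(2))
    upper = (1 / a) * math.exp(-a**2 / 2)
    print(f"a = {a}:  {lower:.6f} <= {tail:.6f} <= {upper:.6f}")
```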



Applying the upper bound in Lemma 1.15, we get that
$$\int_{(1+\varepsilon)\sqrt{2\ln(\ln t)}}^{\infty} \frac{e^{-z^2/2}}{\sqrt{2\pi}}\,dz \le \frac{1}{(1+\varepsilon)\sqrt{2\ln(\ln t)}}\,\frac{1}{\sqrt{2\pi}}\, e^{-\big((1+\varepsilon)\sqrt{2\ln(\ln t)}\big)^2/2} = \frac{1}{(1+\varepsilon)\sqrt{2\ln(\ln t)}\,\sqrt{2\pi}}\, e^{-(1+\varepsilon)^2\ln(\ln t)}.$$
The strategy now is to control $B$ along a grid of times $t_N = \theta^N$ for $\theta > 1$. Then
$$P\big(B_{\theta^N} > (1+\varepsilon)\psi(\theta^N)\big) \le \frac{1}{\sqrt{2\pi}\,(1+\varepsilon)\sqrt{2\ln(N\ln\theta)}}\, e^{-(1+\varepsilon)^2\ln(N\ln\theta)} \le C(\theta,\varepsilon)\, N^{-(1+\varepsilon)^2}.$$


Lemma 1.16 (Borel-Cantelli part 1) If $\sum_{N=1}^\infty P(A_N) < \infty$ then
$$P(\text{only finitely many } A_N \text{ happen}) = 1.$$

Proof Let
$$\chi_{A_N} = \begin{cases} 1 & \text{on } A_N \\ 0 & \text{on } A_N^c, \end{cases}$$
so that the number of $A_N$'s that occur is $\sum_{N=1}^\infty \chi_{A_N}$. Then
$$E\Big[\sum_{N=1}^\infty \chi_{A_N}\Big] = \sum_{N=1}^\infty E[\chi_{A_N}] = \sum_{N=1}^\infty P(A_N) < \infty,$$
and so $\sum_{N=1}^\infty \chi_{A_N}$ is finite a.s. Q.E.D.

Then by BC1 we have that $B_{\theta^N} \le (1+\varepsilon)\psi(\theta^N)$ for all large $N$. We now need to control $B$ over $(\theta^N, \theta^{N+1})$.

Lemma 1.17 (Reflection trick) For $a \ge 0$,
$$P\Big(\sup_{s\le t} B_s \ge a\Big) = 2P(B_t \ge a).$$

Proof Define $\Omega_0 = \{\sup_{s\le t} B_s \ge a\}$ and then
$$P(\Omega_0) = P(\Omega_0 \cap \{B_t > a\}) + P(\Omega_0 \cap \{B_t = a\}) + P(\Omega_0 \cap \{B_t < a\}) = 2P(\Omega_0 \cap \{B_t > a\}) = 2P\{B_t > a\}.$$
We will carefully justify this later by examining the hitting time $T_a = \inf\{t : B_t = a\}$: we consider $(B_{T_a + t} - a,\ t \ge 0)$ and check that this is still a Brownian motion. It follows that
$$P(T_a \le t) = P\Big(\sup_{s\le t} B_s \ge a\Big) = 2P\Big(\frac{B_t}{\sqrt t} \ge \frac{a}{\sqrt t}\Big) = 2\int_{a/\sqrt t}^\infty \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\,dz,$$
and also
$$P(T_a \in dt) = \frac{d}{dt}P(T_a \le t)\,dt = 2\,\frac{a}{2}\, t^{-3/2}\,\frac{1}{\sqrt{2\pi}}\, e^{-a^2/2t}\,dt = \frac{a}{\sqrt{2\pi t^3}}\, e^{-a^2/2t}\,dt =: \phi(t)\,dt,$$
and so $E(T_a) = \infty$. Q.E.D.
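The reflection identity can be checked by Monte Carlo on discretized paths. A minimal sketch (not from the notes; assumes NumPy; the grid supremum slightly underestimates the true supremum, so the left-hand side comes out a little low):

```python
import numpy as np

# Monte Carlo check of the reflection identity P(sup_{s<=t} B_s >= a) = 2 P(B_t >= a).
# Paths are discretized, so the grid maximum slightly underestimates the supremum.
rng = np.random.default_rng(7)
t, a = 1.0, 1.0
n_paths, n_steps = 10_000, 1000
dt = t / n_steps

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
lhs = (B.max(axis=1) >= a).mean()        # P(sup_{s<=t} B_s >= a), grid approximation
rhs = 2 * (B[:, -1] >= a).mean()         # 2 P(B_t >= a)

print("P(sup_{s<=t} B_s >= a) ~", round(lhs, 4))
print("2 P(B_t >= a)          ~", round(rhs, 4))
```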

Thus, combining the reflection trick with the earlier tail estimate, we get that
$$P\Big(\sup_{s\le\theta^N} B_s \ge (1+\varepsilon)\psi(\theta^N)\Big) = 2P\big(B_{\theta^N} > (1+\varepsilon)\psi(\theta^N)\big) \le 2C(\varepsilon,\theta)\, N^{-(1+\varepsilon)^2}.$$
Borel-Cantelli part 1 still applies, and so for large $N$ we have
$$\sup_{s\le\theta^N} B_s \le (1+\varepsilon)\psi(\theta^N).$$
Thus if $t \in [\theta^N, \theta^{N+1}]$ we have
$$\frac{B_t}{\psi(t)} \le \frac{(1+\varepsilon)\psi(\theta^{N+1})}{\psi(\theta^N)} = (1+\varepsilon)\sqrt{\theta}\,\sqrt{\frac{\ln((N+1)\ln\theta)}{\ln(N\ln\theta)}} \to (1+\varepsilon)\sqrt{\theta},$$


and thus we have that
$$\limsup_{t\to\infty} \frac{B_t}{\psi(t)} \le (1+\varepsilon)\sqrt{\theta}$$
for all $\varepsilon > 0$ and $\theta > 1$, and so
$$\limsup_{t\to\infty} \frac{B_t}{\psi(t)} \le 1.$$

We now show
$$\limsup_{t\to\infty} \frac{B_t}{\psi(t)} \ge (1-\varepsilon).$$
If we choose $t_N = \theta^N$ for $\theta > 1$, then
$$P\big(B_{\theta^N} > (1-\varepsilon)\psi(\theta^N)\big) \ge C(\theta,\varepsilon)\, N^{-(1-\varepsilon)^2}.$$

Lemma 1.18 (Borel-Cantelli part 2) If $\sum_{N=1}^\infty P(A_N) = \infty$ and the $A_N$ are independent, then
$$P(\text{infinitely many } A_N \text{ occur}) = 1.$$

Proof $Z = \sum_{N=1}^\infty \chi_{A_N}$ is the total number of $A_N$'s that occur. In BC1 we used that $E(Z) < \infty \implies P(Z < \infty) = 1$; here we instead use that $E(e^{-Z}) = 0 \iff P(Z = \infty) = 1$. Then, by independence,
$$E\big(e^{-\sum \chi_{A_N}}\big) = E\Big(\prod_{N=1}^\infty e^{-\chi_{A_N}}\Big) = \prod_{N=1}^\infty E\big(e^{-\chi_{A_N}}\big) = \prod_{N=1}^\infty \big(1 - \alpha P(A_N)\big) \le \prod_{N=1}^\infty e^{-\alpha P(A_N)} = e^{-\alpha\sum_N P(A_N)} = 0,$$
where $\alpha = 1 - e^{-1}$, since $E(e^{-\chi_{A_N}}) = e^{-1}P(A_N) + (1 - P(A_N))$. Q.E.D.

We use this on $A_N = \{B_{\theta^N} > (1-\varepsilon)\psi(\theta^N)\}$, but these are not independent, though nearly so for large $N$; we finalise by correcting this. We define $\hat A_N = \{B_{\theta^N} - B_{\theta^{N-1}} > (1-\varepsilon)\sqrt{1-\theta^{-1}}\,\psi(\theta^N)\}$, and these are independent. Then
$$P(\hat A_N) = P(A_N) \ge C(\theta,\varepsilon)\, N^{-(1-\varepsilon)^2}.$$
BC2 tells us that infinitely many $\hat A_N$ do occur a.s., i.e., for infinitely many $N$,
$$B_{\theta^N} \ge (1-\varepsilon)\sqrt{1-\theta^{-1}}\,\psi(\theta^N) + B_{\theta^{N-1}} \ge (1-\varepsilon)\sqrt{1-\theta^{-1}}\,\psi(\theta^N) - (1+\varepsilon)\psi(\theta^{N-1})$$
(using that $|B_{\theta^{N-1}}| \le (1+\varepsilon)\psi(\theta^{N-1})$ for large $N$, which follows from the first half of the proof applied to $-B$ as well), and so
$$\frac{B_{\theta^N}}{\psi(\theta^N)} \ge (1-\varepsilon)\sqrt{1-\theta^{-1}} - (1+\varepsilon)\frac{\psi(\theta^{N-1})}{\psi(\theta^N)},$$


and
$$\frac{\psi(\theta^{N-1})}{\psi(\theta^N)} = \sqrt{\theta^{-1}}\,\sqrt{\frac{\ln((N-1)\ln\theta)}{\ln(N\ln\theta)}} \to \sqrt{\theta^{-1}},$$
and so
$$\limsup_{N\to\infty} \frac{B_{\theta^N}}{\psi(\theta^N)} \ge (1-\varepsilon)\sqrt{1-\theta^{-1}} - (1+\varepsilon)\sqrt{\theta^{-1}},$$
and taking $\theta$ large and $\varepsilon$ small gives the result. Q.E.D.

We make some observations:

1. Can we do better? $P(B_t \le h_t \text{ for large } t) \in \{0, 1\}$ for deterministic $h_t$. This is called the 0-1 law, and we see it in week 4. For $h_t = C\sqrt{t\ln\ln t}$, if $C < 0$ then we get $0$ and if $C > 2$ then we get $1$. This uses an integral test for $h_t$.

2. Random walk analogue. Suppose that $X_1, X_2, \dots$ are i.i.d. with $E(X_k) = 0$, $E(X_k^2) = 1$, and $S_N = X_1 + \dots + X_N$. Then
$$\limsup_{N\to\infty} \frac{S_N}{\sqrt{2N\ln\ln N}} = 1.$$
This was proved in 1913 but the proof was long; it was proved in a shorter manner using the Brownian motion result in 1941.

3. $X_t = tB_{1/t}$ is still a Brownian motion, and so $\limsup_{t\to\infty} \frac{tB_{1/t}}{\psi(t)} = 1$, or alternatively
$$\limsup_{s\to 0} \frac{B_s}{\sqrt{2s\ln\ln(1/s)}} = 1,$$
and so we have a result about small $t$ behaviour.

4. $P(B \text{ is differentiable at } 0) = 0$, and if we fix $T_0 > 0$ and define $X_t = B_{T_0+t} - B_{T_0}$ then this is still a Brownian motion. Thus

Corollary 1.19 $P(B \text{ is differentiable at } t_0) = 0$ for all $t_0$.

5. Suppose $U \sim U[0,1]$ is a uniform r.v. Define $X_t$ to be constant up until time $U$ and then monotone increasing up until $1$. Then $P(X \text{ is differentiable at } t_0) = 1$ for each fixed $t_0$, but $X$ is not differentiable at all $t$ (it fails at the random time $U$). So we cannot easily conclude from Corollary 1.19 that Brownian motion is nowhere differentiable.

6. Corollary 1.20 $\mathrm{Leb}\{t : B \text{ is differentiable at } t\} = 0$.

Proof
$$E\Big(\int_0^\infty \chi(B \text{ is differentiable at } t)\,dt\Big) = \int_0^\infty E\big[\chi(B \text{ is differentiable at } t)\big]\,dt = 0.$$
Q.E.D.

The points where it is differentiable are examples of random exceptional points.



1.3 Regularity

Definition 1.21 A function $f : [0,\infty) \to \mathbb{R}$ is $\alpha$-Hölder continuous, for $\alpha \in (0,1]$, at $t$ if there exist $M, \delta > 0$ such that $|f_{t+s} - f_t| \le M|s|^\alpha$ for $|s| \le \delta$. The case $\alpha = 1$ is called Lipschitz.

The aim of the next part is to show that $P(B \text{ is } \alpha\text{-Hölder continuous at all } t \ge 0) = 1$ provided $\alpha < 1/2$, and that $P(B \text{ is } \alpha\text{-Hölder continuous at any } t \ge 0) = 0$ provided $\alpha > 1/2$.

Corollary 1.22 $P(B \text{ is differentiable at any } t) = 0$.

The reason is as follows. A differentiable function must lie in some cone: since $\frac{f(t+s) - f(t)}{s} \to a$ as $s \to 0$, we have $\frac{f(t+s)-f(t)}{s} \in (a-\varepsilon, a+\varepsilon)$ for small $s$, and thus $|f(t+s) - f(t)| \le (|a|+\varepsilon)|s|$ for small $s$, so the Lipschitz condition holds with $M = |a| + \varepsilon$.

Proposition 1.23 Define $\Omega_{M,\delta} = \{\text{for some } t \in [0,1],\ |B_{t+s} - B_t| \le M|s| \text{ for all } |s| \le \delta\}$. Then $P(\Omega_{M,\delta}) = 0$, and thus $P\big(\bigcup_{M=1}^\infty \bigcup_{N=1}^\infty \Omega_{M,1/N}\big) = 0$.
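The $\alpha = 1/2$ threshold can be seen, roughly, on a simulated path: the largest increment over lag $h$ shrinks like $h^{1/2}$ up to logarithmic factors, so dividing by $h^\alpha$ stays bounded for $\alpha < 1/2$ and blows up as $h \to 0$ for $\alpha > 1/2$. A hedged sketch assuming NumPy (grid sizes and the exponents $0.4$, $0.6$ are arbitrary choices):

```python
import numpy as np

# Illustrate the Holder threshold alpha = 1/2: compare the largest increment over
# lag h on a simulated path with h^alpha for alpha = 0.4 (below 1/2) and 0.6 (above).
rng = np.random.default_rng(8)
n_steps = 2 ** 20
dt = 1.0 / n_steps
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))

for k in (16, 12, 8, 4):                 # lags h = 2^k * dt, decreasing down the rows
    lag = 2 ** k
    h = lag * dt
    max_inc = np.abs(B[lag:] - B[:-lag]).max()
    print(f"h = {h:.1e}   max|B(t+h)-B(t)|/h^0.4 = {max_inc / h**0.4:7.2f}   "
          f"/h^0.6 = {max_inc / h**0.6:9.2f}")
```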

Proof This hasn't been bettered since 1931. Suppose that there exists a $t \in [K/N, (K+1)/N]$ at which $B$ is Lipschitz, i.e. $|B_{t+s} - B_t| \le M|s|$ for $|s| < \delta$. Then if $(K+1)/N, (K+2)/N \in [t, t+\delta]$, we have
$$|B_{(K+1)/N} - B_t| \le \frac{M}{N}, \qquad |B_{(K+2)/N} - B_t| \le \frac{2M}{N},$$
and so by the triangle inequality we get
$$|B_{(K+1)/N} - B_{(K+2)/N}| \le \frac{3M}{N},$$
and then we have
$$P(\Omega_{M,\delta}) \le P\Big(\text{for some } K = 1, \dots, N-1:\ |B_{(K+1)/N} - B_{(K+2)/N}| \le \frac{3M}{N}\Big). \tag{1.1}$$
We first calculate the probability of the event on the right hand side:
$$P\Big[|N(0, 1/N)| \le \frac{3M}{N}\Big] = P\Big[|N(0,1)| \le \frac{3M}{\sqrt N}\Big] = \int_{-3M/\sqrt N}^{3M/\sqrt N} \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\,dz \le \frac{6M}{\sqrt{2\pi N}},$$
and then the right hand side of equation (1.1) equals
$$P\Big(\bigcup_{K=1}^{N-1}\Big\{|B_{(K+1)/N} - B_{(K+2)/N}| \le \frac{3M}{N}\Big\}\Big) \le \sum_{K=1}^{N-1} \frac{6M}{\sqrt{2\pi N}} \le \frac{6M\sqrt N}{\sqrt{2\pi}},$$


but this is not useful, because it does not tend to zero. We modify this by taking more points. We already know that
$$|B_{(K+2)/N} - B_{(K+1)/N}| \le \frac{3M}{N},$$
but we also have that
$$|B_{(K+3)/N} - B_{(K+2)/N}| \le \frac{5M}{N}, \qquad |B_{(K+4)/N} - B_{(K+3)/N}| \le \frac{7M}{N},$$
and then we say that
$$P(\Omega_{M,\delta}) \le P\Big(\text{for some } K = 1,\dots,N-1:\ |B_{(K+2)/N}-B_{(K+1)/N}| \le \frac{3M}{N} \text{ and } |B_{(K+3)/N}-B_{(K+2)/N}| \le \frac{5M}{N} \text{ and } |B_{(K+4)/N}-B_{(K+3)/N}| \le \frac{7M}{N}\Big)$$
with $4/N \le \delta$, and this i...

