STAT/MATH 395 A - PROBABILITY II – UW Winter Quarter 2017

Néhémy Lim

Moment functions

1 Moments of a random variable

Definition 1.1. Let X be a rrv on probability space (Ω, A, P). For a given r ∈ N, E[X^r], if it exists, is called the r-th moment of X. In particular,

• If X is a discrete rrv with pmf p_X and X(Ω) is the set of all values that X can take on, then
\[ E[X^r] = \sum_{x \in X(\Omega)} x^r \, p_X(x) \tag{1} \]

• If X is a continuous rrv with pdf f_X, then
\[ E[X^r] = \int_{-\infty}^{\infty} x^r f_X(x) \, dx \tag{2} \]

Remarks:
• The zero-th moment is 1.
• The first moment is the expected value of X.

Definition 1.2. Let X be a rrv on probability space (Ω, A, P). For a given r ∈ N, E[(X − E[X])^r], if it exists, is called the r-th moment about the mean, or r-th central moment, of X. In particular,

• If X is a discrete rrv with pmf p_X and X(Ω) is the set of all values that X can take on, then
\[ E\left[(X - E[X])^r\right] = \sum_{x \in X(\Omega)} (x - E[X])^r \, p_X(x) \tag{3} \]

• If X is a continuous rrv with pdf f_X, then
\[ E\left[(X - E[X])^r\right] = \int_{-\infty}^{\infty} (x - E[X])^r \, f_X(x) \, dx \tag{4} \]

Remarks:
• The zero-th central moment is 1.
• The first central moment is 0.
• The second central moment is the variance of X.
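As an illustration of Definitions 1.1 and 1.2 (this sketch is not part of the original notes; it assumes only a Python interpreter with the standard library), the first two moments and the variance of a fair six-sided die can be computed directly from equations (1) and (3):

```python
# A minimal sketch computing moments of a fair six-sided die (a discrete
# uniform U_6 rrv) directly from equations (1) and (3).
from fractions import Fraction

support = range(1, 7)
p = Fraction(1, 6)  # p_X(x) = 1/6 for each x in {1, ..., 6}

m1 = sum(x * p for x in support)             # first moment: E[X]
m2 = sum(x**2 * p for x in support)          # second moment: E[X^2]
var = sum((x - m1)**2 * p for x in support)  # second central moment, eq. (3)

print(m1, m2, var)  # 7/2, 91/6, 35/12
```

The output also illustrates the identity Var(X) = E[X²] − (E[X])², anticipating equation (10) below: 91/6 − (7/2)² = 35/12.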

2 Moment generating functions

Definition 2.1. Let X be a rrv on probability space (Ω, A, P). For a given t ∈ R, the moment generating function (m.g.f.) of X, denoted M_X(t), is defined as follows:
\[ M_X(t) = E\left[ e^{tX} \right] \tag{5} \]

where there is a positive number h such that the above expectation exists for −h < t < h. In particular,

• If X is a discrete rrv with pmf p_X and X(Ω) is the set of all values that X can take on, then
\[ M_X(t) = \sum_{x \in X(\Omega)} e^{tx} \, p_X(x) \tag{6} \]

• If X is a continuous rrv with pdf f_X, then
\[ M_X(t) = \int_{-\infty}^{\infty} e^{tx} f_X(x) \, dx \tag{7} \]

Remarks:
• The m.g.f. evaluated at zero always exists and equals 1: M_X(0) = 1.

Theorem 2.1. A moment generating function completely determines the distribution of a real-valued random variable.

Proposition 2.1. Let X be a rrv on probability space (Ω, A, P). The r-th moment of X can be found by evaluating the r-th derivative of the m.g.f. of X at t = 0:
\[ M_X^{(r)}(0) = E[X^r] \tag{8} \]

Proof. Using the expansion of the exponential function as a series, we have:
\[ e^{tX} = \sum_{n=0}^{\infty} \frac{(tX)^n}{n!} \]
Hence,
\begin{align*}
M_X(t) &= E\left[e^{tX}\right] = E\left[ \sum_{n=0}^{\infty} \frac{(tX)^n}{n!} \right] = \sum_{n=0}^{\infty} \frac{t^n}{n!} E[X^n] \\
&= 1 + t\,E[X] + \frac{t^2}{2} E\left[X^2\right] + \frac{t^3}{6} E\left[X^3\right] + \dots
\end{align*}
Differentiating M_X r times, the r terms of degree less than r vanish, and we are left with:
\begin{align*}
\frac{d^r M_X(t)}{dt^r} &= \frac{r(r-1)\cdots 1}{r!} E[X^r] + \frac{(r+1)r\cdots 2}{(r+1)!}\, t\, E\left[X^{r+1}\right] + \frac{(r+2)(r+1)\cdots 3}{(r+2)!}\, t^2\, E\left[X^{r+2}\right] + \dots \\
&= E[X^r] + \sum_{k=1}^{\infty} \frac{t^k}{k!} E\left[X^{r+k}\right]
\end{align*}
It now becomes clear that evaluating \frac{d^r M_X(t)}{dt^r} at t = 0 gives the result.
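The following sketch (not part of the original notes; it assumes the sympy library is installed) checks Proposition 2.1 symbolically, using the Poisson m.g.f. derived in Section 3.3:

```python
# A minimal sketch checking Proposition 2.1: the r-th derivative of an
# m.g.f. at t = 0 equals E[X^r]. We use the Poisson m.g.f.
# M_X(t) = exp(lambda*(e^t - 1)) derived in Section 3.3.
import sympy as sp

t = sp.Symbol("t")
lam = sp.Symbol("lam", positive=True)
M = sp.exp(lam * (sp.exp(t) - 1))  # Poisson m.g.f.

for r in range(1, 4):
    moment = sp.diff(M, t, r).subs(t, 0)  # M_X^{(r)}(0)
    print(f"E[X^{r}] =", sp.expand(moment))
# Prints: E[X^1] = lam, E[X^2] = lam**2 + lam, E[X^3] = lam**3 + 3*lam**2 + lam
```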

Lemma 2.2. Let X be a rrv on probability space (Ω, A, P).

1. The expected value of X, if it exists, can be found by evaluating the first derivative of the moment generating function at t = 0:
\[ E[X] = M'_X(0) \tag{9} \]

2. The variance of X, if it exists, can be found by evaluating the first and second derivatives of the moment generating function at t = 0:
\[ \operatorname{Var}(X) = M''_X(0) - \left( M'_X(0) \right)^2 \tag{10} \]
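Similarly, here is a short hedged check of Lemma 2.2 (again assuming sympy), applied to the binomial m.g.f. derived in Section 3.2:

```python
# A minimal sketch applying Lemma 2.2 to the binomial m.g.f.
# M_X(t) = (p*e^t + 1 - p)^n derived in Section 3.2.
import sympy as sp

t = sp.Symbol("t")
p, n = sp.symbols("p n", positive=True)
M = (p * sp.exp(t) + 1 - p) ** n  # binomial m.g.f.

EX = sp.diff(M, t, 1).subs(t, 0)   # E[X]   = M'(0),  equation (9)
EX2 = sp.diff(M, t, 2).subs(t, 0)  # E[X^2] = M''(0)
var = sp.simplify(EX2 - EX**2)     # equation (10)

print(sp.simplify(EX))  # n*p
print(var)              # n*p*(1 - p), up to sympy's preferred form
```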

3 Moment generating functions of Common Discrete Distributions

3.1 Discrete Uniform Distribution

Definition 3.1 (Discrete Uniform Distribution). Let (Ω, A, P) be a probability space and let X be a random variable that can take n ∈ N* values on X(Ω) = {1, . . . , n}. X is said to have a discrete uniform distribution U_n if its probability mass function is given by:
\[ p_X(i) = P(X = i) = \frac{1}{n}, \quad \text{for } i = 1, \dots, n \tag{11} \]

Computation of the mgf. Let X be a random variable that follows a discrete uniform distribution U_n. The mgf of X is given by:
\begin{align*}
M_X(t) &= E\left[ e^{tX} \right] \\
&= \sum_{x=1}^{n} e^{tx} p_X(x) \\
&= \frac{1}{n} \underbrace{\sum_{x=1}^{n} \left( e^t \right)^x}_{\text{sum of a geometric sequence}} \\
&= \frac{1}{n} \, e^t \, \frac{1 - e^{nt}}{1 - e^t} \\
&= \frac{e^t - e^{(n+1)t}}{n(1 - e^t)}
\end{align*}
The above equality is true for all t ≠ 0. If t = 0, we have:
\[ M_X(0) = \frac{1}{n} \sum_{x=1}^{n} e^{0 \cdot x} = \frac{1}{n} \sum_{x=1}^{n} 1 = 1 \]
In a nutshell, the mgf of X is given by:
\[ M_X(t) = \begin{cases} \dfrac{e^t - e^{(n+1)t}}{n(1 - e^t)} & \text{if } t \neq 0 \\ 1 & \text{if } t = 0 \end{cases} \]
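As a numerical sanity check (not in the original notes; standard library only), the closed form above can be compared against the defining sum:

```python
# A minimal sketch checking the discrete uniform m.g.f. closed form
# against the defining sum E[e^{tX}] = (1/n) * sum_x e^{tx}.
import math

def mgf_sum(t: float, n: int) -> float:
    """M_X(t) computed directly from the definition."""
    return sum(math.exp(t * x) for x in range(1, n + 1)) / n

def mgf_closed(t: float, n: int) -> float:
    """Closed form (e^t - e^{(n+1)t}) / (n(1 - e^t)), with M_X(0) = 1."""
    if t == 0.0:
        return 1.0
    return (math.exp(t) - math.exp((n + 1) * t)) / (n * (1 - math.exp(t)))

for t in (-0.5, 0.0, 0.3):
    assert math.isclose(mgf_sum(t, 10), mgf_closed(t, 10), rel_tol=1e-12)
```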

3.2 Binomial Distribution

Definition 3.2 (Bernoulli process). A Bernoulli or binomial process has the following features:

1. We repeat n ∈ N* identical trials.
2. A trial can result in only two possible outcomes: a certain event E, called success, occurs with probability p, while the event E^c, called failure, occurs with probability 1 − p.
3. The probability of success p remains constant trial after trial. In this case, the process is said to be stationary.
4. The trials are mutually independent.

Definition 3.3 (Binomial Distribution). Let (Ω, A, P) be a probability space. Let E ∈ A be an event labeled as success, that occurs with probability p. If n ∈ N* trials are performed according to a Bernoulli process, then the random variable X defined as the number of successes among the n trials is said to have a binomial distribution Bin(n, p) and its probability mass function is given by:
\[ p_X(x) = P(X = x) = \binom{n}{x} p^x (1 - p)^{n-x}, \quad \text{for } x = 0, \dots, n \tag{12} \]

Computation of the mgf. Let X be a random variable that follows a binomial distribution Bin(n, p). The mgf of X is given by:
\begin{align*}
M_X(t) &= \sum_{x=0}^{n} e^{tx} p_X(x) \\
&= \sum_{x=0}^{n} e^{tx} \binom{n}{x} p^x (1 - p)^{n-x} \\
&= \sum_{x=0}^{n} \binom{n}{x} \left( p e^t \right)^x (1 - p)^{n-x} \\
&= \left( p e^t + 1 - p \right)^n
\end{align*}

where the last equality comes from the binomial theorem.

3.3 Poisson Distribution

Definition 3.4 (Poisson Distribution). Let (Ω, A, P) be a probability space. A random variable X is said to have a Poisson distribution P(λ), with λ > 0, if its probability mass function is given by:
\[ p_X(x) = P(X = x) = e^{-\lambda} \frac{\lambda^x}{x!}, \quad \text{for } x \in \mathbb{N} \tag{13} \]

Computation of the mgf. Let X be a random variable that follows a Poisson distribution P(λ). The mgf of X is given by:
\begin{align*}
M_X(t) &= \sum_{x=0}^{\infty} e^{tx} p_X(x) \\
&= \sum_{x=0}^{\infty} e^{tx} e^{-\lambda} \frac{\lambda^x}{x!} \\
&= e^{-\lambda} \sum_{x=0}^{\infty} \frac{\left( \lambda e^t \right)^x}{x!} \\
&= e^{-\lambda} \exp\left( \lambda e^t \right) \\
&= \exp\left( \lambda (e^t - 1) \right)
\end{align*}

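A quick numerical check (not in the original notes; standard library only) compares this closed form with a truncated version of the defining series:

```python
# A minimal sketch checking exp(lambda*(e^t - 1)) against a truncation of
# the defining series sum_x e^{tx} * e^{-lambda} * lambda^x / x!.
import math

lam, t = 3.0, 0.4

# The terms decay factorially, so 100 terms are plenty here.
series = sum(
    math.exp(t * x) * math.exp(-lam) * lam**x / math.factorial(x)
    for x in range(100)
)
closed = math.exp(lam * (math.exp(t) - 1))
assert math.isclose(series, closed, rel_tol=1e-12)
```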

3.4 Geometric Distribution

Definition 3.5 (Geometric Distribution). Let (Ω, A, P) be a probability space. Let E ∈ A be an event labeled as success, that occurs with probability p. If all the assumptions of a Bernoulli process are satisfied, except that the number of trials is not preset, then the random variable X defined as the number of trials until the first success is said to have a geometric distribution G(p) and its probability mass function is given by:
\[ p_X(x) = P(X = x) = (1 - p)^{x-1} p, \quad \text{for } x \in \mathbb{N}^* \tag{14} \]

Computation of the mgf. Left as an exercise (Homework 3)!
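In the meantime, here is a hedged Monte Carlo sketch (not part of the notes, and not the derivation asked for in the homework; standard library only). It simulates the Bernoulli process of Definition 3.5 and compares the empirical value of E[e^{tX}] with the closed form listed in the summary table below:

```python
# A minimal Monte Carlo sketch: simulate "trials until first success" and
# compare the empirical E[e^{tX}] with p*e^t / (1 - (1-p)*e^t), the closed
# form from the table below (valid for t < -ln(1-p)).
import math
import random

random.seed(0)
p, t = 0.4, 0.1  # t = 0.1 < -ln(0.6) ≈ 0.51, so the m.g.f. exists
n_samples = 100_000

def trials_until_first_success(p: float) -> int:
    """Draw from G(p) by repeating independent Bernoulli trials."""
    x = 1
    while random.random() >= p:  # failure with probability 1 - p
        x += 1
    return x

empirical = sum(
    math.exp(t * trials_until_first_success(p)) for _ in range(n_samples)
) / n_samples
closed = p * math.exp(t) / (1 - (1 - p) * math.exp(t))
print(empirical, closed)  # should agree to a couple of decimals
```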

Distribution | Probability Mass Function p_X(x) = P(X = x) | M_X(t) | E[X] | Var(X)
--- | --- | --- | --- | ---
Uniform U_n, n ∈ N* | 1/n, for x = 1, . . . , n | (e^t − e^{(n+1)t}) / (n(1 − e^t)) if t ≠ 0; 1 if t = 0 | (n + 1)/2 | (n² − 1)/12
Binomial Bin(n, p), n ∈ N*, p ∈ (0, 1) | \binom{n}{x} p^x (1 − p)^{n−x}, for x = 0, . . . , n | (p e^t + 1 − p)^n | np | np(1 − p)
Poisson P(λ), λ > 0 | e^{−λ} λ^x / x!, for x ∈ N | exp(λ(e^t − 1)) | λ | λ
Geometric G(p), p ∈ (0, 1) | (1 − p)^{x−1} p, for x ∈ N* | p e^t / (1 − (1 − p) e^t), for t < −ln(1 − p) | 1/p | (1 − p)/p²

4 Moment generating functions of Common Continuous Distributions

4.1 Continuous Uniform Distribution

Definition 4.1. A continuous rrv is said to follow a uniform distribution U(a, b) on a segment [a, b], with a < b, if its pdf is:
\[ f_X(x) = \frac{1}{b - a} \mathbf{1}_{[a,b]}(x) \tag{15} \]

Computation of the mgf. Let X be a continuous random variable that follows a uniform distribution U(a, b). The mgf of X is given by:
\begin{align*}
M_X(t) &= \int_{-\infty}^{\infty} e^{tx} f_X(x) \, dx \\
&= \frac{1}{b - a} \int_{a}^{b} e^{tx} \, dx \\
&= \frac{1}{b - a} \left[ \frac{e^{tx}}{t} \right]_{x=a}^{x=b} \\
&= \frac{e^{tb} - e^{ta}}{t(b - a)}
\end{align*}
The above equality holds for t ≠ 0. We notice that M_X(0) = 1.
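The closed form can be checked numerically against the defining integral (not in the original notes; this sketch assumes scipy is installed):

```python
# A minimal sketch checking (e^{tb} - e^{ta}) / (t(b - a)) against the
# defining integral of e^{tx} * f_X(x) over [a, b].
import math
from scipy.integrate import quad

a, b, t = 2.0, 5.0, 0.7

integral, _ = quad(lambda x: math.exp(t * x) / (b - a), a, b)
closed = (math.exp(t * b) - math.exp(t * a)) / (t * (b - a))
assert math.isclose(integral, closed, rel_tol=1e-9)
```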

4.2 Normal Distribution

Definition 4.2. A continuous random variable is said to follow a normal (or Gaussian) distribution N(µ, σ²) with parameters mean µ and variance σ², if its pdf f_X is given by:
\[ f_X(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left\{ -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2 \right\}, \quad \text{for } x \in \mathbb{R} \tag{16} \]

Computation of the mgf. We start by computing the mgf of a standard normal random variable Z ∼ N(0, 1). The mgf of Z is given by:
\begin{align*}
M_Z(t) &= \int_{-\infty}^{\infty} e^{tx} f_Z(x) \, dx \\
&= \int_{-\infty}^{\infty} e^{tx} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx \\
&= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}\left( x^2 - 2xt \right)} \, dx \\
&= e^{t^2/2} \underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x - t)^2} \, dx}_{\text{integral of the pdf of } N(t,\,1)} \\
&= e^{t^2/2}
\end{align*}
where the fourth line follows from completing the square, x² − 2xt = (x − t)² − t², and the underbraced integral equals 1 because it integrates the pdf of N(t, 1) over R.

We know (Property 4.5 in Chapter 5) that X = µ + σZ follows a normal distribution N(µ, σ²). Therefore the mgf of a normal distribution N(µ, σ²) is given by:
\begin{align*}
M_{\mu + \sigma Z}(t) &= E\left[ e^{t(\mu + \sigma Z)} \right] \\
&= e^{t\mu} E\left[ e^{t\sigma Z} \right] \\
&= e^{t\mu} M_Z(t\sigma) \\
&= e^{t\mu} e^{t^2 \sigma^2 / 2} \\
&= \exp\left( \mu t + \frac{\sigma^2 t^2}{2} \right)
\end{align*}
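A Monte Carlo check of the derived formula (not in the original notes; standard library only):

```python
# A minimal Monte Carlo sketch: the sample mean of e^{tX} with
# X ~ N(mu, sigma^2) should approach exp(mu*t + sigma^2*t^2/2).
import math
import random

random.seed(0)
mu, sigma, t = 1.0, 2.0, 0.25
n_samples = 200_000

empirical = sum(
    math.exp(t * random.gauss(mu, sigma)) for _ in range(n_samples)
) / n_samples
closed = math.exp(mu * t + sigma**2 * t**2 / 2)
print(empirical, closed)  # should agree to a couple of decimals
```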

4.3 Exponential Distribution

Definition 4.3. A continuous random variable is said to follow an exponential distribution E(λ), with λ > 0, if its pdf f_X is given by:
\[ f_X(x) = \lambda e^{-\lambda x} \mathbf{1}_{\mathbb{R}^+}(x) \tag{17} \]

Computation of the mgf. Let X be a continuous random variable that follows an exponential distribution E(λ). The mgf of X is given by:
\begin{align*}
M_X(t) &= \int_{-\infty}^{\infty} e^{tx} f_X(x) \, dx \\
&= \int_{0}^{\infty} e^{tx} \lambda e^{-\lambda x} \, dx \\
&= \lambda \int_{0}^{\infty} e^{-x(\lambda - t)} \, dx \\
&= \lambda \left[ \frac{e^{-x(\lambda - t)}}{-(\lambda - t)} \right]_{x=0}^{\infty} \\
&= \frac{\lambda}{-(\lambda - t)} \, [0 - 1] \\
&= \frac{\lambda}{\lambda - t}
\end{align*}
When integrating the exponential, we must be aware that lim_{x→∞} e^{−x(λ−t)} = 0 if and only if λ − t > 0. Therefore the derived formula holds if and only if t < λ.
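A final numerical check (not in the original notes; scipy assumed) confirming the formula on its domain t < λ:

```python
# A minimal sketch checking M_X(t) = lambda/(lambda - t) for t < lambda;
# for t >= lambda the integral diverges, as noted above.
import math
from scipy.integrate import quad

lam, t = 2.0, 0.5  # t < lam, so the integral converges

integral, _ = quad(
    lambda x: math.exp(t * x) * lam * math.exp(-lam * x), 0, math.inf
)
assert math.isclose(integral, lam / (lam - t), rel_tol=1e-9)
```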

Distribution | Probability Density Function f_X(x) | M_X(t) | E[X] | Var(X)
--- | --- | --- | --- | ---
Uniform U(a, b), a, b ∈ R with a < b | (1/(b − a)) 1_{[a,b]}(x) | (e^{tb} − e^{ta}) / (t(b − a)) if t ≠ 0; 1 if t = 0 | (a + b)/2 | (b − a)²/12
Normal N(µ, σ²), µ ∈ R, σ > 0 | (1/(σ√(2π))) exp{−(1/2)((x − µ)/σ)²}, for x ∈ R | exp(µt + σ²t²/2) | µ | σ²
Exponential E(λ), λ > 0 | λ e^{−λx} 1_{R+}(x) | λ/(λ − t), for t < λ | 1/λ | 1/λ²

