Title | Formula sheet |
---|---|
Author | Ashwin Philip |
Course | Probability and Statistics in Engineering |
Institution | Concordia University |
ENGR 371 Formula Sheet

Crib Sheets for ENGR 371

• The number of permutations of n distinct objects taken r at a time: $P_r^n = \frac{n!}{(n-r)!}$
• The number of permutations of n objects of which $n_1$ are of one kind, $n_2$ are of a second kind, ..., $n_k$ are of a k-th kind: $\frac{n!}{n_1!\, n_2! \cdots n_k!}$, where $n_1 + n_2 + \cdots + n_k = n$.
• The number of combinations of n distinct objects taken r at a time: $\binom{n}{r} = \frac{n!}{r!\,(n-r)!}$
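The counting rules above map directly onto Python's standard library; a quick sketch (the "BANANA" example is a hypothetical illustration, not from the sheet):

```python
from math import perm, comb, factorial

# Permutations of n distinct objects taken r at a time: n! / (n - r)!
assert perm(5, 2) == factorial(5) // factorial(5 - 2)  # 20

# Combinations: n! / (r! (n - r)!)
assert comb(5, 2) == factorial(5) // (factorial(2) * factorial(5 - 2))  # 10

# Permutations of n objects with repeated kinds: n! / (n1! n2! ... nk!)
# e.g. arrangements of the letters in "BANANA": 6! / (3! 2! 1!) = 60
n, counts = 6, [3, 2, 1]
arrangements = factorial(n)
for c in counts:
    arrangements //= factorial(c)
print(arrangements)  # 60
```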
• Multiplication rule: if an operation has k steps and step i can be completed in $n_i$ ways, the total number of ways to complete the operation is $n_1 \times n_2 \times \cdots \times n_k$.
• Probability of a union: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$, where A and B are two events.
• Conditional probability & Bayes' theorem: $P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(B \mid A)\,P(A)}{P(B)}$, $P(B) \neq 0$.
• Total probability: if the sample space S is partitioned into $B_1$, $B_2$, and $B_3$, then for any event A of S (A may overlap parts of $B_1$, $B_2$, and $B_3$): $P(A) = P(A \cap B_1) + P(A \cap B_2) + P(A \cap B_3) = P(A \mid B_1)P(B_1) + P(A \mid B_2)P(B_2) + P(A \mid B_3)P(B_3)$
• Bayes' theorem: if the sample space S is partitioned into $B_1$, $B_2$, and $B_3$, then for any event A of S, $P(B_k \mid A) = \frac{P(A \mid B_k)\,P(B_k)}{P(A)}$, k = 1, 2, or 3.
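The total probability and Bayes formulas can be checked numerically. A minimal sketch with a hypothetical three-part partition (the machine shares and defect rates are made-up numbers):

```python
# Hypothetical example: a part is made on one of three machines (B1, B2, B3);
# P(Bk) are the machine shares and P(A|Bk) the defect rates (made-up numbers).
p_B = [0.5, 0.3, 0.2]             # P(B1), P(B2), P(B3) -- must sum to 1
p_A_given_B = [0.01, 0.02, 0.05]  # P(A | Bk)

# Total probability: P(A) = sum over k of P(A|Bk) P(Bk)
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))

# Bayes: P(Bk | A) = P(A|Bk) P(Bk) / P(A)
posterior = [pa * pb / p_A for pa, pb in zip(p_A_given_B, p_B)]

print(round(p_A, 4))                        # 0.021
print([round(p, 3) for p in posterior])     # posterior sums to 1
```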
• Independence: $P(A \mid B) = P(A)$, $P(A \cap B) = P(A)\,P(B)$
• Cumulative probability distribution: $F(x) = P(X \leq x)$, where X is a random variable.
• Mean of a random variable X: $\mu = E(X)$, where $E(X) = \sum_{\text{all } x} x f(x)$ for a discrete random variable and $E(X) = \int_{\text{all } x} x f(x)\,dx$ for a continuous random variable; f(x) is the probability density function.
• Mean of a random function g(X): $E[g(X)] = \sum_{\text{all } x} g(x) f(x)$ for a discrete random variable X, and $E[g(X)] = \int_{\text{all } x} g(x) f(x)\,dx$ for a continuous random variable X.
• Variance of a random variable X: $\sigma^2 = E\left[(X - \mu)^2\right]$
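The discrete mean and variance definitions above are easy to verify on a small example; the pmf below is made up for illustration:

```python
# Hypothetical discrete distribution: f(x) for x in {0, 1, 2, 3} (made-up pmf)
f = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}
assert abs(sum(f.values()) - 1.0) < 1e-12

mu = sum(x * p for x, p in f.items())               # E(X)
var = sum((x - mu) ** 2 * p for x, p in f.items())  # E[(X - mu)^2]
# Equivalent shortcut: Var(X) = E(X^2) - mu^2
var2 = sum(x ** 2 * p for x, p in f.items()) - mu ** 2

print(mu, round(var, 4))  # 1.6 0.84
assert abs(var - var2) < 1e-12
```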
• Binomial distribution: probability function $\binom{n}{x} p^x (1-p)^{n-x}$, $\mu = np$ and $\sigma^2 = np(1-p)$; x is the number of successes in n trials.
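The binomial pmf and its mean/variance identities can be sketched directly from the formula (n = 10, p = 0.3 are arbitrary illustrative values):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = C(n, x) p^x (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 10, 0.3
pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]
assert abs(sum(pmf) - 1.0) < 1e-12  # pmf sums to 1

mean = sum(x * px for x, px in zip(range(n + 1), pmf))
var = sum((x - mean) ** 2 * px for x, px in zip(range(n + 1), pmf))
print(round(mean, 6), round(var, 6))  # matches np = 3.0 and np(1-p) = 2.1
```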
• Negative binomial distribution: probability function $\binom{x-1}{r-1} p^r (1-p)^{x-r}$, $\mu = r/p$ and $\sigma^2 = r(1-p)/p^2$; x is the number of trials for r successes.
• Hypergeometric distribution: probability function $\frac{\binom{K}{x}\binom{N-K}{n-x}}{\binom{N}{n}}$, where the N objects contain K successes and a random sample of n objects is selected from the N objects; x is the number of successes. $\mu = np$ and $\sigma^2 = np(1-p)\left(\frac{N-n}{N-1}\right)$, with $p = K/N$.
• Poisson distribution: probability function $\frac{e^{-\lambda}\lambda^x}{x!}$, $\mu = \lambda$ and $\sigma^2 = \lambda$; x is the number of events.
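A quick numerical check that the Poisson pmf has mean and variance equal to λ (λ = 4 is an arbitrary illustrative value; the support is truncated far enough out that the tail is negligible):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) = e^(-lambda) lambda^x / x!."""
    return exp(-lam) * lam**x / factorial(x)

lam = 4.0
# Truncate the infinite support far enough out for the check
pmf = [poisson_pmf(x, lam) for x in range(100)]
mean = sum(x * px for x, px in enumerate(pmf))
var = sum((x - mean) ** 2 * px for x, px in enumerate(pmf))
print(round(mean, 6), round(var, 6))  # both approach lambda = 4.0
```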
• Normal or Gauss distribution: $f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, and standard Normal distribution: $f(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right)$
• If X is a binomial random variable with parameters n and p, the probability of X can be approximated by the standard Normal distribution using $Z = \frac{X - np}{\sqrt{np(1-p)}}$
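The normal approximation to the binomial can be compared against the exact pmf; the sketch below adds a +0.5 continuity correction, a common refinement not stated on the sheet, and the values n = 100, p = 0.4 are arbitrary:

```python
from math import comb, sqrt
from statistics import NormalDist

n, p = 100, 0.4
# Exact binomial: P(X <= 45)
exact = sum(comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(46))

# Normal approximation with Z = (X - np) / sqrt(np(1-p)),
# using a continuity correction of +0.5 (assumption: a common refinement)
z = (45 + 0.5 - n * p) / sqrt(n * p * (1 - p))
approx = NormalDist().cdf(z)

print(round(exact, 4), round(approx, 4))  # the two values agree closely
```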
• If X is a Poisson random variable with parameter $\lambda$, the probability of X can be approximated by the standard Normal distribution using $Z = \frac{X - \lambda}{\sqrt{\lambda}}$
• Exponential distribution: $f(x) = \begin{cases} \frac{1}{\beta}\exp\left(-\frac{x}{\beta}\right) & x > 0 \\ 0 & \text{elsewhere} \end{cases}$, $\beta > 0$, $\mu = \beta$ and $\sigma^2 = \beta^2$
• Covariance between random variables X and Y: $\sigma_{XY} = E\left[(X - \mu_X)(Y - \mu_Y)\right] = E(XY) - \mu_X \mu_Y$
• Correlation between random variables X and Y: $\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$
• Marginal probability functions of X and Y: $f_X(x) = \int_{\text{all } y} f(x, y)\,dy$ and $f_Y(y) = \int_{\text{all } x} f(x, y)\,dx$ for continuous random variables X and Y; $f_X(x) = \sum_{\text{all } y} f(x, y)$ and $f_Y(y) = \sum_{\text{all } x} f(x, y)$ for discrete random variables X and Y.
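Marginals and covariance are straightforward to compute from a discrete joint pmf; the table below is a made-up example:

```python
# Hypothetical joint pmf f(x, y) for discrete X and Y (made-up values, sum = 1)
f = {(0, 0): 0.10, (0, 1): 0.20,
     (1, 0): 0.25, (1, 1): 0.15,
     (2, 0): 0.05, (2, 1): 0.25}

xs = sorted({x for x, _ in f})
ys = sorted({y for _, y in f})

# Marginals: f_X(x) = sum over all y of f(x, y); f_Y(y) = sum over all x
f_x = {x: sum(f[(x, y)] for y in ys) for x in xs}
f_y = {y: sum(f[(x, y)] for x in xs) for y in ys}

# Covariance via sigma_XY = E(XY) - mu_X mu_Y
mu_x = sum(x * p for x, p in f_x.items())
mu_y = sum(y * p for y, p in f_y.items())
cov = sum(x * y * p for (x, y), p in f.items()) - mu_x * mu_y
print(f_x, f_y, round(cov, 4))
```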
• Conditional probability function: $f(y \mid x) = \frac{f(x, y)}{f_X(x)}$, with $\int_{\text{all } y} f(y \mid x)\,dy = 1$ or $\sum_{\text{all } y} f(y \mid x) = 1$
• Conditional mean: $E(Y \mid x) = \int_{\text{all } y} y\, f(y \mid x)\,dy$ for continuous random variables.
• Independence: $f(x, y) = f(x)\,f(y)$, or $f(y \mid x) = f(y)$, where $f(x)$ and $f(y)$ are marginal probability functions.
• Mean of a function $h(X, Y)$, where X and Y are two random variables: $E[h(X, Y)] = \sum_{\text{all } x}\sum_{\text{all } y} h(x, y) f(x, y)$ for discrete random variables, or $E[h(X, Y)] = \iint h(x, y) f(x, y)\,dx\,dy$ for continuous random variables.
• Multinomial distribution: n trials in total; the success probabilities of classes 1, 2, ..., k are $p_1, p_2, \ldots, p_k$; the probability function is $\frac{n!}{x_1!\, x_2! \cdots x_k!}\, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}$, where $x_1, x_2, \ldots, x_k$ are the numbers of trials corresponding to classes 1, 2, ..., and k.
• Linear combination Z = aX + bY + C: the mean of Z is $E(Z) = aE(X) + bE(Y) + C$; the variance of Z is $\sigma_Z^2 = a^2\sigma_X^2 + b^2\sigma_Y^2 + 2ab\,\sigma_{XY}$.
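The linear-combination variance identity can be verified directly against a small joint pmf (all numbers below are made up for illustration):

```python
# Check sigma_Z^2 = a^2 sigma_X^2 + b^2 sigma_Y^2 + 2ab sigma_XY on a small
# hypothetical joint pmf (made-up numbers)
f = {(0, 0): 0.2, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.4}
a, b, c = 2.0, -1.0, 5.0

ex  = sum(x * p for (x, y), p in f.items())
ey  = sum(y * p for (x, y), p in f.items())
vx  = sum((x - ex) ** 2 * p for (x, y), p in f.items())
vy  = sum((y - ey) ** 2 * p for (x, y), p in f.items())
cov = sum((x - ex) * (y - ey) * p for (x, y), p in f.items())

# Direct variance of Z = aX + bY + c over the joint pmf
ez = a * ex + b * ey + c
vz = sum((a * x + b * y + c - ez) ** 2 * p for (x, y), p in f.items())

print(round(vz, 6), round(a**2 * vx + b**2 * vy + 2 * a * b * cov, 6))
```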
• Sample mean for a sample of size n: $\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$, for n random samples.
• Sample variance: $S^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}$
• Test statistic: $Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ has the standard Normal distribution for n random samples, where $\mu$ and $\sigma$ are the population mean and standard deviation, respectively.
• Test statistic: $T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$ has a t-distribution with n − 1 degrees of freedom.
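The sample statistics above can be sketched with the standard library; the data values and the hypothesised $\mu_0$, $\sigma$ below are made up:

```python
from math import sqrt
from statistics import mean, stdev  # stdev uses the n-1 denominator

# Hypothetical measurements (made-up data)
data = [9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3]
n = len(data)
xbar, s = mean(data), stdev(data)

# Z statistic against a hypothesised mean mu0 = 10 with known sigma = 0.25
mu0, sigma = 10.0, 0.25
z = (xbar - mu0) / (sigma / sqrt(n))

# T statistic uses S in place of sigma (t-distribution, n-1 df)
t = (xbar - mu0) / (s / sqrt(n))
print(round(xbar, 3), round(z, 3), round(t, 3))
```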
• For two independent populations with means $\mu_1$ and $\mu_2$ and variances $\sigma_1^2$ and $\sigma_2^2$, the test statistic $Z = \frac{\bar{X}_1 - \bar{X}_2 - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}$ has the standard Normal distribution, where $\bar{X}_1$ and $\bar{X}_2$ are two independent sample means from samples of size $n_1$ and $n_2$.
• An unbiased point estimator $\hat{\Theta}$ for a parameter $\theta$ must satisfy $E(\hat{\Theta}) = \theta$.
• If $\bar{X}$ is the sample mean of a sample of size n from a population with known variance $\sigma^2$, a $(1-\alpha)100\%$ confidence interval for the population mean: $\bar{X} - Z_{\alpha/2}\frac{\sigma}{\sqrt{n}} < \mu < \bar{X} + Z_{\alpha/2}\frac{\sigma}{\sqrt{n}}$. Note: $(1-\alpha)100\%$ confidence upper bound: $\mu < \bar{X} + Z_{\alpha}\frac{\sigma}{\sqrt{n}}$; $(1-\alpha)100\%$ confidence lower bound: $\mu > \bar{X} - Z_{\alpha}\frac{\sigma}{\sqrt{n}}$.
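The known-variance z-interval is a short computation; `statistics.NormalDist` supplies the $Z_{\alpha/2}$ quantile, and the numbers below are made up:

```python
from math import sqrt
from statistics import NormalDist

# 95% z-interval for the mean with known sigma (made-up numbers)
xbar, sigma, n, alpha = 10.05, 0.25, 8, 0.05
z_half = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.95996

half_width = z_half * sigma / sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(round(lo, 3), round(hi, 3))
```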
• If $\bar{X}$ is the sample mean of a sample of size n from a population with unknown variance, a $(1-\alpha)100\%$ confidence interval for the population mean: $\bar{X} - T_{\alpha/2,\,n-1}\frac{S}{\sqrt{n}} < \mu < \bar{X} + T_{\alpha/2,\,n-1}\frac{S}{\sqrt{n}}$. Note: $(1-\alpha)100\%$ confidence upper bound: $\mu < \bar{X} + T_{\alpha,\,n-1}\frac{S}{\sqrt{n}}$; $(1-\alpha)100\%$ confidence lower bound: $\mu > \bar{X} - T_{\alpha,\,n-1}\frac{S}{\sqrt{n}}$.
• If $S^2$ is the sample variance, a $(1-\alpha)100\%$ confidence interval for the population variance $\sigma^2$: $\frac{(n-1)S^2}{\chi^2_{\alpha/2,\,n-1}} \leq \sigma^2 \leq \frac{(n-1)S^2}{\chi^2_{1-\alpha/2,\,n-1}}$. Note: upper bound: $\sigma^2 \leq \frac{(n-1)S^2}{\chi^2_{1-\alpha,\,n-1}}$; lower bound: $\frac{(n-1)S^2}{\chi^2_{\alpha,\,n-1}} \leq \sigma^2$.
• A $(1-\alpha)100\%$ prediction interval on a single future observation from a Normal distribution: $\bar{X} - T_{\alpha/2,\,n-1}\,S\sqrt{1 + \frac{1}{n}} < X_{n+1} < \bar{X} + T_{\alpha/2,\,n-1}\,S\sqrt{1 + \frac{1}{n}}$
• Probability of type II error: $\beta = \Phi\left(Z_{\alpha/2} - \frac{\delta\sqrt{n}}{\sigma}\right) - \Phi\left(-Z_{\alpha/2} - \frac{\delta\sqrt{n}}{\sigma}\right)$ for a two-sided hypothesis, $\delta = \mu - \mu_0$.
• Probability of type II error: $\beta = \Phi\left(Z_{\alpha} - \frac{\delta\sqrt{n}}{\sigma}\right)$ for an upper-sided hypothesis.
• Probability of type II error: $\beta = 1 - \Phi\left(-Z_{\alpha} - \frac{\delta\sqrt{n}}{\sigma}\right)$ for a lower-sided hypothesis.
• If the population variance is unknown, calculate $\beta$ by replacing Z with T (t-distribution) for the mean hypothesis.
• Sample size for a two-sided test: $n \approx \frac{(Z_{\alpha/2} + Z_{\beta})^2 \sigma^2}{\delta^2}$
• Hypothesis test for a population variance: test statistic $\chi^2 = \frac{(n-1)S^2}{\sigma_0^2}$, where $\sigma_0^2$ is the hypothesized variance.
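The t-interval for the mean and the chi-square interval for the variance can be sketched together. The standard library has no t or chi-square quantiles, so the values below are hardcoded from standard tables ($t_{0.025,7} = 2.365$, $\chi^2_{0.025,7} = 16.013$, $\chi^2_{0.975,7} = 1.690$); the data are made up:

```python
from math import sqrt
from statistics import mean, stdev

data = [9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3]  # made-up sample
n, xbar, s = len(data), mean(data), stdev(data)

# 95% t-interval for mu (unknown sigma); t_{0.025,7} = 2.365 from a t-table
t_half = 2.365
lo, hi = xbar - t_half * s / sqrt(n), xbar + t_half * s / sqrt(n)

# 95% chi-square interval for sigma^2; chi2_{0.025,7} = 16.013 and
# chi2_{0.975,7} = 1.690 from a chi-square table
v_lo = (n - 1) * s**2 / 16.013
v_hi = (n - 1) * s**2 / 1.690
print(round(lo, 3), round(hi, 3), round(v_lo, 4), round(v_hi, 4))
```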