
CS/SE 3341

SOLUTIONS

Anti-Final

1. Users call the help desk every 15 minutes, on the average. There is one help desk specialist on duty, and her average service time is 9 minutes. Modeling the help desk as an M/M/1 queuing system, compute: (a) the expected number of users in the system at any time, including the users talking to the specialist and those waiting to be served; (b) the probability that the next caller does not have to wait until the help desk specialist responds to him; (c) the proportion of time when exactly two users are waiting.

Solution. Given: µ_A = 15 min, µ_S = 9 min. Then r = λ_A/λ_S = µ_S/µ_A = 9/15 = 0.6.

(a) E(X) = r/(1 − r) = 0.6/0.4 = 1.5 users

(b) P(W = 0) = π_0 = 1 − r = 0.4

(c) P(X_w = 2) = P(X = 3) = π_3 = (1 − r)r^3 = (0.4)(0.6)^3 = 0.0864
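
As a quick numeric check, the same quantities in a few lines of plain Python:

    # M/M/1 quantities for Problem 1
    mu_A, mu_S = 15.0, 9.0            # mean interarrival and mean service times (minutes)
    r = mu_S / mu_A                   # utilization r = lambda_A / lambda_S = 0.6

    E_X = r / (1 - r)                 # expected number of users in the system
    P_no_wait = 1 - r                 # pi_0 = P(system is empty)
    P_two_waiting = (1 - r) * r**3    # P(X_w = 2) = P(X = 3) = pi_3

    print(E_X, P_no_wait, P_two_waiting)   # approximately 1.5, 0.4, 0.0864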

2. Unauthorized attempts to log into a certain computer system occur according to a Poisson process with the rate of one attempt every five days. (a) What is the probability of at least 3 unauthorized attempts during the next week? (b) After the 5th unauthorized attempt, the computer manager will change the password. What is the probability that this will happen during the next 12 days?

Solution. (a) We need P(X ≥ 3), where X is the number of unauthorized attempts during the next 7 days. This X has Poisson distribution with parameter λt = (1/5)(7) = 1.4. From the given table of the Poisson distribution,

P(X ≥ 3) = 1 − F(2) = 1 − 0.833 = 0.167

(b) We need P(T ≤ 12), where T is the time of the 5th event. This T has Gamma distribution with parameters r = 5 and λ = 1/5. Using the Gamma-Poisson formula,

P(T ≤ 12) = P(Y ≥ 5) = 1 − F(4) = 1 − 0.904 = 0.096

from the Poisson table, where Y has Poisson distribution with parameter λt = (1/5)(12) = 2.4.
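
For a numeric check instead of a table lookup, a small sketch assuming SciPy is available (note that SciPy's gamma distribution is parameterized by scale = 1/λ):

    from scipy.stats import gamma, poisson

    lam = 1 / 5                                    # one attempt every five days

    # (a) P(X >= 3) for X ~ Poisson(1.4), the count over the next 7 days
    p_a = 1 - poisson.cdf(2, lam * 7)              # about 0.167

    # (b) P(T <= 12) for T ~ Gamma(r = 5, lambda = 1/5); scale = 1/lambda in SciPy
    p_b_gamma = gamma.cdf(12, a=5, scale=1 / lam)  # about 0.096
    # Same value via the Gamma-Poisson formula: P(Y >= 5), Y ~ Poisson(2.4)
    p_b_poisson = 1 - poisson.cdf(4, lam * 12)

    print(p_a, p_b_gamma, p_b_poisson)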

3. A dog can eat one piece of meat at a time. When he is busy eating, the other pieces of meat will be eaten by other pets. On the average, the dog’s owner throws him a piece of meat every 20 minutes, and it takes the dog 10 minutes to eat it. Assume a Bernoulli single-server queuing process with 5-minute frames and capacity limited by 1 piece of meat. (a) Find the transition probability matrix for the amount of meat (the number of pieces) that the dog has at any time. (b) Find the steady-state proportion of time when the dog has some meat to eat.

Solution. (a) We are given λ_A = 1/20 min⁻¹, λ_S = 1/10 min⁻¹, Δ = 5 min, and C = 1. Compute the probabilities

p_A = λ_A Δ = (1/20)(5) = 1/4,
p_S = λ_S Δ = (1/10)(5) = 1/2,

and the transition probability matrix (with X ∈ {0, 1} due to the limited capacity)

P = [ 1 − p_A          p_A              ]  =  [ 3/4   1/4 ]
    [ p_S(1 − p_A)     1 − p_S(1 − p_A) ]     [ 3/8   5/8 ]

(b) Solve the steady-state equations πP = π, Σ π_i = 1:

(3/4)π_0 + (3/8)π_1 = π_0
(1/4)π_0 + (5/8)π_1 = π_1
π_0 + π_1 = 1

The first two equations both reduce to (1/4)π_0 = (3/8)π_1, that is, π_1 = (2/3)π_0. Substituting into π_0 + π_1 = 1 gives (5/3)π_0 = 1, so

π_0 = 3/5,  π_1 = (2/3)π_0 = 2/5.

The dog has some meat to eat 2/5, or 40%, of the time.

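A small sketch, assuming NumPy is available, that rebuilds the transition matrix from part (a) and solves the steady-state equations numerically:

    import numpy as np

    pA, pS = 1 / 4, 1 / 2
    P = np.array([[1 - pA,        pA],
                  [pS * (1 - pA), 1 - pS * (1 - pA)]])   # [[3/4, 1/4], [3/8, 5/8]]

    # Solve pi P = pi together with sum(pi) = 1 (replace one redundant equation).
    A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
    b = np.array([0.0, 1.0])
    pi = np.linalg.solve(A, b)
    print(pi)   # [0.6 0.4] -> the dog has meat 2/5 = 40% of the time
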
4. In a recent survey of 500 U.S. workers, 18% say it is likely that they will lose their job in the next year. Construct a 95% confidence interval for the true proportion of U.S. workers who think they are likely to lose their job in the next year.

Solution. Let p be the proportion of U.S. workers who think they are likely to lose their job in the next year. We want a 95% confidence interval for p. In this case, we would like to use a z-interval. Using the formula, the confidence interval is:

p̂ ± z_{α/2} √(p̂(1 − p̂)/n) = 0.18 ± 1.96 √((0.18)(0.82)/500) = 0.18 ± 0.034 = (0.146, 0.214).
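
A quick check of the interval in Python; the critical value 1.96 is taken as given (it could also be obtained from scipy.stats.norm.ppf(0.975)):

    from math import sqrt

    p_hat, n, z = 0.18, 500, 1.96
    margin = z * sqrt(p_hat * (1 - p_hat) / n)   # about 0.034
    print(p_hat - margin, p_hat + margin)        # about (0.146, 0.214)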

5. Suppose a researcher claims that the mean age of founders of start-up companies in Silicon Valley is less than 30 years. To test his claim, he randomly selected 7 start-up companies and obtained the following data on the age of their founders: 27, 28, 32, 33, 24, 28, 31 years. Is there significant evidence that the mean age of founders is less than 30 years? Answer this question by performing an appropriate hypothesis test at 5% level of significance. Assume normal distribution for the age random variable.

Solution. Let µ denote the mean age (in years) of founders of start-up companies in Silicon Valley. We want to test the null hypothesis H_0: µ = 30 (or µ ≥ 30) against the alternative hypothesis H_1: µ < 30. Since the data are normally distributed with unknown population variance, we would like to use a left-tailed t-test for these hypotheses. From the given data, we can compute x̄ = 29, s = 3.162, and

t = (x̄ − 30)/(s/√7) = −0.837.

Further, the critical point for this test is t_{n−1, α} = t_{6, 0.05} = 1.943. Since t = −0.837 > −t_{6, 0.05} = −1.943, we accept H_0. Therefore, at the 5% level, there is no significant evidence that µ is less than 30 years.
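
A numeric check of this test, assuming SciPy is available; scipy.stats.ttest_1samp returns a two-sided p-value, so it is halved here for the left-tailed alternative:

    from scipy import stats

    ages = [27, 28, 32, 33, 24, 28, 31]
    t_stat, p_two_sided = stats.ttest_1samp(ages, popmean=30)
    p_left = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
    crit = stats.t.ppf(0.05, df=len(ages) - 1)   # left-tail critical point, about -1.943

    print(t_stat, p_left, crit)   # t about -0.837, p-value about 0.22 > 0.05 -> do not reject H0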

6. Let the random variable X denote the severity of a virus attack on a scale of zero to one. Suppose X follows a continuous distribution with probability density function

f_θ(x) = (θ + 1)x^θ for 0 < x < 1, and f_θ(x) = 0 otherwise,

where θ > −1 is an unknown parameter. The severities of the last four virus attacks are as follows: 0.10, 0.30, 0.50, 0.20. Use these data to estimate θ using either the method of moments or the method of maximum likelihood. Assume that the virus attacks are independent.

Solution. Method of moments: First, we compute

E(X) = ∫₀¹ x f_θ(x) dx = (θ + 1) ∫₀¹ x^{θ+1} dx = (θ + 1)/(θ + 2),

and x̄ = 0.275. Next, we solve 0.275 = (θ + 1)/(θ + 2) for θ to get θ̂_MOM = −0.45/0.725 = −0.621.

Method of maximum likelihood: The log-likelihood function is

ln L(θ) = ln ( ∏_{i=1}^{4} (θ + 1)x_i^θ ) = 4 ln(θ + 1) + θ Σ_{i=1}^{4} ln x_i.

Differentiating this function with respect to θ and equating it to zero, we get the likelihood equation

0 = ∂/∂θ ln L(θ) = 4/(θ + 1) + Σ_{i=1}^{4} ln x_i.

Solving it for θ, we get the MLE as

θ̂_MLE = −1 − 4/Σ_{i=1}^{4} ln x_i = −0.311
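
Both estimates can be reproduced in plain Python; the closed forms below are the method-of-moments and likelihood equations from the solution, rearranged for θ:

    from math import log

    x = [0.10, 0.30, 0.50, 0.20]
    n = len(x)

    # Method of moments: solve x_bar = (theta + 1)/(theta + 2) for theta.
    x_bar = sum(x) / n                             # 0.275
    theta_mom = (2 * x_bar - 1) / (1 - x_bar)      # about -0.621

    # Maximum likelihood: theta_hat = -1 - n / sum(ln x_i).
    theta_mle = -1 - n / sum(log(xi) for xi in x)  # about -0.311

    print(theta_mom, theta_mle)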

Cheat sheet for the Final Exam

Discrete Distributions

Expected value: µ = E(X) = Σ_x x P(x)
Expected value of a function: E g(X) = Σ_x g(x) P(x)
Variance: σ² = Var(X) = Σ_x (x − µ)² P(x)
Binomial probability mass function: P(x) = [n!/(x!(n − x)!)] p^x (1 − p)^{n−x} for x = 0, 1, ..., n
Geometric probability mass function: P(x) = (1 − p)^{x−1} p for x = 1, 2, ...
Poisson probability mass function: P(x) = e^{−λ} λ^x / x! for x = 0, 1, ...

Continuous Distributions

Expected value: µ = E(X) = ∫ x f(x) dx
Expected value of a function: E g(X) = ∫ g(x) f(x) dx
Variance: σ² = Var(X) = ∫ (x − µ)² f(x) dx
Exponential density: f(x) = λe^{−λx} for 0 < x < ∞
Uniform density: f(x) = 1/(b − a) for a < x < b
Gamma density: f(x) = [λ^r/Γ(r)] x^{r−1} e^{−λx} for 0 < x < ∞
Gamma-Poisson formula: P(X < x) = P(Y ≥ r); P(X > x) = P(Y < r) for X ~ Gamma(r, λ), Y ~ Poisson(λx)
Normal density: f(x) = [1/(σ√(2π))] e^{−(x−µ)²/(2σ²)} for −∞ < x < ∞
Normal approximation: Binomial(n, p) ≈ Normal(µ = np, σ = √(np(1 − p))) for n ≥ 30, 0.05 ≤ p ≤ 0.95
Central Limit Theorem: [(X_1 + ... + X_n) − nµ]/(σ√n) → Normal(0, 1) as n → ∞
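
As a small illustration of the Central Limit Theorem entry, a simulation sketch (assuming NumPy; the sample size n = 50 and the Uniform(0, 1) summands are arbitrary choices) that standardizes sums of i.i.d. uniform variables and compares an empirical probability with Φ(1.0) = 0.8413 from the normal table below:

    import numpy as np

    rng = np.random.default_rng(0)
    n, mu, sigma = 50, 0.5, np.sqrt(1 / 12)              # X_i ~ Uniform(0, 1)
    sums = rng.uniform(0, 1, size=(10_000, n)).sum(axis=1)
    z = (sums - n * mu) / (sigma * np.sqrt(n))           # standardized sums
    print(np.mean(z < 1.0))                              # close to Phi(1.0) = 0.8413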

Expected values and variances of some distributions

Distribution      E(X)         Var(X)
Bernoulli(p)      p            p(1 − p)
Binomial(n, p)    np           np(1 − p)
Geometric(p)      1/p          (1 − p)/p²
Poisson(λ)        λ            λ
Exponential(λ)    1/λ          1/λ²
Gamma(r, λ)       r/λ          r/λ²
Uniform(a, b)     (a + b)/2    (b − a)²/12
Normal(µ, σ)      µ            σ²

Stochastic processes

Binomial process: p = λΔ, number of frames n = t/Δ; number of events in time t is X(t) ~ Binomial(n, p); interarrival time is T = YΔ, where Y ~ Geometric(p)
Poisson process: number of events in time t is X(t) ~ Poisson(λt); interarrival time is T ~ Exponential(λ)

Markov chains

k-step transition probability matrix: P_k = P^k
Steady-state distribution π is a solution of πP = π, Σ_i π_i = 1
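
A small sketch, assuming NumPy: k-step transition probabilities are matrix powers of P, and for the chain from Problem 3 the rows of a high power of P approach the steady-state distribution (0.6, 0.4):

    import numpy as np

    P = np.array([[0.75,  0.25],            # transition matrix from Problem 3
                  [0.375, 0.625]])
    P3 = np.linalg.matrix_power(P, 3)       # 3-step transition probabilities
    P100 = np.linalg.matrix_power(P, 100)   # every row is close to (0.6, 0.4)
    print(P3)
    print(P100)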

Queuing Systems

Bernoulli single-server queuing system with capacity C
Arrival probability p_A = λ_A Δ, departure probability p_S = λ_S Δ.
Transition probabilities:
p_00 = 1 − p_A, p_01 = p_A;
p_{k,k−1} = p_S(1 − p_A), p_{k,k} = p_A p_S + (1 − p_A)(1 − p_S), p_{k,k+1} = p_A(1 − p_S) for 1 ≤ k ≤ C − 1;
p_{C,C−1} = p_S(1 − p_A), p_{C,C} = p_A p_S + (1 − p_A)(1 − p_S) + p_A(1 − p_S) = 1 − p_{C,C−1}.

M/M/1 queuing system
Distribution of the number of jobs: π_x = P{X = x} = r^x(1 − r) for x = 0, 1, 2, ...
E(X) = r/(1 − r), Var(X) = r/(1 − r)², where r = λ_A/λ_S = µ_S/µ_A
Performance characteristics:
E(T) = 1/(λ_S(1 − r)) = µ_S/(1 − r)
E(W) = r/(λ_S(1 − r)) = µ_S r/(1 − r)
E(X_w) = r²/(1 − r)
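
Applying these formulas to the help desk of Problem 1 (µ_A = 15, µ_S = 9), a quick Python check:

    mu_A, mu_S = 15.0, 9.0
    r = mu_S / mu_A                  # 0.6

    E_T = mu_S / (1 - r)             # expected response time: 22.5 minutes
    E_W = mu_S * r / (1 - r)         # expected waiting time: 13.5 minutes
    E_Xw = r**2 / (1 - r)            # expected number of waiting jobs: 0.9
    print(E_T, E_W, E_Xw)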

Statistics

Sample mean X̄ = (1/n) Σ X_i, sample variance s² = [1/(n − 1)] Σ (X_i − X̄)²
Confidence interval for the mean: X̄ ± z_{α/2} σ/√n (known σ), or X̄ ± t_{α/2} s/√n (unknown σ)
Confidence interval for the proportion: p̂ ± z_{α/2} √(p̂(1 − p̂)/n)

Hypothesis testing

H_0        H_A         Test statistic                       Rejection region
µ = µ_0    µ < µ_0     Z = (X̄ − µ_0)/(σ/√n) (known σ)      Z < −z_α
           µ > µ_0                                          Z > z_α
           µ ≠ µ_0                                          |Z| > z_{α/2}

µ = µ_0    µ < µ_0     t = (X̄ − µ_0)/(s/√n) (unknown σ)    t < −t_α
           µ > µ_0                                          t > t_α
           µ ≠ µ_0                                          |t| > t_{α/2}

p = p_0    p < p_0     Z = (p̂ − p_0)/√(p_0(1 − p_0)/n)     Z < −z_α
           p > p_0                                          Z > z_α
           p ≠ p_0                                          |Z| > z_{α/2}
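
A hypothetical worked example of the proportion-test row (the values p̂ = 0.18, p_0 = 0.15, n = 500 are made up for illustration; SciPy is used only for the normal critical value):

    from math import sqrt
    from scipy.stats import norm

    p_hat, p0, n, alpha = 0.18, 0.15, 500, 0.05
    Z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # about 1.88
    z_alpha = norm.ppf(1 - alpha)                # about 1.645
    print(Z > z_alpha)                           # True -> reject H0: p = p0 in favor of HA: p > p0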

Binomial Cumulative Distribution Function, F(x) = P(X ≤ x)
Each row gives F(x) for the indicated p, at the x values listed for that n.

n = 5 (x = 0, 1, 2, 3, 4):
p = .05: .774 .977 .999 1.0 1.0
p = .10: .590 .919 .991 1.0 1.0
p = .15: .444 .835 .973 .998 1.0
p = .20: .328 .737 .942 .993 1.0
p = .25: .237 .633 .896 .984 .999
p = .30: .168 .528 .837 .969 .998
p = .35: .116 .428 .765 .946 .995
p = .40: .078 .337 .683 .913 .990
p = .45: .050 .256 .593 .869 .982
p = .50: .031 .188 .500 .813 .969
p = .55: .018 .131 .407 .744 .950
p = .60: .010 .087 .317 .663 .922
p = .65: .005 .054 .235 .572 .884
p = .70: .002 .031 .163 .472 .832
p = .75: .001 .016 .104 .367 .763
p = .80: .000 .007 .058 .263 .672
p = .85: .000 .002 .027 .165 .556
p = .90: .000 .000 .009 .081 .410
p = .95: .000 .000 .001 .023 .226

n = 10 (x = 0, 1, ..., 8):
p = .05: .599 .914 .988 .999 1.0 1.0 1.0 1.0 1.0
p = .10: .349 .736 .930 .987 .998 1.0 1.0 1.0 1.0
p = .15: .197 .544 .820 .950 .990 .999 1.0 1.0 1.0
p = .20: .107 .376 .678 .879 .967 .994 .999 1.0 1.0
p = .25: .056 .244 .526 .776 .922 .980 .996 1.0 1.0
p = .30: .028 .149 .383 .650 .850 .953 .989 .998 1.0
p = .35: .013 .086 .262 .514 .751 .905 .974 .995 .999
p = .40: .006 .046 .167 .382 .633 .834 .945 .988 .998
p = .45: .003 .023 .100 .266 .504 .738 .898 .973 .995
p = .50: .001 .011 .055 .172 .377 .623 .828 .945 .989
p = .55: .000 .005 .027 .102 .262 .496 .734 .900 .977
p = .60: .000 .002 .012 .055 .166 .367 .618 .833 .954
p = .65: .000 .001 .005 .026 .095 .249 .486 .738 .914
p = .70: .000 .000 .002 .011 .047 .150 .350 .617 .851
p = .75: .000 .000 .000 .004 .020 .078 .224 .474 .756
p = .80: .000 .000 .000 .001 .006 .033 .121 .322 .624
p = .85: .000 .000 .000 .000 .001 .010 .050 .180 .456
p = .90: .000 .000 .000 .000 .000 .002 .013 .070 .264
p = .95: .000 .000 .000 .000 .000 .000 .001 .012 .086

n = 15 (x = 0, 1, ..., 11):
p = .05: .463 .829 .964 .995 .999 1.0 1.0 1.0 1.0 1.0 1.0 1.0
p = .10: .206 .549 .816 .944 .987 .998 1.0 1.0 1.0 1.0 1.0 1.0
p = .15: .087 .319 .604 .823 .938 .983 .996 .999 1.0 1.0 1.0 1.0
p = .20: .035 .167 .398 .648 .836 .939 .982 .996 .999 1.0 1.0 1.0
p = .25: .013 .080 .236 .461 .686 .852 .943 .983 .996 .999 1.0 1.0
p = .30: .005 .035 .127 .297 .515 .722 .869 .950 .985 .996 .999 1.0
p = .35: .002 .014 .062 .173 .352 .564 .755 .887 .958 .988 .997 1.0
p = .40: .000 .005 .027 .091 .217 .403 .610 .787 .905 .966 .991 .998
p = .45: .000 .002 .011 .042 .120 .261 .452 .654 .818 .923 .975 .994
p = .50: .000 .000 .004 .018 .059 .151 .304 .500 .696 .849 .941 .982
p = .55: .000 .000 .001 .006 .025 .077 .182 .346 .548 .739 .880 .958
p = .60: .000 .000 .000 .002 .009 .034 .095 .213 .390 .597 .783 .909
p = .65: .000 .000 .000 .000 .003 .012 .042 .113 .245 .436 .648 .827
p = .70: .000 .000 .000 .000 .001 .004 .015 .050 .131 .278 .485 .703
p = .75: .000 .000 .000 .000 .000 .001 .004 .017 .057 .148 .314 .539
p = .80: .000 .000 .000 .000 .000 .000 .001 .004 .018 .061 .164 .352
p = .85: .000 .000 .000 .000 .000 .000 .000 .001 .004 .017 .062 .177
p = .90: .000 .000 .000 .000 .000 .000 .000 .000 .000 .002 .013 .056
p = .95: .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .001 .005

n = 20 (x = 1, 2, ..., 15):
p = .05: .736 .925 .984 .997 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
p = .10: .392 .677 .867 .957 .989 .998 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
p = .15: .176 .405 .648 .830 .933 .978 .994 .999 1.0 1.0 1.0 1.0 1.0 1.0 1.0
p = .20: .069 .206 .411 .630 .804 .913 .968 .990 .997 .999 1.0 1.0 1.0 1.0 1.0
p = .25: .024 .091 .225 .415 .617 .786 .898 .959 .986 .996 .999 1.0 1.0 1.0 1.0
p = .30: .008 .035 .107 .238 .416 .608 .772 .887 .952 .983 .995 .999 1.0 1.0 1.0
p = .35: .002 .012 .044 .118 .245 .417 .601 .762 .878 .947 .980 .994 .998 1.0 1.0
p = .40: .001 .004 .016 .051 .126 .250 .416 .596 .755 .872 .943 .979 .994 .998 1.0
p = .45: .000 .001 .005 .019 .055 .130 .252 .414 .591 .751 .869 .942 .979 .994 .998
p = .50: .000 .000 .001 .006 .021 .058 .132 .252 .412 .588 .748 .868 .942 .979 .994
p = .55: .000 .000 .000 .002 .006 .021 .058 .131 .249 .409 .586 .748 .870 .945 .981
p = .60: .000 .000 .000 .000 .002 .006 .021 .057 .128 .245 .404 .584 .750 .874 .949
p = .65: .000 .000 .000 .000 .000 .002 .006 .020 .053 .122 .238 .399 .583 .755 .882
p = .70: .000 .000 .000 .000 .000 .000 .001 .005 .017 .048 .113 .228 .392 .584 .762
p = .75: .000 .000 .000 .000 .000 .000 .000 .001 .004 .014 .041 .102 .214 .383 .585
p = .80: .000 .000 .000 .000 .000 .000 .000 .000 .001 .003 .010 .032 .087 .196 .370
p = .85: .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .001 .006 .022 .067 .170
p = .90: .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .002 .011 .043
p = .95: .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .003

Poisson Cumulative Distribution Function, F(x) = P(X ≤ x)
Each row gives F(x) for the indicated λ, at x = 0, 1, 2, ... in order.

λ = 0.1 to 1.5 (x = 0, 1, ..., 5):
λ = 0.1: .905 .995 1.00 1.00 1.00 1.00
λ = 0.2: .819 .982 .999 1.00 1.00 1.00
λ = 0.3: .741 .963 .996 1.00 1.00 1.00
λ = 0.4: .670 .938 .992 .999 1.00 1.00
λ = 0.5: .607 .910 .986 .998 1.00 1.00
λ = 0.6: .549 .878 .977 .997 1.00 1.00
λ = 0.7: .497 .844 .966 .994 .999 1.00
λ = 0.8: .449 .809 .953 .991 .999 1.00
λ = 0.9: .407 .772 .937 .987 .998 1.00
λ = 1.0: .368 .736 .920 .981 .996 .999
λ = 1.1: .333 .699 .900 .974 .995 .999
λ = 1.2: .301 .663 .879 .966 .992 .998
λ = 1.3: .273 .627 .857 .957 .989 .998
λ = 1.4: .247 .592 .833 .946 .986 .997
λ = 1.5: .223 .558 .809 .934 .981 .996

λ = 1.6 to 3.0 (x = 0, 1, ..., 9):
λ = 1.6: .202 .525 .783 .921 .976 .994 .999 1.00 1.00 1.00
λ = 1.7: .183 .493 .757 .907 .970 .992 .998 1.00 1.00 1.00
λ = 1.8: .165 .463 .731 .891 .964 .990 .997 .999 1.00 1.00
λ = 1.9: .150 .434 .704 .875 .956 .987 .997 .999 1.00 1.00
λ = 2.0: .135 .406 .677 .857 .947 .983 .995 .999 1.00 1.00
λ = 2.1: .122 .380 .650 .839 .938 .980 .994 .999 1.00 1.00
λ = 2.2: .111 .355 .623 .819 .928 .975 .993 .998 1.00 1.00
λ = 2.3: .100 .331 .596 .799 .916 .970 .991 .997 .999 1.00
λ = 2.4: .091 .308 .570 .779 .904 .964 .988 .997 .999 1.00
λ = 2.5: .082 .287 .544 .758 .891 .958 .986 .996 .999 1.00
λ = 2.6: .074 .267 .518 .736 .877 .951 .983 .995 .999 1.00
λ = 2.7: .067 .249 .494 .714 .863 .943 .979 .993 .998 .999
λ = 2.8: .061 .231 .469 .692 .848 .935 .976 .992 .998 .999
λ = 2.9: .055 .215 .446 .670 .832 .926 .971 .990 .997 .999
λ = 3.0: .050 .199 .423 .647 .815 .916 .966 .988 .996 .999

Standard Normal Cumulative Distribution Function, Φ(z)
Each row corresponds to the second decimal digit of z and lists Φ(z) for z = 0.0, 0.1, 0.2, ..., 3.0, in that order (for example, Φ(1.96) is the value in the ".06" row at position z = 1.9, namely .9750).

z + .00: .5000 .5398 .5793 .6179 .6554 .6915 .7257 .7580 .7881 .8159 .8413 .8643 .8849 .9032 .9192 .9332 .9452 .9554 .9641 .9713 .9772 .9821 .9861 .9893 .9918 .9938 .9953 .9965 .9974 .9981 .9987
z + .01: .5040 .5438 .5832 .6217 .6591 .6950 .7291 .7611 .7910 .8186 .8438 .8665 .8869 .9049 .9207 .9345 .9463 .9564 .9649 .9719 .9778 .9826 .9864 .9896 .9920 .9940 .9955 .9966 .9975 .9982 .9987
z + .02: .5080 .5478 .5871 .6255 .6628 .6985 .7324 .7642 .7939 .8212 .8461 .8686 .8888 .9066 .9222 .9357 .9474 .9573 .9656 .9726 .9783 .9830 .9868 .9898 .9922 .9941 .9956 .9967 .9976 .9982 .9987
z + .03: .5120 .5517 .5910 .6293 .6664 .7019 .7357 .7673 .7967 .8238 .8485 .8708 .8907 .9082 .9236 .9370 .9484 .9582 .9664 .9732 .9788 .9834 .9871 .9901 .9925 .9943 .9957 .9968 .9977 .9983 .9988
z + .04: .5160 .5557 .5948 .6331 .6700 .7054 .7389 .7704 .7995 .8264 .8508 .8729 .8925 .9099 .9251 .9382 .9495 .9591 .9671 .9738 .9793 .9838 .9875 .9904 .9927 .9945 .9959 .9969 .9977 .9984 .9988
z + .05: .5199 .5596 .5987 .6368 .6736 .7088 .7422 .7734 .8023 .8289 .8531 .8749 .8944 .9115 .9265 .9394 .9505 .9599 .9678 .9744 .9798 .9842 .9878 .9906 .9929 .9946 .9960 .9970 .9978 .9984 .9989
z + .06: .5239 .5636 .6026 .6406 .6772 .7123 .7454 .7764 .8051 .8315 .8554 .8770 .8962 .9131 .9279 .9406 .9515 .9608 .9686 .9750 .9803 .9846 .9881 .9909 .9931 .9948 .9961 .9971 .9979 .9985 .9989
z + .07: .5279 .5675 .6064 .6443 .6808 .7157 .7486 .7794 .8078 .8340 .8577 .8790 .8980 .9147 .9292 .9418 .9525 .9616 .9693 .9756 .9808 .9850 .9884 .9911 .9932 .9949 .9962 .9972 .9979 .9985 .9989
z + .08: .5319 .5714 .6103 .6480 .6844 .7190 .7517 .7823 .8106 .8365 .8599 .8810 .8997 .9162 .9306 .9429 .9535 .9625 .9699 .9761 .9812 .9854 .9887 .9913 .9934 .9951 .9963 .9973 .9980 .9986 .9990
z + .09: .5359 .5753 .6141 .6517 .6879 .7224 .7549 .7852 .8133 .8389 .8621 .8830 .9015 .9177 .9319 .9441 .9545 .9633 .9706 .9767 .9817 .9857 .9890 .9916 .9936 .9952 .9964 .9974 .9981 .9986 .9990

Table of Student's T-distribution. Contains critical values t_α such that P{t > t_α} = α.
Each row lists t_α for ν (d.f.) = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 45, 50, 60, 70, 80, 90, 100, 200, ∞, in that order.

α = .10: 3.078 1.886 1.638 1.533 1.476 1.440 1.415 1.397 1.383 1.372 1.363 1.356 1.350 1.345 1.341 1.337 1.333 1.330 1.328 1.325 1.321 1.318 1.315 1.313 1.310 1.309 1.307 1.306 1.304 1.303 1.301 1.299 1.296 1.294 1.292 1.291 1.290 1.286 1.282
α = .05: 6.314 2.920 2.353 2.132 2.015 1.943 1.895 1.860 1.833 1.812 1.796 1.782 1.771...

