Chapter 5: Utility and Game Theory

Learning Objectives

1. Know what is meant by utility.
2. Understand why utility is a better criterion than monetary value in some decision-making situations.
3. Know how to develop a utility function for money.
4. Learn about the role a lottery plays in helping a decision maker assign utility values.
5. Understand why risk-avoiding and risk-taking decision makers would assign different utility values in the same decision-making situation.
6. Be able to discuss the relative merits of expected monetary value and expected utility as decision-making criteria.
7. Know what is meant by a two-person, zero-sum game.
8. Be able to identify a pure strategy for a two-person, zero-sum game.
9. Be able to identify a mixed strategy and compute optimal probabilities for the mixed strategies.
10. Know how to use dominance to reduce the size of a game.
11. Understand the following terms: utility, lottery, utility function, risk avoider, risk taker, expected utility, game theory, two-person zero-sum game, saddle point, pure strategy, mixed strategy, dominated strategy.


Solutions:

1. a. The largest expected value is provided by d2, so we choose d2 (Investment B).

   b. Decision Maker A:
      U(75) = 0.80(10) + (1 - 0.80)(0) = 8
      U(50) = 0.60(10) + (1 - 0.60)(0) = 6
      U(25) = 0.30(10) + (1 - 0.30)(0) = 3

      Decision Maker B:
      U(75) = 0.60(10) + (1 - 0.60)(0) = 6
      U(50) = 0.30(10) + (1 - 0.30)(0) = 3
      U(25) = 0.15(10) + (1 - 0.15)(0) = 1.5

      Decision Maker A:
      EU(d1) = 0.40(10) + 0.30(3) + 0.30(0) = 4.9
      EU(d2) = 0.40(8) + 0.30(6) + 0.30(3) = 5.9
      EU(d3) = 0.40(6) + 0.30(6) + 0.30(6) = 6.0

      Decision Maker B:
      EU(d1) = 0.40(10) + 0.30(1.5) + 0.30(0) = 4.45
      EU(d2) = 0.40(6) + 0.30(3) + 0.30(1.5) = 3.75
      EU(d3) = 0.40(3) + 0.30(3) + 0.30(3) = 3.0

      For Decision Maker A, d3 is the best decision. For Decision Maker B, d1 is the best decision.

   c. The difference is due to the different attitudes toward risk. Decision Maker A tends to avoid risk, while Decision Maker B tends to take a risk for the opportunity of a large payoff.
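A short script can verify the expected-utility comparison. This is a minimal sketch assuming the utility assignments and the state probabilities (0.40, 0.30, 0.30) used above; the variable names are illustrative only.

# Expected-utility check for Problem 1.
probs = [0.40, 0.30, 0.30]          # state probabilities s1, s2, s3

# Utilities of each decision's payoffs for the two decision makers (from the solution above).
utilities = {
    "A": {"d1": [10, 3, 0],   "d2": [8, 6, 3],   "d3": [6, 6, 6]},
    "B": {"d1": [10, 1.5, 0], "d2": [6, 3, 1.5], "d3": [3, 3, 3]},
}

for maker, decisions in utilities.items():
    eu = {d: sum(p * u for p, u in zip(probs, us)) for d, us in decisions.items()}
    best = max(eu, key=eu.get)
    print(f"Decision Maker {maker}: {eu} -> best decision: {best}")

Running it reproduces the values above: d3 is best for Decision Maker A and d1 is best for Decision Maker B.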

2. a. EV(d1) = 10,000
      EV(d2) = 0.96(0) + 0.03(100,000) + 0.01(200,000) = 5,000
      Using the EV approach: No Insurance (d2).

   b. Lottery: p = probability of a $0 cost
               1 - p = probability of a $200,000 cost

   c. Utility table:

                            s1       s2       s3
                            None     Minor    Major
      Insurance     d1       9.9      9.9      9.9
      No Insurance  d2      10.0      6.0      0.0

      EU(d1) = 9.9
      EU(d2) = 0.96(10.0) + 0.03(6.0) + 0.01(0.0) = 9.78


      Using the EU approach: Insurance (d1).

   d. Use the expected utility approach.

3. a. P(Win) = 1/250,000
      P(Lose) = 249,999/250,000

      EV(d1) = 1/250,000(300,000) + 249,999/250,000(-2) = -0.80
      EV(d2) = 0

      -> d2: Do not purchase the lottery ticket.

   b. Utility table:

                               s1         s2
                               Win        Lose
      Purchase         d1      10         0
      Do Not Purchase  d2      0.00001    0.00001

      EU(d1) = 1/250,000(10) + 249,999/250,000(0) = 0.00004
      EU(d2) = 0.00001

      -> d1: Purchase the lottery ticket.

4. a. EV(A) = 0.80(60) + 0.20(70) = 62.0
      EV(B) = 0.70(45) + 0.30(90) = 58.5
      -> Route B

   b. Lottery: p = probability of a 45-minute travel time
               1 - p = probability of a 90-minute travel time

   c. Utility table:

                       Route Open    Route Delays
      Route A   d1        8.0            6.0
      Route B   d2       10.0            0.0

      EU(A) = 0.80(8.0) + 0.20(6.0) = 7.6 -> Route A
      EU(B) = 0.70(10.0) + 0.30(0.0) = 7.0

      This is a risk avoider strategy.


5. a. [Graph of the utility functions for decision makers A, B, and C: probability (utility) on the vertical axis from 0 to 1.0, payoff on the horizontal axis from -100 to 100.]
   b. A: risk avoider; B: risk taker; C: risk neutral.

   c. Risk avoider A, at a $20 payoff, p = 0.70.
      Thus, EV(Lottery) = 0.70(100) + 0.30(-100) = $40.
      Therefore, A will pay 40 - 20 = $20.

      Risk taker B, at a $20 payoff, p = 0.45.
      Thus, EV(Lottery) = 0.45(100) + 0.55(-100) = -$10.
      Therefore, B will pay 20 - (-10) = $30.
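The dollar amounts in part (c) follow mechanically from the indifference probabilities read off the utility curves. A minimal sketch of that arithmetic, assuming the +$100/-$100 lottery used above; the function name is illustrative.

def lottery_ev(p_indiff, best=100, worst=-100):
    """Expected value of the lottery at the decision maker's indifference probability."""
    return p_indiff * best + (1 - p_indiff) * worst

certain_payoff = 20
for name, p in [("A (risk avoider)", 0.70), ("B (risk taker)", 0.45)]:
    ev = lottery_ev(p)
    # A risk avoider's lottery EV exceeds the certain payoff; a risk taker's falls below it.
    print(f"{name}: EV(lottery) = {ev}, difference from ${certain_payoff} = {abs(ev - certain_payoff)}")

This reproduces the $20 premium for A and the $30 amount for B.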

6. Decision Maker A:
   EU(d1) = 0.25(7.0) + 0.50(9.0) + 0.25(5.0) = 7.5 -> d1
   EU(d2) = 0.25(9.5) + 0.50(10.0) + 0.25(0.0) = 7.375

   Decision Maker B:
   EU(d1) = 0.25(4.5) + 0.50(6.0) + 0.25(2.5) = 4.75
   EU(d2) = 0.25(7.0) + 0.50(10.0) + 0.25(0.0) = 6.75 -> d2


EU( d1 ) 0.25(6.0) 0.50(7.5) 0.25(4.0) 6.175  d 2 EU( d 2) 0.25(9.0) 0.50(10.0) 0.25(0.0) 7.25  7 .a . EV( d1)=0. 6 0( 10 00 )+0. 4 0( 10 00 )=$ 20 0 EV( d2)=$0

 d1  Be t Lottery: p of winning $1,000    vs.  $0 (1  p) of losing $1,000   b . Mos ts t ud e nt s , i fr e a l i s t i c , s h ou l dr e q ui r eah i ghv a l u ef orp .Whi l es t u de nt swi l ldi ffe r , l e tu su s e p=0. 90a sa ne xa mpl e . c . EU( d1)=0. 6 0( 1 0. 0)+0 . 40 ( 0. 0)=6 . 0 EU( d2)=0. 6 0( 9 . 0 )+0 . 40 ( 9. 0 )=9 . 0

 d2 DoNo tBe t( Ri s kAv oi d e r ) d . No ,d i ffe r e n td e c i s i onma k e r sha v ed i ffe r e n ta t t i t ud e st o wa r dr i s k, t he r e f o r edi ffe r e ntu t i l i t i e s . 8 .a .

8. a. Payoff table:

                           s1       s2
                           Win      Lose
      Bet         d1       350      -10
      Do Not Bet  d2         0        0

   b. EV(d1) = 1/38(350) + 37/38(-10) = -$0.53
      EV(d2) = 0
      -> d2: Do Not Bet

   c. Risk takers, because risk-neutral and risk-avoiding decision makers would not bet.

   d. EU(d1) >= EU(d2) is required for the decision maker to prefer the Bet decision.
      EU(d1) = 1/38(10.0) + 37/38(0.0) = 0.26
      So we need EU(d2) <= 0.26; that is, the utility of the $0 payoff must be between 0 and 0.26.

9. a. EV = 0.10(150,000) + 0.25(100,000) + 0.20(50,000) + 0.15(0) + 0.20(-50,000) + 0.10(-100,000) = $30,000
      Market the new product.

   b. Lottery: p = probability of $150,000
               1 - p = probability of -$100,000

   c. Risk avoider.

   d. EU(market) = 0.10(10.0) + 0.25(9.5) + 0.20(7.0) + 0.15(5.0) + 0.20(2.5) + 0.10(0.0) = 6.025
      EU(do not market) = EU($0) = 5.0
      Market the new product.

   e. Yes. Both EV and EU recommend marketing the product.

10. a. EV(Comedy) = .30(30%) + .60(25%) + .10(20%) = 26.0%
       EV(Reality Show) = .30(40%) + .40(20%) + .30(15%) = 24.5%
       Using the expected value approach, the manager should choose the Comedy.

    b. p = probability of a 40% share of the viewing audience
       1 - p = probability of a 15% share of the viewing audience

    c. Arbitrarily using a utility of 10 for the best payoff and a utility of 0 for the worst payoff, the utility table is:

       Percentage of Viewing Audience    Indifference Value of p    Utility Value
       40%                               Does not apply             10
       30%                               0.40                        4
       25%                               0.30                        3
       20%                               0.10                        1
       15%                               Does not apply              0

       The expected payoffs in terms of utilities are:
       EU(Comedy) = .30(4) + .60(3) + .10(1) = 3.1
       EU(Reality Show) = .30(10) + .40(1) + .30(0) = 3.4
       Using the expected utility approach, the manager should choose the Reality Show. Although the Comedy has the higher expected payoff in terms of percentage of viewing audience, the Reality Show has the higher expected utility. This suggests the manager is a risk taker.

11. Payoff table:

                         Player B
                  b1     b2     b3     Minimum
    Player A a1    8      5      7        5
             a2    2      4     10        2
    Maximum        8      5     10

    The maximum of the row minimums is 5 and the minimum of the column maximums is 5. The game has a pure strategy: Player A should take strategy a1 and Player B should take strategy b2. The value of the game is 5.
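The maximin/minimax comparison used in Problems 11 and 13 is easy to automate. A minimal sketch using the Problem 11 payoff matrix; the variable names are illustrative.

# Check a two-person, zero-sum game for a saddle point (pure strategy).
payoffs = [
    [8, 5, 7],    # Player A strategy a1
    [2, 4, 10],   # Player A strategy a2
]

row_mins = [min(row) for row in payoffs]
col_maxs = [max(col) for col in zip(*payoffs)]

maximin = max(row_mins)   # best guaranteed payoff for Player A
minimax = min(col_maxs)   # best cap Player B can enforce

if maximin == minimax:
    a_star = row_mins.index(maximin) + 1
    b_star = col_maxs.index(minimax) + 1
    print(f"Pure strategy: a{a_star}, b{b_star}, value {maximin}")
else:
    print("No saddle point; a mixed strategy is optimal.")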

12. a. The payoff table is:

                            Blue Army
                      Attack   Defend   Minimum
    Red Army Attack      30       50       30
             Defend      40        0        0
    Maximum              40       50

    The maximum of the row minimums is 30 and the minimum of the column maximums is 40. Because these values are not equal, a mixed strategy is optimal. Therefore, we must determine the best probability, p, with which the Red Army should choose the Attack strategy. Assume the Red Army chooses Attack with probability p and Defend with probability 1 - p. If the Blue Army chooses Attack, the expected payoff is 30p + 40(1 - p). If the Blue Army chooses Defend, the expected payoff is 50p + 0(1 - p). Setting these expressions equal to each other and solving for p gives p = 2/3. The Red Army should choose Attack with probability 2/3 and Defend with probability 1/3.

    b. Assume the Blue Army chooses Attack with probability q and Defend with probability 1 - q. If the Red Army chooses Attack, the expected payoff for the Blue Army is 30q + 50(1 - q). If the Red Army chooses Defend, the expected payoff for the Blue Army is 40q + 0(1 - q). Setting these equations equal to each other and solving for q gives q = 0.833. Therefore, the Blue Army should choose Attack with probability 0.833 and Defend with probability 1 - 0.833 = 0.167.
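The equalization argument in Problem 12 (used again in Problems 14-16) has a closed form for any 2x2 zero-sum game without a saddle point. A minimal sketch, assuming the entries are payoffs to the row player; the function name is illustrative.

def mixed_strategy_2x2(a):
    """Optimal mixed strategies for a 2x2 zero-sum game with no saddle point.
    a[i][j] is the payoff to the row player for row i, column j."""
    (a11, a12), (a21, a22) = a
    denom = a11 - a12 - a21 + a22      # nonzero when no saddle point exists
    p = (a22 - a21) / denom            # probability the row player chooses row 1
    q = (a22 - a12) / denom            # probability the column player chooses column 1
    value = a11 * p + a21 * (1 - p)    # expected value of the game
    return p, q, value

print(mixed_strategy_2x2([[30, 50], [40, 0]]))   # Problem 12: p = 2/3, q = 5/6, value about 33.3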

13. Payoff table (entries in thousands of votes for the Republican candidate):

                                Democrat Candidate
                          b1      b2      b3      b4     Minimum
    Republican    a1       0     -15      -8      20       -15
    Candidate     a2      30      -5       5     -10       -10
                  a3      10      25       0      20         0
                  a4      20      20      10      15        10
    Maximum               30      25      10      20

    The maximum of the row minimums is 10 and the minimum of the column maximums is 10. The game has a pure strategy: the Republican candidate should choose strategy a4 (visit South Bend) and the Democrat candidate should choose strategy b3 (visit Fort Wayne). The value of the game is 10,000 voters for the Republican candidate.

14. a. Strategy a3 is dominated by a2. Then strategy b1 is dominated by b2. The 2x2 game becomes:

                          Player B
                         b2     b3
       Player A   a1     -1      2
                  a2      4     -3


    b. For Player A, let p = probability of a1 and 1 - p = probability of a2.
       If b2, EV = -1p + 4(1 - p)
       If b3, EV = 2p - 3(1 - p)

       -1p + 4(1 - p) = 2p - 3(1 - p)
       -1p + 4 - 4p   = 2p - 3 + 3p
                 10p  = 7
                   p  = 0.70
       (1 - p) = 1 - 0.70 = 0.30

       For Player A, P(a1) = 0.70, P(a2) = 0.30, and P(a3) = 0 because a3 was dominated. So Player A should randomly choose a strategy, with a1 having probability 0.7 and a2 having probability 0.3.

       For Player B, let q = probability of b2 and 1 - q = probability of b3.
       If a1, EV = -1q + 2(1 - q)
       If a2, EV = 4q - 3(1 - q)

       -1q + 2(1 - q) = 4q - 3(1 - q)
       -1q + 2 - 2q   = 4q - 3 + 3q
                 10q  = 5
                   q  = 0.50
       (1 - q) = 1 - 0.50 = 0.50

       For Player B, P(b1) = 0 because b1 was dominated, P(b2) = 0.50, and P(b3) = 0.50.

    c. Value of the game, using Player A's strategy:
       -1p + 4(1 - p) = -1(0.70) + 4(1 - 0.70) = 0.50

15. a. Payoff table (payoffs to Player A):

                         Player B
                        $1     $5
       Player A   $1    -1      5
                  $5     1     -5

    b. Maximin/minimax table (payoffs to Player A):

                         Player B
                        $1     $5    Minimum
       Player A   $1    -1      5      -1
                  $5     1     -5      -5
       Maximum           1      5

       The maximum of the row minimums is -1 and the minimum of the column maximums is 1. The game does not have a pure strategy. This makes sense because if Player A adopted a pure strategy such as always selecting $1, Player B would learn and always select $1 in order to win Player A's $1. The players must adopt a mixed strategy in order to play the game.

    c. For Player A, let p = probability of $1 and (1 - p) = probability of $5.
       If b1 = $1, EV = -1p + 1(1 - p)
       If b2 = $5, EV = 5p - 5(1 - p)

       -1p + 1(1 - p) = 5p - 5(1 - p)
       -1p + 1 - 1p   = 5p - 5 + 5p
                 12p  = 6
                   p  = 0.50
       (1 - p) = 1 - 0.50 = 0.50

       For Player B, let q = probability of $1 and (1 - q) = probability of $5.
       If a1 = $1, EV = -1q + 5(1 - q)
       If a2 = $5, EV = 1q - 5(1 - q)

       -1q + 5(1 - q) = 1q - 5(1 - q)
       -1q + 5 - 5q   = 1q - 5 + 5q
                 12q  = 10
                   q  = 5/6, or q = 0.8333 and (1 - q) = 0.1667

    d. Value of the game, using Player A's strategy:
       EV = -1p + 1(1 - p) = -1(0.50) + 1(0.50) = 0
       This is a fair game. Neither player is favored.

e.

If Player A realizes Player B is using a 50/50 strategy, we can use expected values with those probabilities to show:
EV(a1 = $1) = -1(0.50) + 5(0.50) = 2.00
EV(a2 = $5) = 1(0.50) - 5(0.50) = -2.00
Player A should see that the expected value of a1 is now larger than the expected value of a2. If Player A believes Player B will continue with a 50/50 strategy, then Player A should always play strategy a1: reveal $1. But if Player A does this, Player B will catch on and begin revealing a $1 bill all the time. The only way for a player to protect against the opponent taking advantage is to play the optimal strategy all the time.
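Part (e)'s point, that deviating from the optimal mix invites exploitation, can be checked numerically. A minimal sketch using the payoff table above; the function name is illustrative.

# Expected payoff to Player A of each pure strategy against a given mix by Player B.
payoffs = [[-1, 5],   # Player A reveals $1
           [1, -5]]   # Player A reveals $5

def expected_payoffs(q):
    """q = probability that Player B reveals $1."""
    return [row[0] * q + row[1] * (1 - q) for row in payoffs]

print(expected_payoffs(0.50))    # B plays 50/50: [2.0, -2.0], so A would switch to a1
print(expected_payoffs(5 / 6))   # B plays the optimal mix: both payoffs are (numerically) zero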


16. Payoff table:

                            Company B
                      b1     b2     b3     b4    Minimum
    Company A   a1     3      0      2      4       0
                a2     2     -2      1      0      -2
                a3     4      2      5      6       2
                a4    -2      6     -1      0      -2
    Maximum            4      6      5      6

    The maximum of the row minimums is 2 and the minimum of the column maximums is 4. The game does not have a pure strategy.

    The following dominance observations can be used to reduce the game to a 2x2 game:
    a3 dominates a1 and a2; eliminate strategies a1 and a2.
    b1 dominates b3 and b4; eliminate strategies b3 and b4.

    The reduced game theory problem is as follows:

                       Company B
                      b1     b2
    Company A   a3     4      2
                a4    -2      6

    For Company A, let p = probability of a3 and (1 - p) = probability of a4.
    If b1, EV = 4p - 2(1 - p)
    If b2, EV = 2p + 6(1 - p)

    4p - 2(1 - p) = 2p + 6(1 - p)
    4p - 2 + 2p   = 2p + 6 - 6p
            10p   = 8
              p   = 0.80
    (1 - p) = 1 - 0.80 = 0.20

    Company A: P(a3) = 0.80, P(a4) = 0.20

    For Company B, let q = probability of b1 and (1 - q) = probability of b2.
    If a3, EV = 4q + 2(1 - q)
    If a4, EV = -2q + 6(1 - q)

    4q + 2(1 - q) = -2q + 6(1 - q)
    4q + 2 - 2q   = -2q + 6 - 6q
            10q   = 4
              q   = 0.40
    (1 - q) = 1 - 0.40 = 0.60

    Company B: P(b1) = 0.40, P(b2) = 0.60


    Value of the game, using Company A's strategy:
    4p - 2(1 - p) = 4(0.80) - 2(0.20) = 2.8
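The dominance reductions in Problems 14 and 16 can also be scripted. Below is a minimal sketch under the assumption that entries are payoffs to the row (maximizing) player; the function names are illustrative. Applied to the Problem 16 table, it leaves exactly the a3/a4 rows and b1/b2 columns.

def better_everywhere(u, v):
    """True if vector u is >= v in every component and > in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def reduce_game(payoffs):
    """Drop dominated rows and columns of a zero-sum game.
    Rows belong to the maximizing player, columns to the minimizing player."""
    rows = list(range(len(payoffs)))
    cols = list(range(len(payoffs[0])))
    changed = True
    while changed:
        changed = False
        for i in rows[:]:
            # Row i is dropped when some other row is at least as good in every remaining column.
            if any(k != i and better_everywhere([payoffs[k][j] for j in cols],
                                                [payoffs[i][j] for j in cols]) for k in rows):
                rows.remove(i)
                changed = True
        for j in cols[:]:
            # Column j is dropped when some other column is at least as small in every remaining row.
            if any(k != j and better_everywhere([payoffs[i][j] for i in rows],
                                                [payoffs[i][k] for i in rows]) for k in cols):
                cols.remove(j)
                changed = True
    return rows, cols

game_16 = [[3, 0, 2, 4], [2, -2, 1, 0], [4, 2, 5, 6], [-2, 6, -1, 0]]
print(reduce_game(game_16))   # ([2, 3], [0, 1]): strategies a3, a4 and b1, b2 remain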

17. a. The Center strategy for the Shooter is dominated by Left, so we remove the Center row. We then see that the Center strategy is dominated by Right for the Keeper, so we remove the Center column, leaving only Left and Right in the 2x2 payoff table shown below (each cell lists the Shooter's scoring probability and the Keeper's stopping probability):

                              Keeper
                        Left           Right
    Shooter   Left      .35, .65       .95, .05
              Right     .85, .15       .30, .70

    b.

Here a mixed strategy is optimal. Assume the Shooter chooses Left with probability p and Right with probability 1 - p. If the Keeper chooses Left, the Shooter will score with probability 0.35p + 0.95(1 - p). If the Keeper chooses Right, the Shooter will score with probability 0.85p + 0.30(1 - p). Setting these expressions equal yields 0.35p + 0.95(1 - p) = 0.85p + 0.30(1 - p); solving for p gives p = 0.565. Therefore, the Shooter should choose Left with probability 0.565 and Right with probability 1 - 0.565 = 0.435. Assume that the Keeper chooses Left with probability q and Right with probability 1 - q. This yields the equation 0.65q + 0.05(1 - q) = 0.15q + 0.70(1 - q). Solving for q yields q = 0.565. Therefore, the Keeper should choose Left with probability 0.565 and Right with probability 0.435.

c.

Because the Shooter chooses Left with probability 0.565 and Right with probability 0.435, the Shooter's expected scoring probability is 0.35(0.565) + 0.95(0.435) = 0.611. (Conversely, the Keeper's probability of stopping the shot is 1 - 0.611 = 0.389.)
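Each equalization in part (b) is a single linear equation in one unknown. A minimal sketch that reproduces the arithmetic above; `equalize` is an illustrative helper name and the coefficients are taken from the solution.

def equalize(a, b, c, d):
    """Solve a*x + b*(1 - x) = c*x + d*(1 - x) for x."""
    return (d - b) / (a - b - c + d)

# Shooter: equalize the scoring probability across the Keeper's two choices.
p = equalize(0.35, 0.95, 0.85, 0.30)
# Keeper: equalize the stopping probability across the Shooter's two choices.
q = equalize(0.65, 0.05, 0.15, 0.70)

print(round(p, 3), round(q, 3))              # 0.565 0.565
print(round(0.35 * p + 0.95 * (1 - p), 3))   # expected scoring probability, about 0.611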

