Operating Systems Lecture Notes (Stanford CS140)
Course: Information Technology
Institution: Walter Sisulu University



Description

OperatingSystemsLectureNotes (StanfordCS140) From:CS140:OperatingSystems(Spring2014)

Introduction
Lecture Notes for CS140, Spring 2014
John Ousterhout

Evolution of operating systems, phase 1: hardware expensive, humans cheap
- One user at a time, working directly at console
- First "operating system": I/O subroutine libraries shared by users
- Simple batch monitor: get user away from the computer. OS = program to load and run user jobs, take dumps.
- Data channels, interrupts, overlap of I/O and computation.
- Memory protection and relocation enable multitasking: several users share the system
  - OS must manage interactions, concurrency
- By the mid-1960's operating systems had become large, complicated.
  - OS field emerges as important discipline with principles

Evolution of operating systems, phase 2: hardware cheap, humans expensive
- Interactive timesharing
  - Fancy file systems
  - Issues of response time, thrashing
- Personal computers: computers are cheap, so put one in each terminal.
- Networking: allow sharing and communication between machines.
- Embedded devices: put computers in cell phones, stereo players, TVs, light switches
  - Are all the fancy features developed for timesharing still needed?

The future of OSes:
- Very small (devices)
- Very large (datacenters, cloud)

Characteristics of current OSes:
- Enormous: millions of lines of code, 100-1000 engineer-years
- Complex: asynchronous, hardware idiosyncrasies, performance is crucial.
- Poorly understood

Most of an operating system's functions fall in the category of coordination: allowing several things to work together efficiently and fairly:
- Concurrency: allow several different tasks to be underway at the same time, as if each had a private machine. To keep track of everything, processes and threads were invented.
- I/O devices: don't want the CPU to sit idle while an I/O device is working.
- Memory: how can a single memory be shared among several processes?
- Files: allow many files, for many different users, to share space on the same physical disk.
- Networks: allow groups of computers to work together.
- Security: how to allow interactions while protecting each participant from abuse by the others?

Threads,Processes,andDispatching LectureNotesforCS140 Spring2014 JohnOusterhout ReadingsforthistopicfromOperatingSystems:Principlesand Practice:Chapter4.

ThreadsandProcesses Thread:asequentialexecutionstream Executesaseriesofinstructionsinorder(onlyonethinghappens atatime). Process:oneormorethreads,alongwiththeirexecutionstate. Executionstate:everythingthatcanaffect,orbeaffectedby,a thread: Code,data,registers,callstack,openfiles,network connections,timeofday,etc. Partoftheprocessstateisprivatetoathread Partissharedamongallthreadsintheprocess Evolutionofoperatingsystemprocessmodel: Earlyoperatingsystemssupportedasingleprocesswithasingle threadatatime(singletasking).Theyranbatchjobs(oneuserat atime). Someearlypersonalcomputeroperatingsystemsusedsingletasking(e.g.MS-DOS),butthesesystemsarealmostunheardof today. Bylate1970'smostoperatingsystemsweremultitaskingsystems:

theysupportedmultipleprocesses,buteachprocesshadonlya singlethread. Inthe1990'smostsystemsconvertedtomultithreading:multiple threadswithineachprocess. Isaprocessthesameasaprogram?

Dispatching
- Almost all computers today can execute multiple threads simultaneously:
  - Each processor chip typically contains multiple cores
  - Each core contains a complete CPU capable of executing threads
  - Many modern processors support hyperthreading: each physical core behaves as if it is actually two cores, so it can run two threads simultaneously (e.g. execute one thread while the other is waiting on a cache miss).
  - For example, a server might contain 2 Intel Xeon E5-2670 processors, each with 8 cores that support 2-way hyperthreading. Overall, this computer can run 32 threads simultaneously.
- May have more threads than cores
  - At any given time, most threads do not need to execute (they are waiting for something).
- OS uses a process control block to keep track of each process:
  - Execution state for each thread (saved registers, etc.)
  - Scheduling information
  - Information about memory used by this process
  - Information about open files
  - Accounting and other miscellaneous information
- At any given time a thread is in one of 3 states:
  - Running
  - Blocked: waiting for an event (disk I/O, incoming network packet, etc.)
  - Ready: waiting for CPU time
- Dispatcher: innermost portion of the OS that runs on each core:
  - Run a thread for a while
  - Save its state
  - Load the state of another thread
  - Run it...
- Context switch: changing the thread currently running on a core by first saving the state of the old thread, then loading the state of the new thread.
- Note: the dispatcher is not itself a thread!
- A core can only do one thing at a time. If a thread is executing, the dispatcher isn't: the OS has lost control. How does the OS regain control of the core?
  - Traps (events occurring in the current thread that cause a change of control into the operating system):
    - System call.
    - Error (illegal instruction, addressing violation, etc.).
    - Page fault.
  - Interrupts (events occurring outside the current thread that cause a state switch into the operating system):
    - Character typed at keyboard.
    - Completion of disk operation.
    - Timer: to make sure the OS eventually gets control.
- How does the dispatcher decide which thread to run next?
  - Plan 0: search the process table from the front, run the first ready thread.
  - Plan 1: link together the ready threads into a queue. The dispatcher grabs the first thread from the queue. When threads become ready, insert at the back of the queue.
  - Plan 2: give each thread a priority and organize the queue according to priority. Or, perhaps, have multiple queues, one for each priority class.

ProcessCreation Howtheoperatingsystemcreatesaprocess: Loadcodeanddataintomemory. Createandinitializeprocesscontrolblock. Createfirstthreadwithcallstack. Provideinitialvaluesfor"savedstate"forthethread Makethreadknowntodispatcher;dispatcher"resumes"tostartof newprogram. SystemcallsforprocesscreationinUNIX: forkmakescopyofcurrentprocess,withonethread. execreplacesmemorywithcodeanddatafromagiven executablefile.Doesn'treturn("returns"tostartingpointofnew program). waitpidwaitsforagivenprocesstoexit. Example: intpid=fork(); if(pid==0){ /*Childprocess*/ exec("foo"); }else{ /*Parentprocess*/

waitpid(pid,&status,options); } Advantage:canmodifyprocessstatebeforecallingexec(e.g. changeenvironment,openfiles). Disadvantage:wastedwork(mostofforkedstategetsthrown away). SystemcallsforprocesscreationinWindows: CreateProcesscombinesforkandexec BOOLCreateProcess( LPCTSTRlpApplicationName, LPTSTRlpCommandLine, LPSECURITY_ATTRIBUTESlpProcessAttributes, LPSECURITY_ATTRIBUTESlpThreadAttributes, BOOLbInheritHandles, DWORDdwCreationFlags, PVOIDlpEnvironment, LPCTSTRlpCurrentDirectory, LPSTARTUPINFOlpStartupInfo, LPPROCESS_INFORMATIONlpProcessInformation ); Mustpassargumentsforanystatechangesbetweenparentand child. ProcesscreationinPintos:execcombinesUNIXforkandexec.

Concurrency
Lecture Notes for CS140, Spring 2014
John Ousterhout
Readings for this topic from Operating Systems: Principles and Practice: Chapter 5 up through Section 5.1.

IndependentandCooperatingThreads Independentthread:onethatcan'taffectorbeaffectedbytherestof theuniverse. Itsstateisn'tsharedinanywaybyanyotherthread. Deterministic:inputstatealonedeterminesresults. Reproducible. Canstopandcontinuewithnobadeffects(onlytimevaries). Therearemanydifferentwaysinwhichacollectionofindependent threadsmightbeexecutedonacomputer: Single-tasking:eachthreadrunstocompletionbeforethenextone starts. Multitaskingwithonecorethatissharedamongseveralthreads. Doestheorderofdispatchingaffectthebehavior? Multitaskingwithseveralcores(multiprocessing):runthreadsin parallelonseparatecores. Agiventhreadrunsononlyonecoreatatime. Athreadmayrunondifferentcoreatdifferenttimes(move state,assumeprocessorsareidentical). Fromthestandpointofathread,can'ttellthedifference

betweenonecoreandmanycores. Cooperatingthreads:thosethatsharestate. Behaviorisnondeterministic:dependsonrelativeexecution sequenceandcannotbepredictedinadvance. Behaviormaybeirreproducible. Example:onethreadwrites"ABC"toaconsolewindow,another writes"CBA"concurrently. Whypermitthreadstocooperate? Basicassumptionforcooperatingthreadsisthattheorderofsome operationsisirrelevant;certainoperationsareindependentofcertain otheroperations.Examples: Thread1:A=1; Thread2:B=2; Thread1:A=B+1; Thread2:B=2*B;

AtomicOperations BeforewecansayANYTHINGaboutcooperatingthreads,wemust knowthatsomeoperationisatomic:iteitherhappensinitsentirety withoutinterruption,ornotatall.Cannotbeinterruptedinthemiddle. Referencesandassignmentsareatomicinalmostallsystems. A=BwillalwaysreadacleanvalueforBandsetacleanvaluefor A(butnotnecessarilytrueforarraysorrecords). Inuniprocessorsystems,anythingbetweeninterruptsisatomic. Ifyoudon'thaveanatomicoperation,youcan'tmakeone. Fortunately,hardwaredesignersgiveusatomicops. Ifyouhaveanyatomicoperation,youcanuseittogenerate

higher-levelconstructsandmakeparallelprogramsworkcorrectly. Thisistheapproachwe'lltakeinthisclass.

The"TooMuchMilk"Problem Thebasicproblem: PersonAPersonB 3:00Lookinfridge:nomilk 3:05Leaveforstore 3:10ArriveatstoreLookinf 3:15LeavestoreLeavehom 3:20Arrivehome,putmilkawayArriveat 3:25Leavesto 3:30Arriveho Whatisthecorrectbehavior? Moredefinitions: Synchronization:usingatomicoperationstoensurecorrect operationofcooperatingthreads. Criticalsection:asectionofcode,orcollectionofoperations,in whichonlyonethreadmaybeexecutingatagiventime.E.g. shopping. Mutualexclusion:mechanismsusedtocreatecriticalsections. Typically,mutualexclusionachievedwithalockingmechanism: preventothersfromdoingsomething.Forexample,beforeshopping, leaveanoteontherefrigerator:don'tshopifthereisanote. Firstattemptatcomputerizedmilkbuying(assumeatomicreadsand writes): 1if(milk==0){ 2if(note==0){

3note=1; 4buy_milk(); 5note=0; 6} 7} Secondattempt:changemeaningofnote.Abuysifnonote,Bbuysif thereisanote. ThreadA 1if(note==0){ 2if(milk==0){ 3buy_milk(); 4} 5note=1; 6} ThreadB 1if(note==1){ 2if(milk==0){ 3buy_milk(); 4} 5note=0; 6} Thirdattempt:useseparatenotesforAandB. ThreadA 1noteA=1; 2if(noteB==0){ 3if(milk==0){ 4buy_milk(); 5} 6} 7noteA=0;

ThreadB 1noteB=1; 2if(noteA==0){ 3if(milk==0){ 4buy_milk(); 5} 6} 7noteB=0; Fourthattempt:justneedawaytodecidewhowillbuymilkwhenboth leavenotes(somebodyhastohangaroundtomakesurethatthejob getsdone): ThreadB 1noteB=1; 2while(noteA==1){ 3//donothing; 4} 5if(milk==0){ 6 buy_milk(); 7} 8noteB=0; Thissolutionworksbuthastwodisadvantages: Asymmetric(andcomplex)code. WhileBiswaitingitisconsumingresources(busy-waiting). Forasymmetricsolutionwithoutbusy-waiting,seePeterson's Algorithm.

DemandPaging LectureNotesforCS140 Spring2014 JohnOusterhout ReadingsforthistopicfromOperatingSystems:Principlesand Practice:Chapter9. Demandpaging:notallofaprocess'svirtualaddressspaceneedsto beloadedinmainmemoryatanygiventime.Eachpagecanbe either: Inmemory(physicalpageframe) Ondisk(backingstore)

PageFaults Whathappenswhenaprocessreferencesapagethatisinthe backingstore? Forpagesinthebackingstore,thepresentbitisclearedinthe pagetableentries. Ifpresentisnotset,thenareferencetothepagecausesatrapto theoperatingsystem. Thesetrapsarecalledpagefaults. Tohandleapagefault,theoperatingsystem Findsafreepageframeinmemory Readsthepageinfrombackingstoretothepageframe Updatesthepagetableentry,settingpresent Resumesexecutionofthethread HowdoestheOSfigureoutwhichpagegeneratedthefault?

x86:hardwaresavesthevirtualaddressthatcausedthefault (CR2register) OnearliermachinesOSgotonlyaddressoffaultinginstruction, mustsimulatetheinstructionandtryeveryaddresstofindtheone thatgeneratedthefault Restartingprocessexecutionafterapagefaultistricky,sincethefault mayhaveoccurredinthemiddleofaninstruction. Ifinstructionsareidempotent,justrestartthefaultinginstruction (hardwaresavesinstructionaddressduringpagefault). Non-idempotentinstructionsaremoredifficulttorestart: MOV+(SP),R2 Withouthardwaresupportitmaybeimpossibletoresumea processsafelyafterapagefault.Hardwaremustkeeptrackof sideeffects: Undoallsideeffectsduringapagefault? Saveinfoaboutsideeffects,useittorestartinstruction"inthe middle"

PageFetching Oncethebasicpagefaultmechanismisworking,theOShastwo schedulingdecisionstomake: Pagefetching:whentobringpagesintomemory. Pagereplacement:whichpagestothrowoutofmemory. Overallgoal:makephysicalmemorylooklargerthanitis. Locality:mostprogramsspendmostoftheirtimeusingasmall fractionoftheircodeanddata. Keepinmemorytheinformationthatisbeingused.

Keepunusedinformationondiskinpagingfile(alsocalledbacking store,orswapspace) Ideally:pagingproducesamemorysystemwiththeperformance ofmainmemoryandthecost/capacityofdisk! MostmodernOSesusedemandfetching: Startprocesswithnopagesloaded,don'tloadapageinto memoryuntilitisreferenced. Thepagesforaprocessdivideintothreegroups: Read-onlycodepages:readfromtheexecutablefilewhen needed. Initializeddatapages:onfirstaccess,readfromexecutable file.Onceloaded,savetothepagingfilesincecontentsmay havechanged. Uninitializeddatapages:onfirstaccess,justclearmemoryto allzeros.Whenpagingout,savetothepagingfile. Prefetching:trytopredictwhenpageswillbeneededandloadthem aheadoftimetoavoidpagefaults. Requirespredictingthefuture,sohardtodo. Oneapproach:whentakingapagefault,readmanypagesinstead ofjustone(winsifprogramaccessesmemorysequentially).

PageReplacement Onceallofmemoryisinuse,willneedtothrowoutonepageeach timethereisapagefault. Random:pickanypageatrandom(workssurprisinglywell!) FIFO:throwoutthepagethathasbeeninmemorylongest. MIN:Theoptimalalgorithmrequiresustopredictthefuture. LeastRecentlyUsed(LRU):usethepasttopredictthefuture.

ImplementingLRU:needhardwaresupporttokeeptrackofwhich pageshavebeenusedrecently. PerfectLRU? Keepahardwareregisterforeachpage,storesystemclock intothatregisteroneachmemoryreference. Tochoosepageforplacement,scanthroughallpagestofind theonewiththeoldestclock. Hardwarecostsprohibitiveintheearlydaysofpaging;also, expensivetoscanallpagesduringreplacement. Nomachineshaveactuallyimplementedthis. Currentcomputerssettleforanapproximationthatisefficient.Just findanoldpage,notnecessarilytheoldest. Clockalgorithm(alsocalledsecondchancealgorithm):keep referencebitforeachpageframe,hardwaresetsthereferencebit wheneverapageisreadorwritten.Tochoosepageforplacement: Cyclethroughpagesinorder(circularly). Ifthenextpagehasbeenreferenced,thendon'treplaceit;just clearthereferencebitandcontinuetothenextpage. Ifthepagehasnotbeenreferencedsincethelasttimewe checkedit,thenreplacethatpage. Dirtybit:onebitforeachpageframe,setbyhardwarewheneverthe pageismodified.Ifadirtypageisreplaced,itmustbewrittentodisk beforeitspageframeisreused. Theclockalgorithmtypicallygivesadditionalpreferencetodirty pages.Forexample,ifthereferencebitforapageisclear,butthe dirtybitisset,don'treplacethispagenow,butclearthedirtybitand startwritingthepagetodisk. Freepagepool:manysystemskeepasmalllistofcleanpagesthat

areavailableimmediatelyforreplacement. Duringreplacement,takethepagethathasbeeninthefreepool thelongest,thenrunthereplacementalgorithmtoaddanewpage tothefreepool. Pagesinthefreepoolhavetheirpresentbitoff,soanyreferences tothosepagescauseapagefault Ifapagefaultoccursforapageinthefreepool,removeitfrom thefreepoolandputitbackinservice;muchfasterthanreading fromdisk. Providesanextraopportunityforrecoveryifwemakeapoorpage replacementdecision. Howtoimplementpagereplacementwhentherearemultiple processesrunninginthesystem? Globalreplacement:allpagesfromallprocessesarelumpedinto asinglereplacementpool.Eachprocesscompeteswithallthe otherprocessesforpageframes. Per-processreplacement:eachprocesshasaseparatepoolof pages.Apagefaultinoneprocesscanonlyreplaceoneofthat process'sframes.Thiseliminatesinterferencefromother processes. Unfortunately,per-processreplacementcreatesanewscheduling dilemma:howmanypageframestoallocatetoeachprocess?If thisdecisionismadeincorrectly,itcanresultininefficientmemory usage. Mostsystemsuseglobalreplacement.

ThrashingandWorkingSets LectureNotesforCS140 Spring2014 JohnOusterhout ReadingsforthistopicfromOperatingSystems:Principlesand Practice:Section9.7. Normally,ifathreadtakesapagefaultandmustwaitforthepageto bereadfromdisk,theoperatingsystemrunsadifferentthreadwhile theI/Oisoccurring.Thuspagefaultsare"free"? Whathappensifmemorygetsovercommitted? Supposethepagesbeingactivelyusedbythecurrentthreads don'tallfitinphysicalmemory. Eachpagefaultcausesoneoftheactivepagestobemovedto disk,soanotherpagefaultwilloccursoon. Thesystemwillspendallitstimereadingandwritingpages,and won'tgetmuchworkdone. Thissituationiscalledthrashing;itwasaseriousprobleminearly demandpagingsystems. Howtodealwiththrashing? Ifasingleprocessistoolargeformemory,thereisnothingtheOS cando.Thatprocesswillsimplythrash. Iftheproblemarisesbecauseofthesumofseveralprocesses: Figureouthowmuchmemoryeachprocessneedstoprevent thrashing.Thisiscalleditsworkingset. Onlyallowafewprocessestoexecuteatatime,suchthattheir

workingsetsfitinmemory. Pagefaultfrequency:onetechniqueforcomputingworkingsets Atanygiventime,eachprocessisallocatedafixednumberof physicalpageframes(assumesper-processreplacement). Monitortherateatwhichpagefaultsareoccurringforeach process. Iftherategetstoohighforaprocess,assumethatitsmemoryis overcommitted;increasethesizeofitsmemorypool. Iftherategetstoolowforaprocess,assumethatitsmemorypool canbereducedinsize. Schedulingwithworkingsets Ifthesumofallworkingsetsofallprocessesexceedsthesizeof memory,thenstoprunningsomeoftheprocessesforawhile. Divideprocessesintotwogroups:activeandinactive: Whenaprocessisactiveitsentireworkingsetmustalwaysbe inmemory:neverexecuteathreadwhoseworkingsetisnot resident. Whenaprocessbecomesinactive,itsworkingsetcanmigrate todisk. Threadsfrom...

