CMPT 300 (Operating Systems I)
Lecture 16: Memory Management

Today's Plan

Upcoming:
- Assignment 3
- Quiz 4

Last time:
- Basics of Memory Management

Today's topics:
- From last time: Static vs. Dynamic Relocation
- Swapping
- Multiprogramming with a Fixed Number of Tasks
- Multiprogramming with a Variable Number of Tasks
- General Dynamic Storage Problem
- Paging

Swapping
- Allows the monitor to switch from one process to another
- The current memory contents are written to a backing store (disk)
- The memory image for the next user process is read in
- The ready queue contains processes whose memory images are on disk (and are ready to run)

Swapping
- A fast device is required (e.g., a disk)
  - swap time is dependent upon the device's performance
- If we used a scheduling scheme like Round Robin, we would want a VERY large quantum

Background Swapping
- We need to set up two buffers: "incoming" and "outgoing"
- At context switch time we do two memory-to-memory copies
- During application execution, we do two transfers between disk and memory to update the buffers (write the outgoing image to disk, read the next image into the incoming buffer)

Swapping with Partitions
- Swapping with multiple programs (buffers) in memory
  - one solution is to do some copying at context switch time, but that is just more overhead
- We need to protect access both before and beyond the user program

Upper/Lower Bound Registers (diagram)

Base/Limit Registers (diagram)

Swapping with Partitions
- Part of context switching is the reloading of these registers (a sketch of the base/limit check follows)
- Now swapping can be done outside of the user's space, with no copying
- Most significant: swapping can be eliminated in some cases
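
As a rough illustration (not from the slides), the check-and-relocate step the hardware performs on every memory reference with base/limit registers could be sketched in C as follows; the register values and trap behaviour are simplified assumptions:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative relocation registers: the OS reloads these from the PCB
     * on every context switch (the values below are made up). */
    typedef struct {
        uint32_t base;   /* physical start of the running job's partition */
        uint32_t limit;  /* size of the partition                         */
    } relocation_regs;

    /* What the hardware conceptually does on every memory reference. */
    static uint32_t translate(relocation_regs r, uint32_t logical) {
        if (logical >= r.limit) {                       /* protection check */
            fprintf(stderr, "addressing error: %u\n", (unsigned)logical);
            exit(EXIT_FAILURE);                         /* real hardware traps to the OS */
        }
        return r.base + logical;                        /* dynamic relocation */
    }

    int main(void) {
        relocation_regs r = { .base = 300000u, .limit = 120000u };
        printf("%u\n", (unsigned)translate(r, 42));     /* prints 300042 */
        return 0;
    }

Because every reference goes through the registers, changing a process's location only requires changing its base value, which is why swapping (and later compaction) needs no copying within the image itself.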

Multiprogramming with a Fixed Number of Tasks
- Main memory is divided into a set of fixed partitions
- Each partition can be scheduled separately
  - possibly a separate ready queue for each memory partition
- Bounds are set just before and after the partition for the running job
- A process cannot span partition boundaries
- A partition cannot be shared

Multiprogramming with a Fixed Number of Tasks
- We could add swapping, done separately for each partition
  - when a 6K job begins an I/O burst, we could swap it out and put another job in
- What if a 1.5K process in the 2K spot asks for another 1K of memory?
  - we can handle this in any of several ways
- Region (partition) size selection is difficult

Multiprogramming with a Fixed Number of Tasks
- MFT suffers from fragmentation:
  - internal fragmentation: memory internal to one partition that is wasted because of the way we have set up the partitions (e.g., a 1.5K process in a 2K partition wastes 0.5K)
  - external fragmentation: enough total free space exists to run a process, but the space can't be used for that process because no single partition can hold the entire process

Multiprogramming with a Fixed Number of Tasks (diagram of fixed partitions)

Multiprogramming with a Variable Number of Tasks
- MVT: "multiple, contiguous, variable partition allocation"
- The O/S keeps track of which memory is in use
  - initially, all memory is free and kept in one block
- When a job arrives:
  - the O/S searches the free memory block(s)
  - if one is found that is big enough, we allocate the required portion to the job
  - the remainder of that block is marked as a new, smaller free block

Multiprogramming with a Variable Number of Tasks
- When a job finishes:
  - its memory is returned to the system
  - if the returned memory is adjacent to another free block, the two are joined into one contiguous block (see the sketch below)
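
A minimal sketch, assuming free blocks are kept on a list sorted by start address, of how adjacent free blocks could be coalesced; the struct and field names are illustrative, not from the lecture:

    #include <stddef.h>

    /* Illustrative free-list node; blocks are kept sorted by start address. */
    typedef struct block {
        size_t start;        /* start address of this free block  */
        size_t size;         /* length of this free block         */
        struct block *next;  /* next free block, in address order */
    } block;

    /* Join any free blocks that are physically adjacent into one block. */
    void coalesce(block *head) {
        block *b = head;
        while (b && b->next) {
            if (b->start + b->size == b->next->start) {
                block *upper = b->next;    /* neighbour directly above us     */
                b->size += upper->size;    /* merge into one larger block     */
                b->next  = upper->next;    /* unlink; the node's storage is
                                              assumed to be managed elsewhere */
            } else {
                b = b->next;
            }
        }
    }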

Multiprogramming with a Variable Number of Tasks (example job mix)

    Job         1     2     3     4     5
    Memory (K)  60    100   30    70    50
    Time        10    5     20    8     15

Multiprogramming with a Variable Number of Tasks
- Generally, free blocks can be kept in a list
- When we want memory for a job, the list must be searched for a chunk that is big enough
- If several chunks qualify, which do we choose?
- This is an instance of the general dynamic storage allocation problem

General Dynamic Storage Allocation Problem
- Problem:
  - allocating contiguous blocks of storage from an initially free, single, large block
- Goals:
  - fast allocation and freeing
  - low fragmentation

General Dynamic Storage Allocation Problem
- First Fit: allocate the first block on the list that is big enough to satisfy the request
- Best Fit: allocate the smallest free block that is still bigger than the size needed
- Worst Fit: allocate the largest block available
  (first fit and best fit are sketched below)
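
A small sketch of the first two policies over a singly linked free list; the types and names are assumptions for illustration only, and splitting the chosen block into an allocated part and a leftover free part is omitted:

    #include <stddef.h>

    /* Illustrative free-list node (size of the free block plus a link). */
    typedef struct block {
        size_t size;
        struct block *next;
    } block;

    /* First fit: the first block on the list that is big enough. */
    block *first_fit(block *head, size_t request) {
        for (block *b = head; b != NULL; b = b->next)
            if (b->size >= request)
                return b;
        return NULL;                        /* nothing large enough */
    }

    /* Best fit: the smallest block that is still big enough. */
    block *best_fit(block *head, size_t request) {
        block *best = NULL;
        for (block *b = head; b != NULL; b = b->next)
            if (b->size >= request && (best == NULL || b->size < best->size))
                best = b;
        return best;
    }

First fit stops at the first match, so it is usually faster; best fit scans the whole list to minimize the leftover fragment.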

General Dynamic Storage Allocation Problem
- What if a request is made for 19,995 bytes of storage, and there is a 20,000-byte block free?
  - the overhead of keeping track of the wasted 5 bytes is more than it is worth (more than 5 bytes are needed just to track a memory block)
  - so, in practice, when the leftover would be tiny, the whole block is given to the request
- There are many algorithms for the general storage allocation problem
  - many trade off allocation/freeing time against fragmentation
  - some of these algorithms are, or are close to, O(1) in both allocation and freeing (at the expense of fragmentation)

Compaction
- With MVT, external fragmentation can get very severe
- Compaction:
  - shuffle all in-use blocks of memory together, so the free fragments form one contiguous, large block

Compaction
- External fragmentation is eliminated by combining all fragments into one large block
  - the combined block may be big enough to run a program while each of the smaller fragments was not
- Can compaction always be done? What is necessary to make it possible?
  - logical addresses must be bound to physical addresses at run time, so we can move the program in memory once execution has begun (a sketch follows)
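
An illustrative consequence of run-time binding (my sketch, not the slide's): with a base register, moving a job during compaction reduces to copying its image and updating the base; nothing inside the image has to change.

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint32_t base;    /* current physical start of the job's image */
        uint32_t limit;   /* size of the job's image                   */
    } job;

    /* Move one job's image to new_base and record the new base address;
     * the new value takes effect when the registers are next reloaded. */
    void compact_move(uint8_t *physical_memory, job *j, uint32_t new_base) {
        memmove(physical_memory + new_base,   /* destination hole */
                physical_memory + j->base,    /* current location */
                j->limit);
        j->base = new_base;
    }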

Compaction
- Alternatives (for the example memory layout):
  - move job 4 to the slot above job 3 (copy only 400K)
  - move job 3 to below job 4, leaving a large 900K hole in the middle (copy only 200K)
  - move both jobs 3 and 4 up
- Swapping and compaction can be used together
  - when the jobs are swapped back in, they can be placed together to leave the largest hole

Paging
- Fragmentation is a big problem with MVT
- We could greatly reduce the cost of dealing with fragmentation if a process's memory didn't need to be contiguous
- Paging allows a logically contiguous block to be scattered throughout physical memory
  - any available chunk of memory is usable, even if it isn't large enough to hold the entire object

Paging
- Requires hardware support
- The load module is broken into a series of fixed-size pages
  - page size is determined by the hardware (2K and 4K are common)
  - all pages but the last are the same size
- Main memory is divided into frames
  - frame size = page size
  - one page fits into one frame exactly
  - the last page for each object may be smaller, and therefore occupies only part of one frame (the remainder is wasted; see the sketch below)
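
A small sketch (assuming a 4K page size, which is not fixed by the slides) of how many frames a load module needs and how much of the last frame is wasted:

    #include <stdio.h>

    #define PAGE_SIZE 4096u     /* assumed page size; 2K and 4K are common */

    int main(void) {
        unsigned module_bytes = 10000u;                                /* example load module */
        unsigned pages  = (module_bytes + PAGE_SIZE - 1) / PAGE_SIZE;  /* ceiling division    */
        unsigned wasted = pages * PAGE_SIZE - module_bytes;            /* last-frame waste    */
        printf("%u frames needed, %u bytes wasted in the last frame\n",
               pages, wasted);                                         /* 3 frames, 2288 bytes */
        return 0;
    }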

Paging
- Each logical address consists of two parts:
  - the page number
  - the displacement (offset) within the page

Paging
- How do we derive these page numbers and displacements?
- We use a page size of 2^N
  - displacement: the low-order N bits of the logical address
  - page number: the remaining high-order bits of the logical address
  - e.g., a logical address split into page number and displacement (worked example on the slide)
- Why do we do it this way? With a power-of-two page size, the split is just a shift and a mask, so no division is needed (a sketch follows)
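
A minimal sketch, assuming a 4K (2^12) page size, of how the split falls out of simple bit operations:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12u                  /* assumed page size = 2^12 = 4096 */
    #define PAGE_SIZE (1u << PAGE_BITS)

    int main(void) {
        uint32_t logical = 0x3A7Fu;                    /* example logical address */
        uint32_t page    = logical >> PAGE_BITS;       /* high-order bits         */
        uint32_t offset  = logical & (PAGE_SIZE - 1);  /* low-order N bits        */
        printf("page %u, offset %u\n",
               (unsigned)page, (unsigned)offset);      /* page 3, offset 2687     */
        return 0;
    }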

Paging
- Page table: a data structure kept for each process
  - one entry per page, holding the base address of that page's location in physical memory
  - a map showing where each part of the process is loaded in memory

Paging
- Physical address calculation:
  - use the page number as an index into the page table
    - this yields the base address of the frame
  - add the displacement to derive the physical address (sketch below)
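
Continuing the sketch above, the full translation could look like this, with frame_base[] standing in for the current process's page table (the names and the 4K page size are illustrative assumptions):

    #include <stdint.h>

    #define PAGE_BITS 12u
    #define PAGE_SIZE (1u << PAGE_BITS)

    /* frame_base[] maps a page number to the physical base address of the
     * frame holding that page. */
    uint32_t physical_address(const uint32_t *frame_base, uint32_t logical) {
        uint32_t page   = logical >> PAGE_BITS;        /* index into the page table */
        uint32_t offset = logical & (PAGE_SIZE - 1);   /* displacement within page  */
        return frame_base[page] + offset;              /* frame base + displacement */
    }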

Paging
- When a job is to be loaded:
  - we see how many frames are required for the job
  - if enough frames are available, we load the job into the free frames and update the page table for this job
- What about fragmentation with this paging scheme? Can we have:
  - external fragmentation? (no: any free frame can be used)
  - internal fragmentation? (yes: part of the last frame is wasted)
- The page table for each job is kept in its PCB
- There is also a hardware register that points to the page table for the currently executing process

Page Table Implementation
- Where do we store the page table?
  - in registers (feasible only for very small tables)
  - in main memory, pointed to by a page-table base register
- Average memory access time: if the page table is in main memory, every reference needs two memory accesses, one for the page-table entry and one for the data (a standard estimate follows)
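
One standard way to estimate this (the formula and the numbers below are an illustration, not taken from the slide): if a small associative cache (TLB) of page-table entries has hit ratio \alpha, lookup time t_{TLB}, and the memory access time is t_{mem}, then

    EAT = \alpha (t_{TLB} + t_{mem}) + (1 - \alpha)(t_{TLB} + 2 t_{mem})

For example, with \alpha = 0.9, t_{TLB} = 20 ns and t_{mem} = 100 ns:

    EAT = 0.9 * 120 + 0.1 * 220 = 130 ns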

Page Table Implementation
- What if we increase the cache size?
- Paging allows code (pages) to be shared
  - e.g., an editor can be loaded by the first user to run it, and subsequent users' page tables will point to the previously loaded code
- Paging separates logical (contiguous) memory and physical memory...

