CS-3060 Week 7 Lectures and Textbook Notes
Course: Operating Systems Theory
Institution: Utah Valley University

Summary

Week 7 lectures and textbook notes for Spring 2019 CS-3060 with Jingpeng Tang....


Description


Table of Contents

WEEK 7 Lecture
  Lecture 13
  Lecture 14
Chapter 6 CPU Scheduling
  6.1 Basic Concepts
    6.1.1 CPU-I/O Burst Cycle
    6.1.2 CPU Scheduler
    6.1.3 Preemptive Scheduling
    6.1.4 Dispatcher
  6.2 Scheduling Criteria
  6.3 Scheduling Algorithms
    6.3.1 First-Come, First-Served Scheduling
    6.3.2 Shortest-Job-First Scheduling
    6.3.3 Priority Scheduling
    6.3.4 Round-Robin Scheduling
    6.3.5 Multilevel Queue Scheduling
    6.3.6 Multilevel Feedback Queue Scheduling
  6.4 Thread Scheduling
    6.4.1 Contention Scope
    6.4.2 Pthread Scheduling
  6.5 Multiple-Processor Scheduling
    6.5.1 Approaches to Multiple-Processor Scheduling
    6.5.2 Processor Affinity
    6.5.3 Load Balancing
    6.5.4 Multicore Processors
  6.6 Real-Time CPU Scheduling
    6.6.1 Minimizing Latency
    6.6.2 Priority-Based Scheduling
    6.6.3 Rate-Monotonic Scheduling
    6.6.4 Earliest-Deadline-First Scheduling
    6.6.5 Proportional Share Scheduling
    6.6.6 POSIX Real-Time Scheduling
  6.7 Operating-System Examples
    6.7.1 Example: Linux Scheduling
    6.7.2 Example: Windows Scheduling
    6.7.3 Example: Solaris Scheduling
  6.8 Algorithm Evaluation
    6.8.1 Deterministic Modeling
    6.8.2 Queueing Models
    6.8.3 Simulations
    6.8.4 Implementation
  6.9 Summary
Chapter 6 Slides

WEEK 7 Lecture

Lecture 13

● Synchronization attempts to solve a problem called a race condition.



Entry section can be called "acquire lock"



Sequence: acquire lock → critical section → release lock



Assumption for test_and_set(): it is atomic; the test and the set execute as a single indivisible hardware instruction.



● Peterson's solution takes care of two processes; however, in real life, more than two processes may want to enter a critical section.
○ This is what Bounded-Waiting Mutual Exclusion with test_and_set() is for.

compare_and_swap Instruction

    int compare_and_swap(int *value, int expected, int new_value) {
        int temp = *value;
        if (*value == expected)
            *value = new_value;
        return temp;
    }



● This algorithm is pretty much the same as test_and_set().
○ If 0 = False (Unlocked) and 1 = True (Locked):
■ value can be either 0 or 1. We expect value to be 0 (because, for example, when you go to the restroom, you expect the door to be unlocked), and we expect new_value to be 1.
■ If you call it as compare_and_swap(&value, 0, 1), it behaves just like test_and_set().

Bounded-Waiting Mutual Exclusion

    do {
        // Entry section:
        waiting[i] = true;   // waiting is a global shared boolean array of size n;
                             // process i wants to enter the critical section.
        key = true;          // key is a local variable; lock is global and shared.
        while (waiting[i] && key)
            key = test_and_set(&lock);  // Keep testing until the lock becomes
                                        // unlocked. test_and_set returns true if
                                        // the lock was locked, false if it was
                                        // unlocked. (This is busy waiting.)
        waiting[i] = false;  // When key is false, i is no longer waiting;
                             // i enters the critical section.

        /* Critical section */

        // Exit section:
        j = (i + 1) % n;                 // Move to the next process. Mod (%) n wraps
                                         // back to the beginning of the array.
        while ((j != i) && !waiting[j])  // Check whether j wants to enter the
            j = (j + 1) % n;             // critical section; keep iterating until we
                                         // find such a process, or until we have
                                         // checked everyone.

        if (j == i)
            lock = false;        // Checked everyone, and no one wants to go.
        else
            waiting[j] = false;  // A process wants to go, so let that process go.

        /* Remainder section */
    } while (true);

For Assignment #4, you do not need to write your own test_and_set() or compare_and_swap() programs; the operating system provides system calls for them.





You can use mutex locks.



You could also use semaphores. Semaphores can solve more problems than mutex locks.

● Don't use busy waiting in Assignment #4; think of a way to avoid it.
○ Find a way to put a process in the WAIT state.
○ (Busy waiting does not put a process in the WAIT state; it keeps the process cycling between the READY and RUNNING states.)



● An example of avoiding busy waiting:

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            // Add this process to S->list (a waiting list);
            block();
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            // Remove a process P from S->list;
            wakeup(P);
        }
    }

■ This program puts the process in the WAIT state.

Lecture 14

● A semaphore is a shared integer. It models multiple instances of one resource.



● Instead of busy waiting, we want to put the process in an actual waiting state: the waiting process sits in main memory, not on the CPU.



The operating system provides the two operations block and wakeup for putting processes into waiting states.





The wait() and signal() functions above is similar to registration: ○

Students register simultaneously.



When there are no more seats in a class, students are added to the waiting list.



If a student decides to drop the class, seats++ and a student can get into the class.

● Two kinds of semaphores:
1. Counting semaphore: multiple instances (more than one resource).
2. Binary semaphore: only one instance; same as a mutex lock.

Example: (synch is a semaphore initialized to 0). P1: S1; // This process will run before S2. signal(synch); P2: wait(synch); S2;

■ This example illustrates sequencing control.

● We use semaphores and mutex locks to take care of the race condition. However, these tools introduce other problems, such as deadlock, starvation, and priority inversion.



Chapter 7 is where we solve the deadlock problem.



● In Linux, nice values range from -20 to 19. Nice values tell the OS how "important" a process is (a lower nice value means higher priority).



● Three famous synchronization problems are:

1. Bounded-Buffer Problem
● The Bounded-Buffer problem differs from the earlier producer-consumer buffer in that this problem has multiple buffers, each of which can hold only one item.

○ In the producer-consumer version, there is a single buffer that can hold BUFFER_SIZE items.



● To solve this problem, three semaphores are used:
1. Semaphore mutex, initialized to 1.
2. Semaphore full, initialized to 0.
3. Semaphore empty, initialized to n.

Semaphore mutex: A binary semaphore.



Semaphore full and Semaphore empty are counting semaphores. ■

When a buffer is being used, we add 1 to the full semaphore and subtract one from the empty semaphore.



● Solution:
○ Producer:

    do {
        ...
        /* Produce an item in next_produced */
        ...
        wait(empty);   // Wait until there is an empty buffer.
        wait(mutex);
        ...
        /* Add next_produced to the buffer (this is the critical section) */
        ...
        signal(mutex);
        signal(full);
    } while (true);



Consumer: do { wait(full); /  / Wait until there is a full buffer wait(mutex); ... /* Remove an item from buffer to next_consumed */ // This is the critical section. ... signal(mutex); signal(e  mpty); ... /* Consume the item in next_consumed */ ... } while (true);

● For Assignment #4, you can use this solution, but it is easier to use one buffer with multiple slots.



● This solution uses more resources (it needs three semaphores), and the more semaphores you use, the more likely you are to get into a deadlock situation.



This solution solves the race condition but does nothing to check for deadlock.

2. Readers and Writers Problem
● An example is purchasing items online: you may be looking at an item that costs $1, but a writer updates it to $2 and you don't see the updated price.



● Shared data:
1. Semaphore rw_mutex, initialized to 1 (binary semaphore).
2. Semaphore mutex, initialized to 1 (binary semaphore).
3. Integer read_count, initialized to 0 (a plain shared counter, not a semaphore).

3. Dining-Philosophers Problem

Chapter 6 CPU Scheduling

6.1 Basic Concepts

● Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues: every time one process has to wait, another process can take over use of the CPU.



● Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use.

6.1.1 CPU-I/O Burst Cycle

● The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states.
○ Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.



● The durations of CPU bursts have been measured extensively.



An I/O-bound program typically has many short CPU bursts. A CPU-bound program might have a few long CPU bursts.

6.1.2 CPU Scheduler

● Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler, or CPU scheduler. The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.



Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue.



● The records in the queues are generally the process control blocks (PCBs) of the processes.

6.1.3 Preemptive Scheduling

● CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates



● For situations 1 and 4, there is no choice in terms of scheduling. There is a choice for situations 2 and 3.



● When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative. Otherwise, it is preemptive.
○ Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
■ Windows uses preemptive scheduling.
■ The Mac OS X operating system also uses preemptive scheduling.
■ Previous versions of the Macintosh operating system relied on cooperative scheduling.

Cooperative scheduling is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.



Preemptive scheduling can result in race conditions when data are shared among several processes.



Preemption also affects the design of the operating-system kernel.

● Because interrupts can occur at any time, and because they cannot always be ignored by the kernel, the sections of code affected by interrupts must be guarded from simultaneous use.

6.1.4 Dispatcher

● The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
○ Switching context
○ Switching to user mode
○ Jumping to the proper location in the user program to restart that program

● The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

6.2 Scheduling Criteria

● Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another.



● Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:
○ CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).



Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.



Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.



Waiting time. The CPU-scheduling algorithm affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

○ Response time. In an interactive system, turnaround time may not be the best criterion. Response time is the time it takes to start responding, not the time it takes to output the response.



It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.



For interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time.

6.3 Scheduling Algorithms

● CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

6.3.1 First-Come, First-Served Scheduling

● The first-come, first-served (FCFS) scheduling algorithm is the simplest CPU-scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.



● Gantt chart: a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes.



The average waiting time under the FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.



● There is a convoy effect as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.



● FCFS is nonpreemptive.



It is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.

6.3.2 Shortest-Job-First Scheduling

● Shortest-Job-First (SJF): This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.



This scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes.



● The real difficulty with the SJF algorithm is knowing the length of the next CPU request.

● SJF scheduling is used frequently in long-term scheduling.



The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.



● This algorithm can be either preemptive or nonpreemptive.
○ A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.
■ Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.

6.3.3 Priority Scheduling

● The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.



Note that we discuss scheduling in terms of high priority and low priority. Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or lowest priority.



Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity or quantities to compute the priority of a process.



● Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling algorithm will simply put the new...


Similar Free PDFs