COEN 283 Lecture Notes
Santa Clara University

1. Suppose we have process A, which fetches some results from disk (fetching a result from disk may take a long time), and processes B, C, and D, which do other work. If the OS schedules round-robin, process A is put into the Blocked state while the disk controller retrieves the result; meanwhile the CPU executes B, C, and D in round-robin fashion. Once the result is available from disk, the controller sends an interrupt to the CPU indicating that process A is Ready, so A can be handed back to the scheduler to be scheduled.

2.

User-level threads: run in user space and are usually created by the program. The kernel knows nothing about the threads we create; it invokes and sees only the process, not the threads inside it. Each process can have its own customized scheduling algorithm for its threads, and there is no system-call overhead when context switching between threads in user space.

Kernel-level threads: the kernel's thread table monitors and maintains the threads, so the operating system itself evidently supports threads, and the operating system handles the scheduling of threads. Context switching between kernel-level threads takes more time because a system call is needed. User-level threads: if one thread blocks for I/O, the entire process (containing all its threads) blocks, so non-blocking I/O for an individual thread does not exist; blocking and non-blocking refer to the entire process.

User-level threads (pthread_create, pthread_join): within your program, the program controls the scheduling of user-level threads; the user has to specify how the threads are scheduled. Kernel-level threads: all threads are visible to the kernel and the OS, which manages the threads through a kernel thread table; the OS handles everything.
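A minimal sketch of the pthread calls named above (the worker function and its argument are invented for illustration; whether these threads end up user-level or kernel-level depends on the threading library and OS). Compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    /* Invented worker: each thread just prints its id. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tids[3];
        int ids[3] = {0, 1, 2};

        /* Create three threads running worker(). */
        for (int i = 0; i < 3; i++)
            pthread_create(&tids[i], NULL, worker, &ids[i]);

        /* Wait for all of them to finish. */
        for (int i = 0; i < 3; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }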

a. visible to the OS; b. invisible to the OS. Textbook: when a disk interrupt occurs, the system decides to stop running the current process and run the disk process, which was blocked waiting for that interrupt. Example: the disk interrupt lets the CPU scheduler move process A from the Blocked state to the Ready state. A process may be interrupted thousands of times during its execution, but the key idea is that after each interrupt the interrupted process returns to precisely the same state it was in before the interrupt occurred. Here the state means all the information the process needs to resume from where it stopped, e.g., registers and the values saved on top of the stack. If a process is running and an interrupt happens, the process is put into the Ready state, the interrupt is handled by the interrupt handler, and after the handling finishes the CPU scheduler chooses the next process to execute.

01/21/2021: Mutual exclusion with busy waiting

Disadvantage: a process keeps busy doing nothing but checking whether it can enter the critical region, wasting CPU time.
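A minimal sketch of busy-waiting mutual exclusion built on a test-and-set spin lock (the function names are invented; __atomic_test_and_set and __atomic_clear are GCC/Clang builtins, an assumption about the toolchain):

    /* 0 = critical region is free, 1 = some process is inside. */
    static char lock = 0;

    void enter_region(void) {
        /* Spin until we atomically flip the lock from 0 to 1:
           this loop is the busy waiting criticized above. */
        while (__atomic_test_and_set(&lock, __ATOMIC_ACQUIRE))
            ;   /* do nothing but burn CPU cycles */
    }

    void leave_region(void) {
        __atomic_clear(&lock, __ATOMIC_RELEASE);
    }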

Q: "The result depends on the order of access" means that if P1 comes first and P2 comes next, the result may differ from when P2 comes first and P1 comes next? A: Yes. Due to context switching, it is possible for P2 to read the old data before P1 writes the new data.
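A small demonstration of that order dependence (the counter and loop bounds are invented): two threads do an unlocked read-modify-write on shared data, so the final value varies with the interleaving:

    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;            /* shared, unprotected */

    static void *bump(void *arg) {
        for (int i = 0; i < 100000; i++)
            counter++;                 /* read, add, write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, bump, NULL);
        pthread_create(&p2, NULL, bump, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);
        /* Expected 200000, but a context switch between one thread's
           read and write lets the other thread's update be lost. */
        printf("counter = %d\n", counter);
        return 0;
    }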

A third example where threads are useful is in applications that must process very large amounts of data. The normal approach is to read in a block of data, process it, and then write it out again. The problem here is that if only blocking system calls are available, the process blocks while data are coming in and while data are going out. Having the CPU go idle when there is lots of computing to do is clearly wasteful and should be avoided if possible. Threads offer a solution. (Assumption: the CPU has no other processes to execute.)

The process could be structured with an input thread, a processing thread, and an output thread. The input thread reads data into an input buffer. The processing thread takes data out of the input buffer, processes them, and puts the results in an output buffer. The output thread writes these results back to disk. In this way, input, output, and processing can all be going on at the same time. Of course, this model works only if a system call blocks only the calling thread, not the entire process.

Explanation: if a system call blocked the entire process, the other threads within the process would be blocked too, decreasing efficiency. Within the process, the threads that deal with I/O go to the Blocked state, while the threads that use the CPU do not. The idea of the whole process being non-blocking is exactly this: the threads dealing with I/O can be blocked while the threads using the CPU continue operating. Saying that the I/O itself is non-blocking would be incorrect; non-blocking refers to the entire process, and one thread going to the Blocked state does not mean the entire process has to, so the process as a whole is considered non-blocking.

The system calls could all be changed to be nonblocking (e.g., a read on the keyboard would just return 0 bytes if no characters were already buffered), but requiring changes to the operating system is unattractive. (With user-level threads, if any thread needs I/O, it causes the entire process to block, so the threads need to be CPU-bound.) In particular, user threads should not have to make special nonblocking system calls or check in advance whether it is safe to make certain system calls.

Another common thread call is thread_yield, which allows a thread to voluntarily give up the CPU to let another thread run. Such a call is important because there is no clock interrupt to actually enforce multiprogramming among threads as there is with processes. Thus it is important for threads to be polite and voluntarily surrender the CPU from time to time to give other threads a chance to run. The above applies to user-level threads only. pthread_yield: manually control the scheduling between threads within a process.
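A rough sketch of that input/processing/output structure with pthreads (the block count, one-slot buffers, and fake I/O are all invented; a real version would read and write actual files through bounded buffers):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define NBLOCKS 8

    static int in_buf, out_buf;        /* one-slot buffers */
    static sem_t in_full, in_empty, out_full, out_empty;

    static void *input_thread(void *arg) {
        for (int i = 0; i < NBLOCKS; i++) {
            sem_wait(&in_empty);
            in_buf = i;                /* pretend: read block i from disk */
            sem_post(&in_full);
        }
        return NULL;
    }

    static void *processing_thread(void *arg) {
        for (int i = 0; i < NBLOCKS; i++) {
            sem_wait(&in_full);
            int block = in_buf;
            sem_post(&in_empty);
            sem_wait(&out_empty);
            out_buf = block * 2;       /* pretend: process the block */
            sem_post(&out_full);
        }
        return NULL;
    }

    static void *output_thread(void *arg) {
        for (int i = 0; i < NBLOCKS; i++) {
            sem_wait(&out_full);
            printf("writing result %d\n", out_buf);  /* pretend: disk write */
            sem_post(&out_empty);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tin, tproc, tout;
        sem_init(&in_full, 0, 0);  sem_init(&in_empty, 0, 1);
        sem_init(&out_full, 0, 0); sem_init(&out_empty, 0, 1);
        pthread_create(&tin, NULL, input_thread, NULL);
        pthread_create(&tproc, NULL, processing_thread, NULL);
        pthread_create(&tout, NULL, output_thread, NULL);
        pthread_join(tin, NULL);
        pthread_join(tproc, NULL);
        pthread_join(tout, NULL);
        return 0;
    }

This only overlaps input, processing, and output if a blocking call (here sem_wait, or a real read/write) blocks just the calling thread rather than the whole process, which is exactly the condition stated above.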

If the server is entirely CPU bound, there is no need to have multiple threads. Why? A: Because if the server is entirely CPU bound, no thread is ever blocked waiting for I/O, so there is nothing for the other threads to overlap with. If the server is not entirely CPU bound, some threads block waiting for I/O while the CPU handles the other threads that are in the Ready state, so having multiple threads is worthwhile. In terms of the CPU-utilization formula 1 - p^n (p = fraction of time a thread waits for I/O, n = number of threads): if no thread waits for I/O, then p = 0, so no matter how many threads there are the CPU utilization is 1, and there is no need for multiple threads.
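Worked example with illustrative numbers: if each thread waits for I/O a fraction p = 0.8 of the time and there are n = 3 threads,

    CPU utilization = 1 - p^n = 1 - 0.8^3 = 1 - 0.512 = 0.488

whereas for a purely CPU-bound server p = 0, so utilization is 1 - 0^n = 1 for every n >= 1 and extra threads buy nothing.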

Deadlock. Resource: a table in a database, a semaphore, a lock; a resource is anything that is acquired by a process, used by the process, and released by the process. Preemptable resource: one that can be taken away from a process without major impact. Process starvation: a process may wait a very long time, and while it waits it keeps other processes from acquiring the resources it owns, which can lead to deadlock. (See the two-lock sketch below.) Livelock: e.g., an Ethernet transmission collision; on a collision on the wire, both senders wait and try again, and if multiple processes wait the same amount of time, they retransmit and collide again.

If none of the processes does any I/O at all, shortest job first is better than round-robin, so under some circumstances running all processes sequentially may be the best way. Explanation: if no process uses I/O, then from the CPU's perspective it makes more sense to do shortest job first, because shortest job first minimizes the average waiting time (see the worked example under Scheduling Algorithm below).
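A classic sketch of how such a deadlock arises (the mutex names and thread bodies are invented): two threads acquire the same pair of locks in opposite order:

    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        pthread_mutex_lock(&A);    /* t1 holds A ... */
        pthread_mutex_lock(&B);    /* ... and waits for B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        pthread_mutex_lock(&B);    /* t2 holds B ... */
        pthread_mutex_lock(&A);    /* ... and waits for A: deadlock */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, t1, NULL);
        pthread_create(&b, NULL, t2, NULL);
        pthread_join(a, NULL);     /* may hang forever if the deadlock hits */
        pthread_join(b, NULL);
        return 0;
    }

If a context switch lands between the two lock calls, each thread holds the resource the other needs and neither can ever proceed; making every thread acquire the locks in the same global order removes the cycle.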

Midterm review:
Q: What is an O/S and what are some of its major areas of responsibility? A: An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Operating systems perform two essentially unrelated functions: providing application programmers (and application programs, naturally) a clean abstract set of resources instead of the messy hardware ones, and managing these hardware resources.
Q: What is a process? A: A process is just an instance of an executing program, including the current values of the program counter, registers, and variables.
Q: What is shared between processes? A: Shared memory is shared between processes.
Q: What is a multiprocessing system? A: A system with multiple CPUs.
Q: What is a thread? A: A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler.

User-space thread implementation: put the threads package entirely in user space; the kernel knows nothing about them, and as far as the kernel is concerned, it is managing ordinary, single-threaded processes. Advantages: 1. The first, and most obvious, advantage is that a user-level threads package can be implemented on an operating system that does not support threads. 2. They allow each process to have its own customized scheduling algorithm, so that each process can control the scheduling of its threads. Disadvantages: 1. It is hard to implement efficient blocking system calls, since if one user-level thread within a process blocks, all other threads within that process are blocked. 2. If a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU.

Kernel-space thread implementation: the kernel has a thread table that keeps track of all the threads in the system. When a thread wants to create a new thread or destroy an existing thread, it makes a kernel call, which then does the creation or destruction by updating the kernel thread table. Advantages: 1. When a thread blocks, the kernel, at its option, can run either another thread from the same process (if one is ready) or a thread from a different process. 2. The operating system handles the scheduling of threads. Disadvantage: there is a substantial cost for threads to make system calls, so creating threads, destroying threads, and context switching between threads all carry a large overhead.

Context switch: switching from one process to another or from one thread to another. What is saved during a context switch? The program state, such as the CPU registers' values. What constitutes a program state?

System call: in computing, a system call (commonly abbreviated to syscall) is the programmatic way in which a computer program requests a service from the kernel of the operating system on which it is executed.

Scheduling Algorithm. First come, first served: Advantage: the great strength of this algorithm is that it is easy to understand and equally easy to program. Disadvantage: a long compute-bound process at the head of the queue can slow down the I/O-bound processes behind it very much. Shortest Job First: shortest job first always produces the minimum average response time for batch systems (when all the jobs are available simultaneously).
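Worked example (illustrative run times): four batch jobs arrive together with run times 8, 4, 4, and 4.

    FCFS order 8, 4, 4, 4: turnaround times 8, 12, 16, 20 -> average 14
    SJF  order 4, 4, 4, 8: turnaround times 4,  8, 12, 20 -> average 11

Running the shortest jobs first lets three of the four jobs finish sooner, which is why SJF minimizes the average.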

Main memory (RAM) acts like a cache for the disk. The CPU always refers to virtual addresses; the MMU maps each virtual address to a physical address. When the translation goes through a page table, that page table must be in main memory.
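Worked example of that mapping (assuming a 32-bit virtual address and 4 KB pages, so the low 12 bits are the offset; all numbers invented):

    virtual address  0x00403204
    page number  = 0x00403204 >> 12   = 0x00403
    offset       = 0x00403204 & 0xFFF = 0x204
    if the page table maps page 0x00403 to frame 0x00052:
    physical address = (0x00052 << 12) | 0x204 = 0x00052204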

(Colors refer to the lecture diagram: blue = recently referenced, pink = not recently referenced.) When A is pink and at some moment is referenced again, it becomes blue. When A is blue and has not been referenced for a long time, its reference bit is cleared to 0 and A goes back to pink. There is only a slim chance that every page has been used recently. When {A, t = 0} goes from blue to pink, it is removed from the linked list and {A, t = 32} is inserted at the tail of the list. Difference between LRU and NRU? A: LRU tracks the exact order in which pages are referenced (expensive bookkeeping; see the LRU note below), whereas NRU only classifies pages by their R and M bits and evicts a page from the lowest nonempty class.
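A small sketch of that move-to-tail behavior, assuming it describes the second-chance algorithm (the node layout and names are invented; the list is assumed to hold at least two pages):

    #include <stdbool.h>

    struct page {                  /* invented node layout */
        int id;
        int load_time;             /* the t in {A, t = 0} */
        bool referenced;           /* the R bit: "blue" when set */
        struct page *next;
    };

    /* Look at the oldest page (the head). If its R bit is set, clear
       it and move it to the tail with a fresh timestamp, as if it had
       just arrived ({A, t=0} becomes {A, t=32}); otherwise evict it. */
    struct page *pick_victim(struct page **head, struct page **tail, int now) {
        for (;;) {
            struct page *p = *head;
            *head = p->next;
            if (!p->referenced)
                return p;          /* old and unreferenced: the victim */
            p->referenced = false; /* "blue" goes back to "pink" */
            p->load_time = now;
            p->next = NULL;
            (*tail)->next = p;     /* reinsert at the tail */
            *tail = p;
        }
    }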

Q: (b) Suppose that instead of a clock interrupt, a page fault occurs at tick 10 due to a read request to page 4. If page 5 has timestamp 6, V = 1, R = 0, M = 0, which page will be replaced: page 3, page 5, or a random page?

A 32-bit system can access 2^32 memory addresses, i.e., 4 GB of RAM or physical memory ideally (with extensions it can access more than 4 GB of RAM as well). A 64-bit system can access 2^64 memory addresses, i.e., about 18 quintillion bytes of RAM. In short, any amount of memory greater than 4 GB is easily handled by it.
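For reference, the arithmetic behind those figures:

    2^32 bytes = 4,294,967,296 bytes = 4 GiB
    2^64 bytes = 18,446,744,073,709,551,616 bytes ~ 1.8 x 10^19 (16 EiB)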

Point w and above cannot be reached, because at point w process B requests a printer that is owned by process A; point z and anything to its right cannot be reached, because at point z process A requests a plotter that is owned by process B; thus only paths 1 and 2 can reach point u (this refers to the resource-trajectory diagram). LRU: a lot of bookkeeping needs to be done, because we must keep track of the order in which each page is referenced. Example: page 1... Polling wastes CPU cycles; interrupts provide much lower overhead. Interrupt: sent from a device to the CPU to indicate that it needs to be serviced; interrupts handle unpredictable events well, since the device notifies you through an interrupt notification...

