
Title Assignment 3
Author Mashoor Jamal
Course Operating Systems
Institution Athabasca University
Pages 4
File Size 59.9 KB
File Type PDF


Mashoor Jamal
Assignment 3
COMP 314

1) Dynamic loading is the idea of loading routines of a program into memory only as they are needed, rather than loading the whole program into memory at the start. When a routine is needed, the program calls it, and the relocatable linking loader brings the routine into memory and updates the program's address tables; control then passes to the newly loaded routine. Dynamic loading is advantageous for large programs: it helps execution time because only a small amount of code needs to be loaded into memory, and it requires no special support from the operating system.

2) Paging is a memory-management scheme that allows the physical address space of a process to be non-contiguous. The basic method for implementing paging is to break physical memory into fixed-size blocks called frames and to break logical memory into blocks of the same size called pages. When a process executes, its pages are loaded into any available frames in physical memory. This is useful because the size of the logical address space is no longer bound by the size of physical memory. Each logical address must map correctly to the right physical address. To achieve this, a logical address is divided into two parts: a page number and a page offset. The page table maps the page number to a frame number, and combining that frame number with the offset yields the physical address.

3) The segmentation memory-management scheme views a logical address space as a collection of segments. Each segment has a number and a length; the length of a segment is referred to as its limit. When mapping a logical address to a physical address, the offset provided must fall within the segment's limit, and the physical address is the segment's base address plus the offset.
If the offset does not fall within the limit, the address is invalid and the process is illegally trying to access a part of memory it is not permitted to. The difference between segmentation and paging in terms of the user's view of memory is that with segmentation, when the user accesses a memory location, they provide a segment number and an offset, which together locate the physical address. With paging, the user provides only a single address, which the operating system (with hardware support) translates into a page number and an offset.

4) A demand-paging system loads the pages of a process from disk only when they are needed, so only the parts of a program that are actually used are brought into memory. A paging system with swapping, by contrast, swaps an entire process from memory to a backing store, and swaps the entire process back from the backing store into physical memory. The two concepts are similar, but with demand paging the entire process is never swapped in and out: only the pages the process actually references are swapped into memory, whereas a paging system with swapping takes the whole process and swaps it into memory.
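The page-number/offset split described in answer 2 can be shown as a short sketch. This is purely illustrative: the 4 KB page size and the page-table contents below are made-up values, not from the assignment.

```python
# Minimal sketch of paging address translation (hypothetical sizes).
PAGE_SIZE = 4096  # bytes per page and per frame (assumed)

# A hypothetical page table: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into page number and offset, then
    combine the mapped frame number with the offset."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]  # KeyError here models a page fault
    return frame_number * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 -> frame 2 -> physical 8196
print(translate(4100))
```

Note that the logical address space can name pages that have no frame yet; in a demand-paging system (answer 4) the missing page-table entry would trigger a page fault and the page would be loaded from disk.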

5) Page replacement algorithms are used to select a victim frame to be removed from main memory so that its space can be given to a newly faulted page. In the FIFO page replacement algorithm, the victim is the frame that was brought into memory first. This may not be a good choice: if that oldest page is still actively in use by its process, it will have to be brought back in almost immediately after being replaced, which increases the fault rate. The second-chance algorithm uses a reference bit to tell whether a page has been used recently. When a page is referenced, its reference bit is set to 1. When the algorithm considers a page for replacement and finds its reference bit set to 1, it clears the bit to 0, gives the page a second chance, and moves on to check the next page in the same way. When it finds a page whose reference bit is 0, that page has not been used recently and is selected as the victim. The second-chance algorithm thus approximates removing the least recently used page. It requires hardware support for the reference bit, which not all computers provide.

6) When a new child process is created from a parent process, it is not necessary to duplicate all of the parent's pages. Copy-on-write minimizes the number of pages that must be copied: the child process initially shares its parent's pages, and only when the child wants to modify a page does it receive its own copy of that page, rather than changing the page that belongs to the parent. With this in place, the child is not given a duplicate of every page that may or may not be used, which makes process creation much faster and more efficient. Only pages that can be modified need to be marked as "copy-on-write" pages; pages that are never modified can simply be shared.
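The second-chance behaviour in answer 5 can be sketched as a FIFO queue whose head is skipped (and its bit cleared) while its reference bit is set. The frame count and reference string below are arbitrary examples, not from the assignment.

```python
from collections import deque

class SecondChance:
    """Toy second-chance page replacement: a FIFO queue whose oldest
    page gets a reprieve while its reference bit is 1."""
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.queue = deque()   # pages in FIFO arrival order
        self.ref_bit = {}      # page -> reference bit
        self.faults = 0

    def access(self, page):
        if page in self.ref_bit:       # hit: hardware would set the bit
            self.ref_bit[page] = 1
            return
        self.faults += 1               # page fault: may need a victim
        if len(self.queue) == self.num_frames:
            while True:
                victim = self.queue.popleft()
                if self.ref_bit[victim]:
                    self.ref_bit[victim] = 0
                    self.queue.append(victim)   # second chance
                else:
                    del self.ref_bit[victim]    # evict this page
                    break
        self.queue.append(page)
        self.ref_bit[page] = 0

sc = SecondChance(3)
for p in [1, 2, 3, 1, 4]:   # page 1 is re-referenced, so it survives
    sc.access(p)
print(sc.faults)  # 4 faults; page 2 is evicted, not page 1
```

With all reference bits at 0 this degenerates into plain FIFO, which is why second chance is usually described as a refinement of it.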
7) The six basic file operations that an operating system should provide are creating a file, writing a file, reading a file, deleting a file, truncating a file, and repositioning within a file. These basic operations can be combined to perform more complex ones; for example, copying a file combines the create, read, and write operations. Creating a file requires both finding space in the file system for the file and making an entry in the file directory using the file's name and attributes. Writing a file requires the name of the file and the data to be written; with the name, the OS can locate the file with the help of the file directory, and a write pointer keeps track of where the next write should start. Reading a file requires the file name and the location in memory where the next block of the file should be placed; a read pointer tracks the next read position. For efficiency, one pointer can serve both reading and writing instead of separate read and write pointers; this combined pointer is called the current-file-position pointer. Repositioning within a file takes the file name and the value the current-file-position pointer should be set to. Deleting a file requires just the file name; once the file is found, the space it occupies can be reused by other files, and its entry must be removed from the file directory. Truncating a file is useful when the contents of the file need to be deleted but the file itself, with its attributes, should be kept.
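The claim that copying combines the basic operations can be shown with a minimal sketch. The file names and block size here are hypothetical; `seek` and `truncate` stand in for the reposition and truncate operations.

```python
import os
import tempfile

def copy_file(src_name, dst_name, block_size=4096):
    """Copy by combining the basic operations: create the destination,
    then repeatedly read a block and write it out. Each read/write
    advances the current-file-position pointer automatically."""
    with open(src_name, "rb") as src, open(dst_name, "wb") as dst:  # create
        while True:
            block = src.read(block_size)   # read
            if not block:
                break
            dst.write(block)               # write

# Usage sketch with hypothetical names in a temporary directory.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a.txt")
dst = os.path.join(tmp, "b.txt")
with open(src, "w") as f:
    f.write("hello")
copy_file(src, dst)

# Reposition and truncate: keep only the first 2 bytes of the copy.
with open(dst, "r+") as f:
    f.seek(2)        # reposition the current-file-position pointer
    f.truncate()     # truncate: contents shrink, the file remains
print(open(dst).read())
```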

8) When creating a new file, the logical file system allocates a new file control block (FCB). The appropriate file directory is then updated with the new file name and the FCB allocated to the file, and the directory is written back to disk. Depending on the file system, all FCBs may be created at file-system creation time, with free ones allocated as needed. Creating a file requires a file name; space must be found in the file structure for the new file, and on creation the file name and attributes must be added to the file directory.

9) The linear list and the hash table are both data structures used for file directories. The simple linear list is easy to implement but slow to search, which leads to noticeable lags and a poor user experience. It consists of a list of file names with pointers to the data blocks. Adding, deleting, or updating a file each requires a search: when a file is created, we must first make sure that no file with the same name already exists, and to delete or update a file we must find its entry. This search is slow because of the way the entries are stored. To help, some operating systems use a cache of recently accessed directory entries so they can be retrieved without re-reading them from disk. A hash table is a more sophisticated directory structure, used together with a linear list. It reduces search time by converting each file name into an integer identifier known as a hash; each hash value points to the file's entry in the linear list. However, it introduces a new problem: collisions. A collision occurs when two file names hash to the same value.
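The hash-table-plus-linear-list arrangement in answer 9 can be sketched as follows. The bucket count is deliberately tiny (an assumption, to force collisions), and chaining is one common way to resolve them; the entry format is made up for illustration.

```python
NUM_BUCKETS = 8   # assumed small so that collisions actually occur

class Directory:
    """Toy file directory: a linear list of entries plus a hash table
    of buckets; each bucket chains entries whose names collide."""
    def __init__(self):
        self.entries = []                              # linear list: (name, block)
        self.buckets = [[] for _ in range(NUM_BUCKETS)]

    def _bucket(self, name):
        return self.buckets[hash(name) % NUM_BUCKETS]

    def create(self, name, first_block):
        if self.lookup(name) is not None:   # names must be unique
            raise FileExistsError(name)
        entry = (name, first_block)
        self.entries.append(entry)
        self._bucket(name).append(entry)

    def lookup(self, name):
        # Only the chain in one bucket is scanned, not the whole list.
        for entry_name, block in self._bucket(name):
            if entry_name == name:
                return block
        return None

d = Directory()
d.create("notes.txt", 42)
print(d.lookup("notes.txt"))  # 42
```

The speed-up comes from scanning one short chain instead of the full linear list; the linear list is kept alongside so the directory can still be enumerated in order.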
10) Factors influencing the selection of a disk-scheduling algorithm include the number of requests and the type of requests. All algorithms behave alike when there is only one request in the queue, since there is only one place the head can go, but with multiple requests the head movement needs to be coordinated efficiently. We need to consider the type of request to ensure this efficiency: the cost of a read request depends largely on how the data is stored. If the data is stored contiguously, the amount of disk-head movement is reduced; when the data is not stored contiguously, the head will have to perform multiple movements. For efficiency, an OS can implement several disk-scheduling algorithms and choose an appropriate one based on the request.

11) SSTF stands for shortest seek time first. This algorithm always services the pending request that requires the shortest head movement, regardless of direction. Though it seems logical on the surface, it has a serious problem: a request may starve. For example, if the disk head is at cylinder 50 and requests arrive for cylinders 72 and 199, it will service 72 next; if a request for 88 then arrives, it will service 88 before 199. This cycle can continue, with new requests arriving while others are being serviced, starving the request at 199. Frequent changes of direction also slow the disk down.

12) A computer is made up of various devices that must communicate to work together. These devices share a common set of wires referred to as a bus; buses are common in computer-system architecture, and each bus has a protocol that defines how messages are sent over its wires. Examples include the expansion bus and the PCI bus. A PCI bus is generally used

for connection to fast devices, and the expansion bus is used for connection to slower devices. It is also possible for devices to be connected to one another by cable, with the last device in the series connected to the computer through a port; this arrangement is known as a daisy chain. Both the bus and the daisy chain serve as communication channels between devices. Because the daisy chain is scalable, new devices can be added to it by adding new nodes to the chain.

13) A buffer serves as a "middleman" store that facilitates the transfer of data between two devices, or between a device and an application. The three main reasons for buffering are:

1. A speed mismatch between the sending and receiving devices. If the sending device is slower than the receiving device, it is important to avoid many write() operations for very little data; with a buffer, the data sent by the slower device accumulates, and when the buffer is full the write operation is done as a batch.

2. A mismatch in data-transfer sizes between sender and receiver. This is especially relevant in networking: a buffer provides the space where packets of data sent over the network can be reassembled correctly into the original sequence in which the data was sent.

3. Data integrity. Integrity is very important when data is copied from a source to a destination. If a system call is made to transfer data and the data in the application buffer is changed afterwards, the OS can use a kernel buffer to keep a copy of the data, so that even if the source has been edited, the correct data is preserved.
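The SSTF behaviour and its starvation tendency from answer 11 can be simulated in a few lines. The cylinder numbers are the ones from the example above; the function name is invented for illustration.

```python
def sstf_order(head, requests):
    """Toy SSTF scheduler: repeatedly service the pending request
    closest to the current head position, regardless of direction."""
    pending = list(requests)
    order = []
    while pending:
        nearest = min(pending, key=lambda cyl: abs(cyl - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest   # the head moves to the serviced cylinder
    return order

# Head at 50; 72 and 88 are both serviced before the distant 199.
# If nearby requests kept arriving, 199 could be postponed indefinitely.
print(sstf_order(50, [72, 199, 88]))  # [72, 88, 199]
```

In this static example 199 is eventually reached; starvation arises in the dynamic case, where new nearby requests keep entering the queue faster than the head drifts toward the far one.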

