
Title: CS643 homeworkset 2 Arpitha Vasudeva
Author: Kevin Shah
Course: Cloud Computing
Institution: New Jersey Institute of Technology

Summary

Cloud Computing Practicals - 2...


Description

CS 643, Cloud Computing – Homework 2

1. (3 points) What is the typical application structure in Windows Azure? What type of communication is used to exchange data between application components and why?

Ans: The typical application structure in Windows Azure is called a service. The user provides definition information, configuration information, and at least one role. A role represents code (a program) with an entry point, and each role instance runs in its own virtual machine. Microsoft provides mainly two roles: a Web role and a Worker role. The Web role is generally accessed from outside the cloud, over the Internet, and it passes work to the Worker role. The Worker role runs arbitrary code in Windows Azure, so it can do anything; more recently, Microsoft also introduced a VM role, which is deployed by uploading a virtual hard disk. Communication between the Web role and the Worker role goes through the fabric: both roles have an agent that interacts with the fabric. Communication also happens through queues, and it may be asynchronous or synchronous. Queues decouple the two roles, so the Web role can return quickly while the Worker role processes messages at its own pace (a conceptual sketch of this pattern appears after Question 3).

2. (3 points) What is the most important consideration when assigning map tasks to workers in MapReduce? Why? How does the Master/Job Tracker deal with it?

Ans: The most important consideration when assigning map tasks to workers is the locality of the data to the worker, because a worker should read its task input from local disk rather than over the network. The Master/Job Tracker therefore tries to schedule each map task on a free worker that holds (or is close to) a replica of the corresponding input split (see the scheduling sketch after Question 3). The intermediate key/value pairs produced by a map task are written to local disk, partitioned into R regions (where R is the number of reduce tasks), and the locations of these regions are passed back to the master. The master then assigns each reduce task to a free worker; the reduce worker reads the intermediate key/value pairs from the map workers, applies the user's reduce function, and stores the output in GFS.

3. (4 points) Assume this simple Dryad topology: A -> B. If task B crashes, will both tasks have to be re-executed? Justify your answer.

Ans: It depends on the type of channel used between the two vertices. If the channel is a file, only B needs to be re-executed, because the file produced by A is persisted on disk and is not affected by B's crash. If the channel is a TCP pipe, both A and B need to be re-executed, since the two end-point vertices must be scheduled to run at the same time; both must be restarted to rebuild the TCP connection and exchange data over the new link. If the channel is a shared-memory FIFO, both A and B again need to be re-executed, since the two end-point vertices must run within the same process, and B's crash brings down that entire process.
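
Sketch for Question 1. This is only a conceptual illustration of the queue-based exchange between a Web role and a Worker role, using Python's standard library rather than the Azure SDK; the function names (web_role, worker_role) and the messages are hypothetical.

```python
import queue
import threading

# Conceptual sketch only: a "web role" front end hands work to a "worker role"
# through a queue, so the two components are decoupled (asynchronous exchange).
# The in-process queue.Queue stands in for an Azure storage queue.

work_queue = queue.Queue()

def web_role(requests):
    """Accepts external requests and enqueues them for background processing."""
    for req in requests:
        work_queue.put(req)      # the web role returns quickly; work is deferred
    work_queue.put(None)         # sentinel: no more work

def worker_role():
    """Pulls messages off the queue and does the actual processing."""
    while True:
        job = work_queue.get()
        if job is None:
            break
        print(f"worker processed: {job}")

t = threading.Thread(target=worker_role)
t.start()
web_role(["render report", "resize image", "send email"])
t.join()
```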
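Sketch for Question 2. A minimal, hypothetical model of locality-aware map scheduling (not Hadoop or Google code): each input split records which workers hold a local replica, and the master prefers a free worker from that list so the map task reads from local disk.

```python
# Hypothetical replica placement and free-worker list for illustration only.
splits = {
    "split-0": ["worker-A", "worker-B"],
    "split-1": ["worker-B", "worker-C"],
    "split-2": ["worker-A", "worker-C"],
}
free_workers = ["worker-C", "worker-A", "worker-B"]

assignments = {}
for split, replica_hosts in splits.items():
    # First try a free worker that already has the split locally.
    local = next((w for w in free_workers if w in replica_hosts), None)
    chosen = local or free_workers[0]   # fall back to any free worker (remote read)
    assignments[split] = chosen
    free_workers.remove(chosen)

print(assignments)   # split-2 ends up on a non-local worker as a fallback
```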
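Sketch for Question 3. A toy model of the A -> B topology with a file channel (not Dryad's actual API; vertex_A and vertex_B are illustrative names): because A materializes its output on disk, a crash in B only requires re-running B.

```python
import os
import tempfile

def vertex_A(path):
    """A writes its output to a file; this persists across B's failure."""
    with open(path, "w") as f:
        f.write("records produced by A\n")

def vertex_B(path, crash=False):
    """B reads A's file; a crash here does not destroy A's output."""
    with open(path) as f:
        data = f.read()
    if crash:
        raise RuntimeError("B crashed")
    return f"B consumed: {data.strip()}"

channel = os.path.join(tempfile.mkdtemp(), "a_to_b.txt")
vertex_A(channel)                      # A runs once and persists its output

try:
    vertex_B(channel, crash=True)      # first attempt at B fails
except RuntimeError:
    pass

print(vertex_B(channel))               # retry B alone; A is not re-executed
```

With a TCP pipe or shared-memory FIFO there is no persisted output to fall back on, so both end-point vertices would have to be rescheduled together.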

