Title: COP 4534 - Homework 3
Author: Tomas Ortega Rojas
Course: Introduction to Algorithms
Institution: Florida International University

COP-4534: Algorithm Techniques Homework 3

1) In the AVL tree data structure, we need to add an extra field height(x) to bookkeep the height of each node x, so that whenever a node becomes unbalanced we can perform rotations to correct this. How much space is required for this field for each node? Can you improve the space complexity for each node to O(1)?

Since the height of an n-node AVL tree is O(log n), we need O(log log n) bits in each node to store the node's height. The largest integer that a k-bit number can represent is 2^k − 1. Since we need to be able to represent the height of the whole tree, we set 2^k − 1 = log n, so k = log_2(log n + 1). We can improve this by storing only the balance factor, which is −1, 0, or 1 and therefore fits in 2 bits, taking O(1) (constant) space at each node. The trade-off is that we must recursively recompute the height whenever we need to recalculate a balance factor during inserts and deletes, so we are trading time efficiency for space efficiency.

2) Design a data structure which stores a sorted list of elements and supports the following operations:
Insert(x): insert a new element x into the list
Delete(x): delete an existing element x from the list
Query(x): search in the list for element x
Rank(x): find the rank of element x in the list (x can be already in the list or a new element)
Select(k): select the kth smallest element in the list

An augmented AVL tree is a good data structure for this problem. In this AVL tree, each node stores the size of the subtree rooted at that node. It is important to update the size field at each node on the path of an insertion or deletion, just as we update the height in a regular AVL tree during inserts and deletes (a small sketch of this bookkeeping is given below).

The Insert(x) operation inserts the element the same way as in a regular binary search tree, but we must rebalance every node we encounter on the way back up, when necessary, by performing left or right rotations (this includes double rotations). Insert(x) takes O(log n), since the height of an AVL tree is O(log n) and the rotate operations take constant time.

For the Delete(x) operation, we delete the element the same way as in a standard BST, but we must rebalance each node we visit to maintain the AVL property, as in the insert operation. The time complexity of Delete(x) is O(log n).
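Here is a minimal Java sketch of the size-augmented node and the size bookkeeping described above. The class name AugmentedAvl and the updateSize helper are made up for illustration; the rotation and rebalancing code of a standard AVL insert/delete is omitted, and the names AvlNode and size match the pseudocode used below.

class AugmentedAvl {
    static class AvlNode {
        int element;
        int height;           // usual AVL height field (or a 2-bit balance factor, per problem 1)
        int size;             // number of nodes in the subtree rooted at this node
        AvlNode left, right;
        AvlNode(int element) { this.element = element; this.size = 1; }
    }

    // Null-safe size, used by the rank and select pseudocode below.
    static int size(AvlNode t) {
        return (t == null) ? 0 : t.size;
    }

    // Recompute a node's size from its children. Called on every node along an
    // insertion or deletion path (after any rotations), exactly like the height
    // update in a plain AVL tree.
    static void updateSize(AvlNode t) {
        if (t != null) t.size = 1 + size(t.left) + size(t.right);
    }
}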

For Query(x) we only have to search for the element x in the tree as in a regular BST, comparing the current node with x and recursively calling Query on the left subtree if x is smaller, or on the right subtree if x is larger. Search in a BST takes O(h), where h is the height of the tree; in an AVL tree the height is O(log n), so the search operation Query(x) takes O(log n).

The Rank(x) operation is implemented using a size function that simply returns the size field of a node (0 for a null node). Here is some pseudocode for a helper method that finds the rank of x; Rank(x) calls this helper with the root node to start searching for x:

rank(x, AvlNode t){
    if ( t == null ){
        return 0;
    }
    if ( x < t.element ){
        return rank( x, t.left );
    } else if ( x > t.element ){
        // when going right, add the size of the left subtree + 1 (values less than x)
        return rank( x, t.right ) + size( t.left ) + 1;
    } else {
        return size( t.left ) + 1;
    }
}

The time complexity of the Rank(x) operation is O(log n).

To implement the Select(k) operation we use a helper method select(k, t), so that Select(k) can call select(k, t) with the root node of the AVL tree. This also takes O(log n):

select(k, AvlNode t){
    int r = size( t.left ) + 1;
    if ( k < r ){
        return select( k, t.left );
    } else if ( k > r ){
        return select( k - r, t.right );
    }
    return t.element;
}

3) Given an array of n integers, how to find the maximum element with as few comparisons as possible? What if we need to find both the maximum and the second maximum elements? What about the k largest elements in the array? (Hint: use extra space to build a tree.)

To solve this problem we can build a heap from the given array and use an extra heap of size at most k. First we build a MaxHeap from the given array using BuildHeap, which takes O(n) time; call this heap Heap1. To find the maximum element we simply read the max at the root of Heap1 and we are done (leaving Heap1 intact lets us access the children of previous maxima in the later steps). For k = 2 we create a second heap, call it Heap2, add the two children of the max of Heap1 to it, and call removeMax on Heap2. For k = 3 we take the two children of the previous max in Heap1 (which was one of the children we added from Heap1 to Heap2 in the k = 2 step), add them to Heap2 so that Heap2 has two new elements, and then extract the max of Heap2 to get the third largest. We keep doing this up to the kth element: add the two children of the previous max to Heap2 and extract the max from Heap2. Since at any point Heap2 has at most k elements, removing the max takes O(log k) and adding an element to Heap2 also takes O(log k). Each round does two things, (1) add two elements and (2) remove the max from Heap2, and we repeat this k − 1 times, so after the initial O(n) BuildHeap the running time is O(k log k).
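The following Java sketch illustrates this two-heap idea, assuming the input is an int array. The class and method names are made up for illustration, and it deviates slightly from the description above: Heap1 is left intact, and Heap2 holds positions of Heap1's array ordered by value, seeded with the root, instead of calling removeMax on Heap1.

import java.util.PriorityQueue;

class KLargest {
    // Floyd's BuildHeap: turn a[] into a max-heap in place, O(n).
    static void buildMaxHeap(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i);
    }

    static void siftDown(int[] a, int i) {
        int n = a.length;
        while (true) {
            int largest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < n && a[l] > a[largest]) largest = l;
            if (r < n && a[r] > a[largest]) largest = r;
            if (largest == i) return;
            int tmp = a[i]; a[i] = a[largest]; a[largest] = tmp;
            i = largest;
        }
    }

    // Returns the k largest values in decreasing order in O(n + k log k).
    // Assumes 1 <= k <= a.length.
    static int[] kLargest(int[] a, int k) {
        buildMaxHeap(a);                                        // Heap1
        PriorityQueue<Integer> heap2 =                          // Heap2: positions in a[],
            new PriorityQueue<>((i, j) -> Integer.compare(a[j], a[i]));  // largest value first
        heap2.add(0);                                           // root of Heap1
        int[] out = new int[k];
        for (int t = 0; t < k; t++) {
            int i = heap2.poll();                               // current maximum, O(log k)
            out[t] = a[i];
            if (2 * i + 1 < a.length) heap2.add(2 * i + 1);     // its two children in Heap1
            if (2 * i + 2 < a.length) heap2.add(2 * i + 2);
        }
        return out;
    }
}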

4) Suppose you are given an unsorted list of n distinct numbers. However, the list is close to being sorted in the sense that each number is at most k positions away from its position in the sorted list. For example, suppose the sorted list is in ascending order; then the smallest element in the original list will lie between position 1 and position k + 1, as position 1 is its position in the sorted list. Design an efficient algorithm that sorts such a list. The running time of your algorithm should depend on both n and k (a helpful sanity check on whether your running time bound makes sense is that, when k is close to n, the input list is just a generic unsorted list).

To solve this problem we use a MinHeap containing the first k + 1 elements of the list. If the list is given as an array, we can use that same array to hold the heap, ignoring the elements outside the range we want; at this point the first k + 1 elements of the array are in the heap. We then call heapify once so that every element satisfies the heap property, which takes O(k). Now we repeatedly remove the min, append it to the output array, and add the next element of the input array to the MinHeap; these two operations take O(log k) time. We keep doing this until we reach the end of the array and then drain the remaining heap elements. The time complexity of the algorithm is O(n log k), since we remove the min and add a new element to the heap n − k times.
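A short Java sketch of this algorithm, using java.util.PriorityQueue as the MinHeap (the class and method names are made up; inserting the first k + 1 elements one by one costs O(k log k) instead of the O(k) heapify mentioned above, which does not affect the overall O(n log k) bound):

import java.util.PriorityQueue;

class NearlySortedSort {
    // Sorts an array in which every element is at most k positions away from its
    // sorted position, in O(n log k) time and O(k) extra space.
    static int[] sortKSorted(int[] a, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();   // MinHeap, never more than k + 1 elements
        int[] out = new int[a.length];
        int next = 0;
        for (int x : a) {
            heap.add(x);                                       // O(log k)
            if (heap.size() > k + 1) {
                out[next++] = heap.poll();                     // smallest remaining element, O(log k)
            }
        }
        while (!heap.isEmpty()) {                              // drain the last k + 1 elements
            out[next++] = heap.poll();
        }
        return out;
    }
}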

5) Let H1 and H2 be two (binary) max-heaps with n1 and n2 elements respectively. If every element in H1 is larger than every element in H2, design an algorithm that merges these two heaps into one (binary) max-heap in O(n2) time (assume that both H1 and H2 are large enough to hold n1 + n2 elements).

Since all the elements in H1 are larger than the elements in H2, we can add each element of H2 to H1 as a leaf and swap upward whenever we see that we are violating the heap property. Since both H1 and H2 are represented as arrays and H1 has enough space to contain both heaps, we can simply append the elements of H2 to H1 one at a time, checking that each appended element is smaller than its parent; if it is not, we swap them, and then continue appending elements from H2 to H1. Swapping past an element of H1 is never necessary, because every element of H1 is larger than every element of H2. This algorithm takes O(n2), where n2 is the number of elements in H2, since each swap takes constant time.
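A rough Java sketch of this merge, assuming both heaps are stored as 0-indexed arrays and h1 has capacity for n1 + n2 elements (the class and method names are made up). The inner loop stops as soon as the appended element's parent is at least as large, and in particular it never swaps past an element of H1.

class HeapMerge {
    // Appends the n2 elements of h2 (a max-heap) to h1 (a max-heap whose every
    // element is larger), swapping each new leaf upward while it is bigger than
    // its parent. Returns the size of the merged heap.
    static int mergeMaxHeaps(int[] h1, int n1, int[] h2, int n2) {
        for (int i = 0; i < n2; i++) {
            int pos = n1 + i;
            h1[pos] = h2[i];                                  // add as the next leaf
            while (pos > 0 && h1[(pos - 1) / 2] < h1[pos]) {  // heap property violated?
                int parent = (pos - 1) / 2;
                int tmp = h1[parent]; h1[parent] = h1[pos]; h1[pos] = tmp;
                pos = parent;
            }
        }
        return n1 + n2;
    }
}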

