PS2 Solutions - Niall Mangan HW Solution Autumn 2016

Course: Applied Linear Algebra and Numerical Analysis
Institution: University of Washington




AMATH 352 PS2 Solutions (Written)
Cosmin Naziru & Niall Mangan
November 4, 2016

Problem Set 2

Problem 2.1 Computational complexity.

(a) Now we are going to see how the computational complexity scales with the size of our N × N matrix. Write a loop from N = 1 to 100 that:
- creates a matrix of size N with random entries,
- calls GE_count for each matrix,
- saves the number of operations in a vector, countvector (one entry for each matrix),
- defines a vector, Nvector, from 1 to 100.
You can generate random matrices of size N × N using the MATLAB command rand(N).

(b) Make a plot of the size of the matrix (x-axis) against the number of scaling operations and the number of addition/replacement operations (y-axis).

Solution: Here is the code I used to generate these plots:

%script to make plots for PS2
Nvector = 1:100;
for kk = Nvector
    A = rand(kk,kk);
    [a_out, count] = GE_count(A);
    countvector(kk) = count;
end
figure(1) %generates a new figure
plot(Nvector, countvector, 'o') %Nvector and countvector need to be the same length
hold on
plot(Nvector, Nvector.^3)
xlabel('Size of Matrix, N')
ylabel('Number of operations')
title('Computational complexity of GE')
figure(2) %generates a new figure
loglog(Nvector, countvector, 'o') %Nvector and countvector need to be the same length
hold on
loglog(Nvector, Nvector.^3)
xlabel('Size of Matrix, N')
ylabel('Number of operations')
title('Computational complexity of GE')
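The GE_count routine called above is not reproduced in this excerpt. As a minimal sketch of what an operation-counting forward elimination might look like, translated to Python/NumPy (the function name, return values, and exact counting convention are assumptions, including the zero-entry skip described later in the solution):

```python
import numpy as np

def ge_count(A):
    """Forward elimination with an operation counter (a sketch).

    Returns the eliminated matrix and the total number of floating
    point operations, skipping rows whose pivot-column entry is
    already zero.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    count = 0
    for k in range(n - 1):              # pivot column
        for i in range(k + 1, n):       # rows below the pivot
            if A[i, k] == 0:
                continue                # already zero: no operations
            factor = A[i, k] / A[k, k]  # 1 division
            count += 1
            A[i, k:] -= factor * A[k, k:]
            count += 2 * (n - k)        # n-k multiplications, n-k additions
    return A, count

# A 3x3 example in which no entry becomes zero prematurely:
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 1.0, 4.0]])
U, count = ge_count(A)
print(count)  # 2*(1 + 2*3) + (1 + 2*2) = 19
```

Calling this in place of GE_count inside the loop above would reproduce the experiment, up to the exact counting convention used in the course code.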

The plot shows how the number of operations required to perform Gaussian elimination increases with the size of a square matrix (N × N). The blue circles show the operation count for each matrix; for simplicity, randomly generated square matrices were used. Also plotted is N vs. N^3 as an orange line. For the most part, the blue circles from the numerical experiment fall on the orange line.

(c) Do the operations both grow like O(N^3)? Plot on a log-log scale to check. You can also plot N vs. N^3 to compare the slope.

Solution:


This plot is the same as the plot for 2.1(b), except that it is on a log-log scale. Because the operation count gets very large for matrices near N = 100, it was hard to see the growth on the standard plot. On the log-log scale, N^3 appears as a straight line with slope 3. Our operation count follows this line almost exactly, especially as N → ∞, which is the limit we care about when determining the order of the operations, O(N^3). If you go back and look at the formula for the operation count, it has lower-order N dependence that may influence the count for small N. From both of these plots we can conclude that the operation count does grow like N^3. There is some deviation at large N. Note that this deviation always decreases the number of operations: the Gaussian elimination code I gave you checks whether an entry is already zero before performing the operation, and if it is zero it skips that row, reducing the number of operations required.

(d) Find the number of floating point operations required to compute the following (using the big-O notation introduced in class). Explain your reasoning in each case.
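The slope-3 behavior can also be checked numerically: if count ≈ c·N^3, then log(count) ≈ log(c) + 3·log(N), so a linear fit of log(count) against log(N) should return a slope near 3. A small Python sketch, using the leading term (2/3)N^3 of the Gaussian elimination flop count as a stand-in for the measured data:

```python
import numpy as np

# Stand-in for the measured operation counts: the leading term
# of the Gaussian elimination flop count is (2/3) N^3.
N = np.arange(1, 101)
counts = (2.0 / 3.0) * N**3

# Fit log(count) = slope * log(N) + intercept.
slope, intercept = np.polyfit(np.log(N), np.log(counts), 1)
print(round(slope, 6))  # 3.0: a pure cubic gives slope exactly 3
```

Running the same fit on the measured countvector would give a slope slightly below 3 for small N, for exactly the reasons described above.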


(a) Compute the sum A + B for A, B ∈ R^{m×n}.

Solution: Each of the mn entries of A + B requires one addition: (A + B)_{ij} = a_{ij} + b_{ij}. Hence the cost of computing A + B is mn floating point operations, which is O(mn).

(b) Compute the product Ax for x ∈ R^n and A ∈ R^{n×n}, where A is upper triangular.

Solution: Since A is upper triangular, we can ignore the entries below the diagonal when computing Ax. The first entry of Ax is the dot product of the first row of A with x, so it costs 2n − 1 operations (n multiplications and n − 1 additions). The next entry is the dot product of the second row of A with x, but we can ignore the first entry of that row, so it is really a dot product of two vectors of length n − 1, costing 2(n − 1) − 1 = 2n − 3 floating point operations. In general, the kth entry of Ax is a dot product of two vectors of length n − k + 1, costing 2(n − k + 1) − 1 operations; reindexing the sum from the last row to the first, the costs are 2k − 1 for k = 1, …, n. Adding up the contributions from every row, the total number of floating point operations is

\sum_{k=1}^{n} (2k - 1) = 2\sum_{k=1}^{n} k - \sum_{k=1}^{n} 1 = 2 \cdot \frac{n(n+1)}{2} - n = n^2.

This is O(n^2).
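Both counts can be verified empirically by instrumenting the computation. A Python sketch, assuming hypothetical helper functions that increment a counter once per floating point operation:

```python
import numpy as np

def add_count(A, B):
    """Entrywise sum with a flop counter: one addition per entry."""
    m, n = A.shape
    C = np.empty((m, n))
    flops = 0
    for i in range(m):
        for j in range(n):
            C[i, j] = A[i, j] + B[i, j]
            flops += 1
    return C, flops

def upper_tri_matvec_count(A, x):
    """Upper-triangular matrix-vector product with a flop counter."""
    n = len(x)
    y = np.zeros(n)
    flops = 0
    for i in range(n):
        # Row i has n - i nonzero entries: n - i multiplications
        # and n - i - 1 additions, i.e. 2(n - i) - 1 flops.
        for j in range(i, n):
            y[i] += A[i, j] * x[j]
            flops += 2
        flops -= 1  # the first product needs no accompanying addition
    return y, flops

m, n = 4, 6
A = np.random.rand(m, n); B = np.random.rand(m, n)
_, flops_sum = add_count(A, B)
print(flops_sum)   # m*n = 24

T = np.triu(np.random.rand(n, n)); x = np.random.rand(n)
_, flops_mv = upper_tri_matvec_count(T, x)
print(flops_mv)    # n^2 = 36
```

The counters reproduce mn for the sum and exactly n^2 for the triangular matrix-vector product, matching the formulas above.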


