
Title: Course Summary 2
Course: Elementary Linear Algebra
Institution: University of Virginia

Summary

Summary of a third of the course....


Description

3351 / 2017 Spring / Prep sheet for the second midterm.



Disclaimer: The following topics are not meant to be exhaustive, but they do reflect the instructor's view of the material.

General: We'll test on material from sections 4.1-4.7 and 5.1-5.3.

Computations: We saw diverse computations in these chapters; you've taken shots at these in homework. These are the ones that come to my mind first:¹

• Know how to reduce a given spanning set of a finite-dimensional vector space to a basis. This is the computational side of the Spanning Set Theorem, and will likely take place in R^n (4.3). For example, see 15-18 on page 216. This is also the method you use to find the dimension of a subspace given as span{v1, ..., vn}.
• Know how to enlarge a given linearly independent set of a finite-dimensional vector space to a basis. This is the computational side of Theorem 11, and will likely take place in R^n (4.3). See the example given in class and the old handouts.
• Fix a matrix A. Find the null space, column space, and row space of A, and find bases for these spaces (e.g. the pivot columns for Col A, the nonzero rows of the reduced echelon form for Row A, and the vectors whose weights are the free variables for Nul A) (4.2, 4.6). Compute rank A = dim Row A = dim Col A, and dim Nul A.
• Fix a basis B of V and write down [x]_B for x in V (e.g. pg 223/30, 31). Fix another basis C of V:
  1. Compute P_{C←B} when the C-coordinates of the elements of B are known (4.4, 4.7).

  2. Compute P_{C←B} when the E-coordinates (i.e. the standard coordinates) of the elements of B and C are known. Here

         P_{C←B} = P_{C←E} P_{E←B} = (P_{E←C})^{-1} P_{E←B}.   (4.7)

     Do note the notational difference between 4.4 and 4.7 (e.g. P_{E←B} in (4.7) is P_B in (4.4)).
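The change-of-basis formula above can be checked against a hand computation; here is a minimal sketch in Python with sympy (not part of the course materials, and the bases b1, b2, c1, c2 are made up for illustration):

```python
from sympy import Matrix

# Hypothetical bases B = {b1, b2} and C = {c1, c2} of R^2,
# both given in standard (E-)coordinates.
b1, b2 = Matrix([1, 0]), Matrix([1, 1])
c1, c2 = Matrix([2, 1]), Matrix([1, 1])

P_EB = Matrix.hstack(b1, b2)   # P_{E<-B}: columns are the B-vectors in E-coordinates
P_EC = Matrix.hstack(c1, c2)   # P_{E<-C}
P_CB = P_EC.inv() * P_EB       # P_{C<-B} = (P_{E<-C})^{-1} P_{E<-B}

# Sanity check: P_{C<-B} [x]_B should give [x]_C.  Take x = b1, so [x]_B = (1, 0).
x_C = P_CB * Matrix([1, 0])    # coordinates of b1 relative to C
assert P_EC * x_C == b1        # converting back to E-coordinates recovers b1
print(P_CB)
```

The check at the end is exactly the defining property of P_{C←B}: it converts B-coordinates into C-coordinates.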

¹ For starters, don't forget how to do the three fundamental calculations (as shown on the Midterm 1 prep sheet): row reduction, matrix multiplication, and computing determinants.
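The first bullet (reducing a spanning set to a basis via pivot columns) can be sketched in sympy as a check on the hand computation the course expects; the vectors here are made up:

```python
from sympy import Matrix

# Hypothetical spanning set in R^3; v3 = v1 + v2, so it is redundant.
v1, v2, v3 = Matrix([1, 0, 1]), Matrix([0, 1, 1]), Matrix([1, 1, 2])
A = Matrix.hstack(v1, v2, v3)

# Row-reduce; the pivot columns of A form a basis of span{v1, v2, v3}.
_, pivots = A.rref()
basis = [A.col(j) for j in pivots]
print(pivots)       # (0, 1): the first two columns are pivot columns
print(len(basis))   # 2, the dimension of the span
```

Note that the basis vectors are taken from the original matrix A, not from its reduced echelon form.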

• Be able to tell whether an n×n matrix A (n is likely ≤ 4) is diagonalizable (into a real matrix), and find the P and D in P^{-1}AP = D (5.3); follow the algorithm given in class. (One remark for advanced students: to be precise, if there are ANY complex eigenvalues, then A won't be diagonalizable into a real matrix. So the algorithm can proceed only when all eigenvalues are real. See one of the practice problems below.)

Advice: Read one example from the text, look up one HW problem, then do one more on your own.

Note: We now have two ways of computing the inverse of an invertible square matrix: one by row reduction and one by Cramer's rule (pg 181). How do you pick between the two? Well, it depends on whether you like row reduction or computing determinants more.
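The diagonalization bullet can also be spot-checked with sympy's diagonalize (a verification aid, not a substitute for the in-class algorithm; the matrix is made up):

```python
from sympy import Matrix

A = Matrix([[4, 1],
            [2, 3]])           # hypothetical 2x2 example

# Characteristic polynomial: (4-t)(3-t) - 2 = t^2 - 7t + 10 = (t-2)(t-5),
# so the eigenvalues are real and distinct and A is diagonalizable.
P, D = A.diagonalize()         # raises an error if A is not diagonalizable
assert P.inv() * A * P == D    # the defining relation P^{-1} A P = D
print(D)                       # diagonal matrix with eigenvalues 2 and 5
```

If A had a complex eigenvalue, diagonalize would still succeed over the complex numbers, which is exactly the distinction the remark for advanced students is making.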

Theory: This is mainly tested through multiple-choice and True/False problems. You're responsible for knowing the standard terminology (e.g. null space, kernel/range, subspaces, eigenspaces, etc.) and using it correctly, especially the terms we defined in lecture. Like last time, we won't mention here the theorems that are more algorithmic or descriptive in nature (like pg 212/Thm 6 and pg 231/Thm 13; you'll have to know them to do computations, though). The following comes to my mind. Let our vector space be V.

• When is a subset of V a subspace? A specific example that is surely a subspace: span{v1, ..., vn} for vectors v1, ..., vn in V.
• The null space and the column space of A are usually (very) different.
• The kernel and range of a linear transformation. When the transformation is a matrix transformation x ↦ Ax, they coincide with Nul A and Col A.
• Linear independence/dependence in a general vector space is defined as in R^n, with similar general arguments (e.g. Theorem 4).
• What is a basis of V? The Spanning Set Theorem. You can think of bases as the largest linearly independent sets and the smallest spanning sets (see pg 213).
• Isomorphic vector spaces. They need not be equal, but can be considered faithful "mirror images" of each other: an isomorphism gives an identification of elements and operations.
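The isomorphism point is very concrete: P_2 is isomorphic to R^3 via the coordinate map relative to {1, t, t^2}, so independence of polynomials can be tested on their coordinate vectors. A sketch with made-up polynomials:

```python
from sympy import Matrix

# Coordinates of p(t) = a + b*t + c*t^2 relative to the basis {1, t, t^2}
# are the vector (a, b, c).  Hypothetical polynomials: 1+t, 1-t, t^2.
coords = [Matrix([1, 1, 0]), Matrix([1, -1, 0]), Matrix([0, 0, 1])]
A = Matrix.hstack(*coords)

# The polynomials are linearly independent iff their coordinate vectors
# are, i.e. iff A has a pivot in every column.
print(A.rank())   # 3, so {1+t, 1-t, t^2} is independent (hence a basis of P_2)
```

This is exactly what "faithful mirror image" buys you: a question about polynomials becomes a question in R^3.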

• Dimension is the size of ANY basis of V; V is either finite- or infinite-dimensional. Examples: R^n (finite); P, the space of all real polynomials (infinite); the space of all real functions from R to R (infinite); P_n, the space of all real polynomials of degree ≤ n (finite). The last example has basis {1, t, ..., t^n}. One more: {0} (zero-dimensional: its basis is the empty set).
• For section 4.5, think as if you're in R^n. Theorems 9 and 12 have parallels in chapter one. Theorem 11 is a fundamental one that rules out weird possibilities such as a large-dimensional space curling up inside a smaller-dimensional space as a subspace.
• The Rank Theorem!
• Additional entries to the IMT (pg 237, 277).
• Eigenvalues, eigenvectors, eigenspaces, multiplicity, the characteristic polynomial/equation.
• Theorems 2 and 4 of chapter 5: they are the cornerstones of the diagonalization theorem/process, and also important in their own right. You can use Theorem 2 as a black box; there is no requirement to learn the proof.
• The zero vector is not an eigenvector.
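The Rank Theorem (rank A + dim Nul A = number of columns of A) can be spot-checked in sympy; the matrix below is made up:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])           # hypothetical 2x3 matrix; row 2 = 2 * row 1

rank = A.rank()                   # dim Col A = dim Row A
nullity = len(A.nullspace())      # dim Nul A = number of basis vectors of Nul A
assert rank + nullity == A.cols   # the Rank Theorem
print(rank, nullity)              # 1 2
```

Since the two rows are parallel, the rank is 1, leaving two free variables and hence a two-dimensional null space.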

Modeling/Graphing: There will be no modeling problems that you'll have to build up from scratch on this test, nor will there be any graphing problems.


