Cheatsheet PDF

Title Cheatsheet
Author Bernard Yip
Course Linear Algebra I
Institution National University of Singapore
Pages 3
File Size 152.7 KB
File Type PDF

Summary

Cheatsheet used for finals...


Description

Homogeneous System

a11 x1 + … + a1n xn = 0
⋮
am1 x1 + … + amn xn = 0

 x1 = … = xn = 0 is always a solution (the trivial solution); any other solution is non-trivial
 A homogeneous system has either only the trivial solution or infinitely many solutions
 More unknowns than equations ⇒ infinitely many solutions

Solutions: when solutions are given, substitute them into the variables to get linear equations to solve for the unknowns in the equation.
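The trivial vs. infinitely-many dichotomy above can be checked numerically: Ax = 0 has non-trivial solutions exactly when rank(A) < number of unknowns. A minimal numpy sketch (the helper `homogeneous_solution_type` is hypothetical, not from the cheatsheet):

```python
import numpy as np

def homogeneous_solution_type(A: np.ndarray) -> str:
    """Classify the solution set of Ax = 0: non-trivial solutions exist
    exactly when rank(A) is less than the number of unknowns."""
    m, n = A.shape  # m equations, n unknowns
    if np.linalg.matrix_rank(A) < n:
        return "infinitely many solutions"
    return "only the trivial solution"

# 2 equations, 3 unknowns -> more unknowns than equations
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(homogeneous_solution_type(A))  # infinitely many solutions

B = np.eye(3)  # full rank: only x = 0
print(homogeneous_solution_type(B))  # only the trivial solution
```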

Matrix Operations

Matrices are equal when (prove both): same size & corresponding entries equal. Keep the order of matrices in any product.

 A + B = B + A
 A + (B + C) = (A + B) + C
 c(A + B) = cA + cB
 (cd)A = c(dA) = d(cA)
 A + 0 = 0 + A = A
 A − A = 0
 0A = 0 & IA = AI = A
 AB ≠ BA in most cases

Transpose Properties

Let A be an m × n matrix:
 (A^T)^T = A
 If B is an m × n matrix, (A + B)^T = A^T + B^T
 (cA)^T = c A^T
 If B is an n × p matrix, (AB)^T = B^T A^T

Invertible Properties

Let A be invertible:
 cA is invertible (c ≠ 0) and (cA)^-1 = (1/c) A^-1
 A^T is invertible and (A^T)^-1 = (A^-1)^T
 A^n is invertible and (A^n)^-1 = (A^-1)^n
 A^-1 is invertible and (A^-1)^-1 = A
 If B is invertible, AB is invertible and (AB)^-1 = B^-1 A^-1
 A^r A^s = A^(r+s)

 A is invertible ⟺ RREF(A) has no zero row
 If A and B are invertible, so is AB

All elementary matrices are invertible; the inverse undoes the operation:

Multiplying a row by k, k ≠ 0 — the inverse multiplies the same row by 1/k:
[1 ⋯ 0; ⋮ k ⋮; 0 ⋯ 1]^-1 = [1 ⋯ 0; ⋮ 1/k ⋮; 0 ⋯ 1]

Swapping 2 rows — the matrix and its inverse are the same:
[0 1 ⋯ 0; 1 0 ⋯ 0; 0 ⋯ 1]^-1 = [0 1 ⋯ 0; 1 0 ⋯ 0; 0 ⋯ 1]

xi = det(Ai) / det(A) (Cramer's Rule)

Span/Subspace

 {0} = span{0} = the zero space
 Every subspace of R^n contains 0 ∈ R^n

Show span of R^n: Let A = [u1 u2 … un], R = RREF(A). If R has no zero row, Ax = b is consistent for every b, so the span is R^n.

Determinants

det(A) = a11 if n = 1
det(A) = a11 A11 + … + a1n A1n if n > 1

(i, j)-cofactor of A: Aij = (−1)^(i+j) det(Mij), where Mij is the matrix obtained by removing row i and column j of the original matrix.
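The recursive definition above translates directly into code. A Python sketch (the helper `cofactor_det` is hypothetical; numpy is used only for deleting rows/columns to build the minors Mij):

```python
import numpy as np

def cofactor_det(A):
    """Determinant by cofactor expansion along the first row:
    det(A) = a11*A11 + ... + a1n*A1n, where the (i, j)-cofactor is
    Aij = (-1)**(i+j) * det(Mij), and Mij deletes row i and column j."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor M_{1, j+1}: remove the first row and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * cofactor_det(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(cofactor_det(A))  # -3.0
```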

Shortcuts to find determinant

 2 × 2: ad − bc (cross)
 3 × 3: criss-cross rule
 If A is a triangular matrix, det(A) = a11 a22 … ann
 Expand along the row/column with the most 0s

Elementary row operations on determinants (A → B):

 k Ri: det(B) = k det(A)
 Ri ↔ Rj: det(B) = −det(A)
 Ri + k Rj: det(B) = det(A)

A vector is a linear combination of v1 … vj if RREF([v1 … vj | s1 | … | sn]) is consistent.

General Solution

(x1; x2; x3) = t1 (r11; r12; r13) + t2 (r21; r22; r23)

Solution Set/Space = span{(r11, r12, r13), (r21, r22, r23)}

Linear Independence

Determinant Properties

 det(BA) = det(B) det(A) = det(AB)
 det(cA) = c^n det(A), since c multiplies each of the n rows

Show S = {s1 … sn} spans V:
1. RREF([v1 … vj | s1 | … | sn]) to show each si is a linear combination of v1 … vj
2. If consistent, then true
3. Show span(S) ⊆ span(V) and vice versa (AKA subspace argument)

Solution spaces of homogeneous systems are subspaces.

 If A is a square matrix, det(A^T) = det(A)
 If A is a square matrix with 2 identical rows/columns, det(A) = 0

#nonzero rows = #leading entries = #pivot columns

Examples of non-subspaces:
 Does not contain the zero vector
 cv ∉ A for some v ∈ A, c ∈ R
 u + v ∉ A for some u, v ∈ A

Given S = {u1 … un}, do RREF([S | 0]):
 Infinitely many solutions ⇒ dependent; otherwise independent
 Proof: show that only the trivial solution exists
 Can add a vector u(n+1) that is independent of S, and the set is still linearly independent
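The RREF test above reduces to a rank check. A numpy sketch (the helper `is_independent` is hypothetical): the vectors are independent exactly when the matrix with the vectors as columns has rank equal to the number of vectors.

```python
import numpy as np

def is_independent(vectors):
    """Vectors are linearly independent iff Ax = 0 (columns = vectors)
    has only the trivial solution, i.e. rank equals # of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 1.0])
print(is_independent([u1, u2]))            # True
print(is_independent([u1, u2, u1 + u2]))   # False: u1+u2 is a lin. combi.
```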

S is a Basis for R^n

*The empty set is the basis for the zero space; dim(0) = 0*

Any 2 of these 3 conditions suffice:
1. S is linearly independent
2. S spans R^n (or V)
3. |S| = dim(R^n) = n (or dim(V))

adj(A) = [A11 ⋯ An1; ⋮ ⋱ ⋮; A1n ⋯ Ann] — the transpose of the matrix of cofactors

 RREF([s1 … sn]), vectors as columns, without the zero vector
 No zero row (condition 2) & all pivot columns (condition 1) ⇒ basis
 # of vectors = # of dimensions, i.e. n (MUST)

Aij = (−1)^(i+j) det(Mij)

A^-1 = (1/det(A)) adj(A)

A [adj(A)] = (bij) n×n, where bij = ai1 Aj1 + … + ain Ajn; this equals det(A) when i = j and 0 otherwise, so A adj(A) = det(A) I.

Coordinate Systems

u = v ⟺ (u)S = (v)S for u, v ∈ V

If RREF(A) has no zero row, then Ax = b is consistent for every b, and the span of the columns of A is R^n.

Adjoints

A^-1 = (1/det(A)) adj(A)
det(A^-1) = 1/det(A); det(A) = det(A^T)
Use RREF: if RREF(A) has a zero row, A is singular (not invertible).


Getting the inverse: (A | I) → RREF → (I | A^-1); B = A^-1 ⟺ BA = AB = I.

Cramer's Rule

Let A = (aij) n×n, x = (x1, …, xn)^T, b = (b1, …, bn)^T, and Ax = b.
1. Let Ai be the matrix where the i-th column of A is replaced by b
2. xi = det(Ai) / det(A)
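The two steps above can be sketched in numpy (the helper `cramer_solve` is hypothetical; it requires det(A) ≠ 0 and is shown for illustration, not as an efficient solver):

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
    its i-th column replaced by b. Requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [5.0, 10.0]
print(cramer_solve(A, b))  # [1. 3.]
```

The result matches `np.linalg.solve(A, b)`; in practice Gaussian elimination is used because computing n + 1 determinants is far more expensive.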

Invertible Matrices

Let A be a square matrix; these are equivalent:
1. A is invertible
2. Ax = 0 has only the trivial solution
3. RREF(A) is an identity matrix
4. A can be expressed as a product of elementary matrices
5. det(A) ≠ 0
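Several of these equivalent conditions can be verified on one example matrix. A numpy sketch (the specific matrix is just an illustration):

```python
import numpy as np

# Checking equivalent invertibility conditions on one matrix:
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

assert abs(np.linalg.det(A)) > 1e-12      # det(A) != 0
assert np.linalg.matrix_rank(A) == n      # rank(A) = n (full rank)
Ainv = np.linalg.inv(A)                   # A is invertible
assert np.allclose(A @ Ainv, np.eye(n))   # A A^-1 = I
# Ax = 0 has only the trivial solution:
assert np.allclose(np.linalg.solve(A, np.zeros(n)), np.zeros(n))
print("all equivalent conditions hold")
```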



 AB — pre-multiplication of A to B
 BA — post-multiplication of A to B
 When AB = 0, possibly A ≠ 0 and B ≠ 0
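The last point is easy to see with a concrete pair of matrices; a numpy illustration (the matrices are hypothetical examples):

```python
import numpy as np

# AB = 0 does not force A = 0 or B = 0:
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

assert np.allclose(A @ B, np.zeros((2, 2)))  # AB = 0 ...
assert A.any() and B.any()                   # ... yet A != 0 and B != 0
print(A @ B)
```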


6. The rows / columns of A form a basis for R^n
7. Rank(A) = n
8. 0 is not an eigenvalue of A

Let A and B be square matrices of the same size: A & B are invertible with A^-1 = B & B^-1 = A ⟺ AB = BA = I.

Adding k times a row to another row — the inverse adds −k times that row:
[1 ⋯ k; ⋮ 1 ⋮; 0 ⋯ 1]^-1 = [1 ⋯ −k; ⋮ 1 ⋮; 0 ⋯ 1]

 (c1 v1 + … + cn vn)S = c1 (v1)S + … + cn (vn)S
 v1 … vn linearly independent ⟺ (v1)S … (vn)S linearly independent
 span{v1 … vn} = V ⟺ span{(v1)S … (vn)S} = R^k

Dimensions/Ranks/Nullity

 dim = # of vectors in a basis; 0 for the zero space
 rank = dimension = # nonzero rows = # leading entries = # pivot columns
 dim(row space(A)) = dim(column space(A))
 rank(AB) ≤ min{rank(A), rank(B)}
 nullity(A) = dim(nullspace(A)) ≤ n
 rank(A) = rank(A^T)
 rank + nullity = # columns
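The rank-nullity relation and the rank inequalities can be checked on a small example (the matrix is a hypothetical illustration with one dependent row):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # 2x the first row: rank drops
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # rank + nullity = # of columns
print(rank, nullity)               # 2 1

assert rank == np.linalg.matrix_rank(A.T)  # rank(A) = rank(A^T)
B = np.ones((3, 3))
assert np.linalg.matrix_rank(A @ B) <= min(rank, np.linalg.matrix_rank(B))
```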

Orthogonal/Orthonormal Basis

To show S is an orthogonal basis of V:
1. S is orthogonal/orthonormal
2. |S| = dim(V) OR span(S) = V

 The nullspace is orthogonal to the row space

(Orthogonal) Projections

Projection p of vector u onto the subspace V with orthogonal basis {v1, …, vn}:

p = (u·v1)/(v1·v1) v1 + … + (u·vn)/(vn·vn) vn

 If the basis is orthonormal, vi·vi = 1 and the denominators drop out.
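The projection formula can be sketched directly (the helper `project` is hypothetical and assumes the basis vectors are mutually orthogonal, as the formula requires):

```python
import numpy as np

def project(u, basis):
    """Orthogonal projection of u onto span(basis), assuming the basis
    vectors are mutually orthogonal: p = sum (u.vi / vi.vi) * vi."""
    return sum((u @ v) / (v @ v) * v for v in basis)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
u = np.array([3.0, 4.0, 5.0])

p = project(u, [v1, v2])
print(p)  # [3. 4. 0.]
# The residual u - p is orthogonal to the subspace:
assert np.isclose((u - p) @ v1, 0.0) and np.isclose((u - p) @ v2, 0.0)
```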

Gram-Schmidt Process

v1 = u1
v2 = u2 − (u2·v1)/(v1·v1) v1
v3 = u3 − (u3·v1)/(v1·v1) v1 − (u3·v2)/(v2·v2) v2
⋮
vn = un − (un·v1)/(v1·v1) v1 − … − (un·v(n−1))/(v(n−1)·v(n−1)) v(n−1)

Solution Spaces

RREF([A | 0]): # arbitrary parameters = # non-pivot columns = # dimensions of the solution space
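The process above can be sketched in a short loop (the helper `gram_schmidt` is hypothetical; each new vector subtracts its components along the already-built orthogonal vectors):

```python
import numpy as np

def gram_schmidt(vectors):
    """v1 = u1; v_k = u_k - sum_{i<k} (u_k.v_i / v_i.v_i) v_i."""
    basis = []
    for u in vectors:
        v = u.astype(float).copy()
        for w in basis:
            v -= (u @ w) / (w @ w) * w   # remove component along w
        basis.append(v)
    return basis

u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, 0.0, 1.0])
v1, v2 = gram_schmidt([u1, u2])
assert np.isclose(v1 @ v2, 0.0)          # orthogonal

# Normalizing each vi gives an orthonormal basis:
q1 = v1 / np.linalg.norm(v1)
q2 = v2 / np.linalg.norm(v2)
assert np.isclose(np.linalg.norm(q1), 1.0)
```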

Transition Matrices

P is a transition matrix s.t. [w]T = P [w]S
P = [[s1]T … [sn]T] for (S → T), e.g. s1 = s11 t1 + … + s1n tn
1. RREF([t1 … tm | s1 | … | sn])
2. P = [s1' … sn'], where si' is si after RREF, i.e. the vector si relative to basis T
 Opposite direction (T → S): use the inverse, [w]S = P^-1 [w]T
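The recipe can be sketched numerically: instead of hand RREF, each column [si]T is found with a linear solve. A numpy illustration (the bases `S` and `T` are hypothetical examples in R^2):

```python
import numpy as np

# Basis T and basis S of R^2 (hypothetical example vectors):
t1, t2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
s1, s2 = np.array([2.0, 1.0]), np.array([0.0, 1.0])

# Column i of P is [s_i]_T: solve T_mat @ coords = s_i
# (the same computation RREF([t1 t2 | s1 | s2]) performs by hand).
T_mat = np.column_stack([t1, t2])
P = np.linalg.solve(T_mat, np.column_stack([s1, s2]))
print(P)  # [[ 1. -1.] [ 1.  1.]]

# Check [w]_T = P [w]_S for w = 3*s1 + 2*s2:
w_S = np.array([3.0, 2.0])
w = 3 * s1 + 2 * s2
w_T = np.linalg.solve(T_mat, w)
assert np.allclose(w_T, P @ w_S)
```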

Row/Column Spaces

Row/column space of A = column/row space of A^T.

Can normalize to get orthonormal basis

Best Approximations / Least Squares Solutions

1. Do the Gram-Schmidt process (to get an orthogonal basis)
2. Calculate the projection p of b using the orthogonal basis
3. Solve Ax = p for the least squares solution

Shortcut Method: solve A^T A x = A^T b for x; x is the least squares solution of Ax = b.

Row space is preserved through row operations; column space is NOT preserved.
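The shortcut method is one linear solve. A numpy sketch (the data points are a hypothetical illustration; `np.linalg.lstsq` gives the same answer):

```python
import numpy as np

# Least squares via the normal equations A^T A x = A^T b
# (assumes A has linearly independent columns).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x = np.linalg.solve(A.T @ A, A.T @ b)
p = A @ x   # p is the projection of b onto the column space of A
print(x)    # [ 5. -3.]

# The residual b - p is orthogonal to the column space:
assert np.allclose(A.T @ (b - p), 0)
```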


Finding Bases

 Row vectors, Method 1: place vectors as rows, RREF, take all nonzero rows
 Column vectors, Method 2: place vectors as columns, RREF, select the initial vectors that correspond to pivot columns

Extending Bases

1. Form matrix A using the vectors as rows
2. RREF(A) to get R
3. Get the non-pivot columns of R
4. Add a vector (0 … 1 … 0) with 1 at each non-pivot column position (one vector FOR EACH such position)

Dot Product

||u|| = √((u1)² + … + (un)²) = length of u
distance(u, v) = ||u − v||
||u − v||² = ||u||² + ||v||² − 2 ||u|| ||v|| cos(θ)

u · v = u^T v for column vectors; u · v = u v^T for row vectors.

For bases S & T both orthonormal, with P the transition matrix from S to T: P^-1 = P^T.

The exponent on D applies entry-wise to the diagonal: (D^m)ii = (dii)^m.

Orthogonal Diagonalization

P^T A P = D, where P is an orthogonal matrix.
A square matrix A is orthogonally diagonalizable IFF A is symmetric (A = A^T).

Linear Transformations

A linear transformation T : R^n → R^m satisfies T(cu + dv) = cT(u) + dT(v); if n = m, T is a linear operator.
Identity Mapping = I, Zero Mapping = 0.

Linear Transformation Properties

T(c1 u1 + … + cn un) = c1 T(u1) + … + cn T(un)

e.g. (checking that a map is NOT linear; reconstructed assuming T([x, y]) = [x², y]):
T([1,0] + [1,0]) = T([2,0]) = [4,0], but T([1,0]) + T([1,0]) = [1,0] + [1,0] = [2,0] ≠ [4,0], so T is not linear.


Orthogonality

u · v = 0 ⟺ u, v are perpendicular (orthogonal)

 Orthogonal set: vi · vj = 0 for i ≠ j
 Orthonormal set: orthogonal set with ||vi|| = 1 (normalize: divide each vector by its length)
 An orthogonal set (of nonzero vectors) is linearly independent

Characteristic Equation

det(λI − A) = 0

How to diagonalize

Eigenvalues: solve det(λI − A) = 0 for λ.
 Can use row operations to bring λI − A to triangular form to find the determinant
 If A is a triangular matrix, the diagonal entries are the eigenvalues
Eigenvectors: for each λ, solve (λI − A)x = 0.

Complete Determination

T(v) is completely determined by the images T(u1), …, T(un) of a basis:
1. Get the coordinates of v as a linear combination of the basis vectors
2. Apply T to each basis vector and multiply by the coordinates to get T(v)

Standard Matrices (based on the standard basis)

Let A be the standard matrix for T: A = [T(e1) … T(en)], so T(u) = Au.
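Building A column by column from the images of the standard basis can be sketched as follows (the transformation `T` — rotation by 90° — is a hypothetical example, not from the cheatsheet):

```python
import numpy as np

# Hypothetical example transformation: rotation by 90 degrees in R^2.
def T(v):
    x, y = v
    return np.array([-y, x])

# Standard matrix: columns are the images of the standard basis.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])
print(A)  # [[ 0. -1.] [ 1.  0.]]

u = np.array([3.0, 4.0])
assert np.allclose(A @ u, T(u))  # T(u) = Au for every u
```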

Characteristic polynomial: det(λI − A); setting det(λI − A) = 0 gives the characteristic equation.


T(0) = 0 = A0

Diagonalization

Orthogonal Matrices

Let A be a square matrix of order n; these are equivalent:
1. A is orthogonal (A^-1 = A^T)
2. The rows & columns of A form an orthonormal basis for R^n

 To show a matrix is orthogonal, prove A^T A = A A^T = I
 P^-1 = P^T ⟺ P is orthogonal
 For orthogonal diagonalization: after getting the eigenvectors, apply the Gram-Schmidt process and normalize them

P^-1 A P = D ⟺ A = P D P^-1, so A^m = P D^m P^-1

Au = λu

Dot Product Properties

1. u · v = v · u
2. (u + v) · w = u · w + v · w
3. (cu) · v = u · (cv) = c (u · v)
4. ||cu|| = |c| ||u||, where |c| is the absolute value
5. u · u ≥ 0, and u · u = 0 ⟺ u = 0

Characteristics

A = square matrix of order n; λ = eigenvalue, λ ∈ R; u = eigenvector of A associated with λ (Au = λu).


θ = cos^-1( (u · v) / (||u|| ||v||) )

Linear Systems

x = (general solution of Ax = 0) + (particular solution of Ax = b)

p = Ax, using any least squares solution x; p is the projection of b onto the column space of A.

Eigenvectors:
For each eigenvalue, substitute λ (call the resulting matrix B = λI − A) and do RREF([B | 0]).
 Each general solution gives the eigenvectors; they are always linearly independent & a basis for the eigenspace E_λ
 If # of eigenvectors ≠ the multiplicity (power) of λ in the characteristic polynomial, A is not diagonalizable

Finally…
 Let P = each eigenvector appended as a column
 D = diagonal matrix of the eigenvalues in the same order as P
 For a square matrix of order n, n distinct eigenvalues ⇒ diagonalizable
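The P and D construction can be checked with numpy, which returns eigenvalues and unit eigenvectors directly (replacing the hand RREF([B | 0]) step; the matrix below is a hypothetical example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)            # eigenvalues in the same order as P

assert np.allclose(A @ P, P @ D)                  # Au = λu, columnwise
assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P^-1

# Powers: A^m = P D^m P^-1; the exponent applies entrywise to D.
m = 3
assert np.allclose(np.linalg.matrix_power(A, m),
                   P @ np.diag(eigvals ** m) @ np.linalg.inv(P))
print(np.sort(eigvals))
```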

How to get the matrix (when T is given on a non-standard basis {a1, …, an})
1. RREF([a1 … an | e1 | … | en])
2. Each ei' (the i-th column of the right block after RREF) gives the combination of a1 … an that produces ei
3. Set T(ei) = the same combination applied to T(a1), …, T(an), to get each T(ei)
4. Transform each individual column
5. Append the T(ei) in order of i as the columns of the standard matrix

Compositions of Mappings

(T ∘ S)(u) = T(S(u)) = T(Au) = BAu

If A & B are the standard matrices for S and T respectively, BA is the standard matrix of T ∘ S.
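The order of the product (BA, not AB) is easy to confirm numerically (the matrices for S and T are hypothetical examples):

```python
import numpy as np

# Hypothetical standard matrices: A for S, B for T.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # S: a shear
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # T: swap coordinates

u = np.array([3.0, 4.0])
# (T ∘ S)(u) = T(S(u)) = B(Au) = (BA)u:
assert np.allclose(B @ (A @ u), (B @ A) @ u)
print(B @ A)  # the standard matrix of T ∘ S
```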

Ranges (AKA column space)

Let T be a linear transformation and A the standard matrix of T.
R(T) is the set of images of T (the part of the codomain that T maps onto), represented as a span / the column space of A.
Rank(T) = dim(R(T)) = dim(col(A)) = rank(A)

Kernels (AKA null space, AKA solutions of the homogeneous equations)

The set of vectors whose image is 0: Ker(A) = {u | Au = 0}
 Ker(A) = null space of A
 dim(Ker(A)) = nullity(A) = dim(null(A))

rank(T) + nullity(T) = rank(A) + nullity(A) = n

Some Proofs

 Prove Inverse: show AB = I (and BA = I)
 Prove Symmetric: show A = A^T
 Prove Span/Subspace: show it contains 0 and is closed under addition and scalar multiplication

