Linear Algebra - Formula Sheet

Author: Nguyên Trần Hiếu Phương
Course: Linear Alg/Mat
Institution: Austin Community College District


Operations on one matrix

Solving linear systems

Solving systems with substitution
1. Get a variable by itself in one of the equations.
2. Take the expression you got for the variable in step 1 and substitute it (using parentheses) into the other equation.
3. Solve the equation in step 2 for the remaining variable.
4. Use the result from step 3 and plug it into the equation from step 1 to find the other variable.

Solving systems with elimination
1. If necessary, rearrange both equations so that the x-terms come first, followed by the y-terms, the equals sign, and the constant term (in that order). If an equation appears to have no constant term, that means that the constant term is 0.
2. Multiply one (or both) equations by a constant that will allow either the x-terms or the y-terms to cancel when the equations are added or subtracted (when their left sides and their right sides are added separately, or when their left sides and their right sides are subtracted separately).
3. Add or subtract the equations to eliminate that variable.
4. Solve the resulting equation for the remaining variable.
5. Plug the result of step 4 into one of the original equations and solve for the other variable.

Graphing method
1. Solve for y in each equation.
2. Graph both equations on the same Cartesian coordinate system.
3. Find the point of intersection of the lines (the point where the lines cross).
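These steps can be sanity-checked numerically. Below is a minimal numpy sketch, not part of the original sheet; the example system x + 2y = 5, 3x − y = 1 is made up for illustration.

import numpy as np

# coefficient matrix and constant vector for the system x + 2y = 5, 3x - y = 1
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

# same answer that substitution, elimination, or graphing would give: x = 1, y = 2
print(np.linalg.solve(A, b))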

Matrix dimensions and entries
Dimensions are given as "rows × columns".

K =
[ k1,1  k1,2  k1,3 ]
[ k2,1  k2,2  k2,3 ]

Augmented matrix

M =
[ m1,1  m1,2  m1,3  m1,4 | C1 ]
[ m2,1  m2,2  m2,3  m2,4 | C2 ]

Pivot column: Any column that houses a pivot.

Echelon forms

Row-echelon form (ref)
1. All the pivot entries are equal to 1.
2. Any row(s) that consist of only 0s are at the bottom of the matrix.
3. The pivot in each row sits in a column to the right of the column that houses the pivot in the row above it. In other words, the pivot entries sit in a staircase pattern, where they stair-step down from the upper left corner to the lower right corner of the matrix.

Reduced row-echelon form (rref)
1. All the pivot entries are equal to 1.
2. Any row(s) that consist of only 0s are at the bottom of the matrix.
3. The pivot in each row sits in a column to the right of the column that houses the pivot in the row above it. In other words, the pivot entries sit in a staircase pattern, where they stair-step down from the upper left corner to the lower right corner of the matrix.
4. Each pivot is the only non-zero entry in its column.

Row-reducing a matrix to rref
1. Optional: Pull out any scalars from each row in the matrix.
2. If the first entry in the first row is 0, swap it with another row that has a non-zero entry in its first column. Otherwise, move to step 3.
3. Multiply through the first row by a scalar to make the leading entry equal to 1.
4. Add scaled multiples of the first row to every other row in the matrix until every entry in the first column, other than the 1 in the first row, is a 0.
5. Go back to step 2 and repeat the process with the next row until the matrix is in reduced row-echelon form.
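As a rough illustration of these steps (this sketch is not from the original sheet; the function name rref and the tolerance handling are my own assumptions):

import numpy as np

def rref(M, tol=1e-12):
    """Row-reduce a copy of M to reduced row-echelon form."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # step 2: find a row with a non-zero entry in this column and swap it up
        nonzero = np.where(np.abs(A[pivot_row:, col]) > tol)[0]
        if nonzero.size == 0:
            continue
        swap = pivot_row + nonzero[0]
        A[[pivot_row, swap]] = A[[swap, pivot_row]]
        # step 3: scale the pivot row so the pivot entry is 1
        A[pivot_row] = A[pivot_row] / A[pivot_row, col]
        # step 4: clear every other entry in the pivot column
        for r in range(rows):
            if r != pivot_row:
                A[r] = A[r] - A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

# augmented matrix for x + 2y = 5, 3x - y = 1
print(rref(np.array([[1.0, 2.0, 5.0],
                     [3.0, -1.0, 1.0]])))   # [[1. 0. 1.] [0. 1. 2.]]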

Number of solutions to the system

One unique solution:

[ 1  0  0 | a ]
[ 0  1  0 | b ]
[ 0  0  1 | c ]

No solutions (the last row says 0 = c for some non-zero c):

[ 1  0  0 | a ]
[ 0  1  0 | b ]
[ 0  0  0 | c ], with c ≠ 0

Operations on two matrices

Matrix addition
• Matrix dimensions must be identical
• Add corresponding matrix entries to find the sum
• Matrix addition is commutative and associative

Matrix subtraction
• Matrix dimensions must be identical
• Subtract corresponding matrix entries to find the difference
• Matrix subtraction is not commutative and not associative

Scalar multiplication
Scalar: a constant that gets multiplied by every entry in the matrix

Matrix multiplication
The number of columns in the first matrix must match the number of rows in the second matrix. In the product of A and B, with

A =
[ a1,1  a1,2 ]
[ a2,1  a2,2 ]

and B =
[ b1,1  b1,2 ]
[ b2,1  b2,2 ]

the entries of AB are
(AB)1,1 is the product of the first row and first column
(AB)2,1 is the product of the second row and first column
(AB)1,2 is the product of the first row and second column
(AB)2,2 is the product of the second row and second column

The dimensions of the product of any A and B are the rows of the first matrix by the columns of the second matrix. Matrix multiplication is associative and distributive, but not commutative.
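A quick numerical check of the entry-by-entry rule above; this is only a sketch, and the particular 2 × 2 matrices are made up.

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# (AB)[i, j] is the product (dot product) of row i of A with column j of B
AB = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
               for i in range(A.shape[0])])

print(AB)           # [[19 22] [43 50]]
print(A @ B)        # numpy's built-in matrix product agrees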

Identity matrix: A matrix with 1 entries along the main diagonal and 0 for every other entry

I2 =
[ 1  0 ]
[ 0  1 ]

I3 =
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]

I4 =
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
[ 0  0  0  1 ]

The product of the identity matrix and a matrix A:
IA = A, but I must have the same number of columns as A has rows
AI = A, but I must have the same number of rows as A has columns

Matrices as vectors

Vectors
Vector: A vector has two pieces of information contained within it:
1. the direction in which the vector points, and
2. the magnitude of the vector, which is just the length of the vector.

Row and column vectors

Sketching vectors
Initial point: The point where the vector begins
Terminal point: The point where the vector ends
Standard position: A vector sketched in standard position has its initial point at the origin

Vector operations
Sum of vectors: a + b = (a1, a2) + (b1, b2) = (a1 + b1, a2 + b2)
Difference of vectors: a − b = (a1, a2) − (b1, b2) = (a1 − b1, a2 − b2)

Unit vectors and basis vectors
Unit vector: a vector with length 1

u = (1 / ||v||) v

Vector length: ||v|| = √(v1² + v2² + v3² + . . . + vn²)

For ℝ²: i = (1, 0) and j = (0, 1)
For ℝ³: i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1)
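A short sketch of the length and unit-vector formulas above (the sample vector is arbitrary):

import numpy as np

v = np.array([3.0, 4.0])            # sample vector; its length should be 5
length = np.sqrt(np.sum(v**2))      # sqrt(v1^2 + v2^2 + ... + vn^2)
u = v / length                      # unit vector pointing in the same direction as v

print(length)                       # 5.0
print(u)                            # [0.6 0.8]
print(np.linalg.norm(u))            # 1.0, as a unit vector should be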

Linear combinations and span
Linear combination: The sum of scaled vectors
Span of a vector set: The collection of all vectors which can be represented by linear combinations of the set.

Linear dependence and independence
Linear dependence: A set of vectors is linearly dependent when any vector in the set can be represented by a linear combination of the other vectors in the set.
Collinear: Parallel vectors, or vectors that lie along the same line, are collinear.
Coplanar: Parallel vectors, or vectors that lie in the same plane, are coplanar.

Linear independence: If the only solution to c1 v1 + c2 v2 + . . . + cn vn = O is c1 = c2 = . . . = cn = 0, then the set V = {v1, v2, . . . vn} is linearly independent. If any other solution exists, then V is linearly dependent.
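One way to apply this test numerically (not from the original sheet): stack the vectors as columns and compare the rank to the number of vectors; full column rank means only the trivial solution c = 0 exists. The example vectors are made up.

import numpy as np

# columns are the vectors being tested; here v3 = v1 + v2 on purpose
V = np.column_stack([(1, 0, 2), (0, 1, 1), (1, 1, 3)])

# full column rank <=> only c1 = c2 = c3 = 0 solves c1 v1 + c2 v2 + c3 v3 = O
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)    # False: the set is linearly dependent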

Linear subspaces
Subspace: A linear subspace always
1. includes the zero vector,
2. is closed under scalar multiplication, and
3. is closed under addition.

Possible subspaces
For ℝⁿ:
1. ℝⁿ is a subspace of ℝⁿ.
2. Any line through the origin is a subspace of ℝⁿ.
3. Any plane through the origin, when the plane is defined in fewer dimensions than n, is a subspace of ℝⁿ.
4. The zero vector in ℝⁿ is a subspace of ℝⁿ.

Span: The span of a vector set is all the linear combinations of the vectors in the set.
Span as a subspace: A span is always a subspace.

Basis
Basis: If you have a basis for a space, it means you have enough vectors to span the space, but not more than you need. So a vector set forms a basis for a space if it
1. spans the space, and
2. is linearly independent.

Standard basis: In ℝ², the standard basis vectors are i = (1, 0) and j = (0, 1).
In ℝ³, the standard basis vectors are i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1).

Dot products and cross products

u ⋅ v = (u1, u2) ⋅ (v1, v2) = u1v1 + u2v2

Vector length and the dot product: The square of the length of a vector is equal to the vector dotted with itself: ||u||² = u ⋅ u

Properties of dot products:
Commutative: u ⋅ v = v ⋅ u
Distributive: (u + v) ⋅ w = u ⋅ w + v ⋅ w and (u − v) ⋅ w = u ⋅ w − v ⋅ w
Associative: (c u) ⋅ v = c(v ⋅ u)
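A quick numerical check of the dot-product formula and the length identity ||u||² = u ⋅ u; the vectors below are arbitrary.

import numpy as np

u = np.array([1.0, 3.0])
v = np.array([4.0, -2.0])

print(u[0]*v[0] + u[1]*v[1])                 # u1 v1 + u2 v2 = -2.0
print(np.dot(u, v))                          # -2.0, matches
print(np.dot(u, u), np.linalg.norm(u)**2)    # ||u||^2 = u . u (both 10, up to round-off)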

Cauchy-Schwarz inequality
|u ⋅ v| ≤ ||u|| ||v||

The two sides are unequal when the vectors are linearly independent.

Vector triangle inequality
The length of the sum of two vectors will always be less than or equal to the sum of the lengths of the vectors.
||u + v|| ≤ ||u|| + ||v||

Angle between vectors
Angle between vectors: u ⋅ v = ||u|| ||v|| cos θ
Perpendicular vectors: When vectors are perpendicular (or orthogonal), their dot product is 0: u ⋅ v = 0
Angle with the zero vector:
1. The zero vector is orthogonal to every non-zero vector.

Plane: A plane is a perfectly flat surface that goes on forever in every direction in three-dimensional space. It's the set of all vectors that are perpendicular (orthogonal) to one given normal vector, which is a vector that's perpendicular (orthogonal) to the plane.
Standard equations of a plane:
Ax + By + Cz = D
a(x − x0) + b(y − y0) + c(z − z0) = 0
when the normal vector is n = (A, B, C), or n = (a, b, c), and (x0, y0, z0) is a point in the plane.

Cross product
Cross product: The cross product a × b is orthogonal to both of the original vectors, a = (a1, a2, a3) and b = (b1, b2, b3).

a × b = | i   j   k  |
        | a1  a2  a3 |
        | b1  b2  b3 |

a × b = i | a2  a3 |  −  j | a1  a3 |  +  k | a1  a2 |
          | b2  b3 |       | b1  b3 |       | b1  b2 |

a × b = i(a2b3 − a3b2) − j(a1b3 − a3b1) + k(a1b2 − a2b1)

Length of the cross product: ||a × b|| = ||a|| ||b|| sin θ
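The component formula can be checked against numpy's built-in cross product; the vectors below are arbitrary.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

cross = np.array([a[1]*b[2] - a[2]*b[1],       # i component
                  -(a[0]*b[2] - a[2]*b[0]),    # j component
                  a[0]*b[1] - a[1]*b[0]])      # k component

print(cross)                                # [-3.  6. -3.]
print(np.cross(a, b))                       # agrees
print(np.dot(cross, a), np.dot(cross, b))   # both 0: the result is orthogonal to a and b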

Right-hand rule

Dot product vs. cross product
The more two vectors point in the same direction, the larger their dot product.

The closer two vectors are to perpendicular, the longer their cross product.

Matrix-vector products
In A x, x must be a column vector. In x A, x must be a row vector.

Null space
The null space is always a subspace: It's closed under scalar multiplication and closed under addition.
Null space of the matrix A: All the vectors x that satisfy A x = O. N(A) = N(rref(A)).
Linear independence: The columns of A are linearly independent when N(A) contains only the zero vector.

Column space
For a matrix A = [v1 v2 v3 . . . vn] with column vectors v1, v2, v3, . . . vn, the column space is C(A) = Span(v1, v2, v3, . . . vn).

Solving A x  = b



General solution: The general solution, also called the complete solution, is the sum of the complementary and particular solutions, x = xc + xp.
Complementary solution: Any x that satisfies A x = O.
Particular solution: Any x that satisfies A x = b.

Dimensionality, nullity, and rank
Dimension of a vector space: The number of basis vectors required to span that space.
Nullity: The dimension of the null space of a matrix A is also called the nullity of A, and can be written as either Dim(N(A)) or nullity(A). It's equal to the number of free variables in the system.
Rank: The dimension of the column space of a matrix A is also called the rank of A, and can be written as either Dim(C(A)) or rank(A). It's equal to the number of pivot columns in the system.

Transformations

Functions and transformations
Function: A rule that maps one value to another.
Vector-valued function: A function defined in terms of vectors.
Transformation: Maps vectors from one space to another.
Domain: The space ℝᵐ that's being mapped from.
Codomain: The space ℝⁿ that's being mapped to.
Range: The specific vectors within ℝⁿ that are being mapped to in the codomain.

Image of the subset
Subset: The vector set that's being transformed.
Preimage: The vector set before the transformation is applied.
Image: The vector set after the transformation is applied.
Kernel of the transformation: All of the vectors that result in the zero vector under the transformation T.

Linear transformations
A transformation T : ℝⁿ → ℝᵐ is a linear transformation if, for any vectors u and v that are both in ℝⁿ, and for any scalar c that's also a real number,
• the transformation of their sum is equivalent to the sum of their individual transformations, T(u + v) = T(u) + T(v), and
• the transformation of a scalar multiple of the vector is equivalent to the product of the scalar and the transformation of the vector, T(c u) = cT(u) and T(c v) = cT(v).

Rotation matrix
In ℝ²:

Rotθ = [ cos θ  −sin θ ]
       [ sin θ   cos θ ]

In ℝ³:

Rotθ around x = [ 1    0       0    ]
                [ 0  cos θ  −sin θ  ]
                [ 0  sin θ   cos θ  ]

Rotθ around z = [ cos θ  −sin θ  0 ]
                [ sin θ   cos θ  0 ]
                [   0       0    1 ]

Rotations are linear transformations:
Rotθ(u + v) = Rotθ(u) + Rotθ(v)
Rotθ(c u) = c Rotθ(u)
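A minimal sketch (not from the original sheet) that builds the 2D rotation matrix and applies it; rotating (1, 0) by 90° should land on approximately (0, 1).

import numpy as np

def rot2(theta):
    """2D rotation matrix for an angle theta, in radians."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
print(rot2(np.pi / 2) @ v)    # approximately [0. 1.]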

Modifying transformations
Given transformations S(x) : ℝⁿ → ℝᵐ as S(x) = A x and T(x) : ℝⁿ → ℝᵐ as T(x) = B x, where A and B are m × n matrices,
• The sum of the transformations is (S + T)(x) = (A + B) x
• The scaled transformation is cT(x) = c(B x) = (cB) x

Projections
The projection of v onto L, where L is given as all scaled versions of a vector x:

ProjL(v) = ((v ⋅ x) / (x ⋅ x)) x

ProjL(v) = A v = [ u1²   u1u2 ] v
                 [ u1u2  u2²  ]

when x is normalized to the unit vector u.

Projections are linear transformations
ProjL(a + b) = ProjL(a) + ProjL(b)
ProjL(c a) = c ProjL(a)

Compositions of transformations
Compositions are linear transformations, which are closed under addition and closed under scalar multiplication.
T ∘ S(x + y) = T ∘ S(x) + T ∘ S(y)
T(S(c x)) = cT(S(x))
Compositions as matrix-vector products: T ∘ S(x) = T(S(x)) = T(A x) = BA x = C x

Ix( x ) = x 

Inverses

Inverse transformations
Surjective: If every vector b in B is being mapped to, then T is surjective, or onto.
Injective: If every a maps to a unique b, then T is injective, or one-to-one.
Invertible transformations: A transformation is invertible if, for every b in B, there's a unique a in A such that T(a) = b.
Inverse transformations are linear transformations, which are closed under addition and closed under scalar multiplication.
T⁻¹(u + v) = T⁻¹(u) + T⁻¹(v)
T⁻¹(c u) = cT⁻¹(u)

Inverse matrices
Only square matrices can be invertible. Given a transformation T(x) = M x, finding the inverse matrix M⁻¹ requires that
• the transformation T maps ℝⁿ → ℝⁿ, and that
• T is invertible.

Determinant formula for the inverse matrix, with

M = [ a  b ]
    [ c  d ]

M⁻¹ = (1 / |M|) [  d  −b ]
                [ −c   a ]

When Det(M) = |M| = ad − bc = 0, the matrix is singular, which means it's not invertible. When Det(M) = |M| = ad − bc ≠ 0, the matrix is invertible.
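A minimal sketch of the 2 × 2 formula with the singular-matrix check; the function name inverse_2x2 and the sample matrix are made up.

import numpy as np

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the (1/(ad - bc)) formula; None if singular."""
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    if det == 0:
        return None                    # |M| = 0: singular, not invertible
    return (1.0 / det) * np.array([[ d, -b],
                                   [-c,  a]])

M = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(inverse_2x2(M))         # [[ 3. -1.] [-5.  2.]]
print(M @ inverse_2x2(M))     # the identity matrix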

Determinants

Rule of Sarrus for 3 × 3 matrices
Given

A = [ a  b  c ]
    [ d  e  f ]
    [ g  h  i ]

the determinant is |A| = aei + bfg + cdh − afh − bdi − ceg

Cramer's rule
For the system
a1x + b1y = d1
a2x + b2y = d2
the solution is given by
x = Dx / D, y = Dy / D, with D ≠ 0
where

D = | a1  b1 |    Dx = | d1  b1 |    Dy = | a1  d1 |
    | a2  b2 |         | d2  b2 |         | a2  d2 |
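Cramer's rule can be sketched directly from these determinant quotients; the coefficients below (the same made-up system x + 2y = 5, 3x − y = 1 used earlier) are for illustration only.

import numpy as np

# a1 x + b1 y = d1,  a2 x + b2 y = d2
a1, b1, d1 = 1.0, 2.0, 5.0
a2, b2, d2 = 3.0, -1.0, 1.0

D  = np.linalg.det(np.array([[a1, b1], [a2, b2]]))
Dx = np.linalg.det(np.array([[d1, b1], [d2, b2]]))
Dy = np.linalg.det(np.array([[a1, d1], [a2, d2]]))

print(Dx / D, Dy / D)    # x = 1.0, y = 2.0 (valid only because D is nonzero)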

Determinant rules
Multiplying a row by a scalar: Multiplying a row of the matrix by a scalar k requires that we multiply the determinant by the same scalar: Det(B) = |B| = k|A|.
Swapped rows: When two rows of a matrix are swapped, the determinant must be multiplied by −1.
Duplicate rows: When two rows in a matrix are identical, the determinant will be 0, which means the matrix is singular, and not invertible.

Main diagonal: The main diagonal of a matrix is made of the entries that run from the upper left corner of the matrix down to the lower right corner of the matrix.
Upper triangular matrix: A matrix is upper triangular when all the entries below the main diagonal are 0.
Lower triangular matrix: A matrix is lower triangular when all the entries above the main diagonal are 0.
Determinant: The determinant of upper and lower triangular matrices is the product of the entries in the main diagonal.

Determinants to find area
The area of the parallelogram formed by v1 = (a, c) and v2 = (b, d), where

A = [ a  b ]
    [ c  d ]

is given by Area = |Det(A)|. If a figure f is transformed by T into a figure g, then the area of g is Areag = |Areaf ⋅ Det(T)|.

Transposes

The transpose: The transpose Aᵀ of a matrix A is simply the matrix you get when you swap all the rows and columns.
The determinant of the transpose: |A| = |Aᵀ|.
Transpose of the transpose: (Aᵀ)ᵀ = A
Transpose of a matrix product: (XY)ᵀ = YᵀXᵀ
Transpose of a matrix sum: (X + Y)ᵀ = Xᵀ + Yᵀ
Transpose of a matrix inverse: (Xᵀ)⁻¹ = (X⁻¹)ᵀ
Invertibility of the product: AᵀA is invertible if the columns of A are linearly independent.

Row space and left null space
The row space is the span of the row vectors of A (the span of the column vectors of Aᵀ), and the left null space is the vector set that satisfies xᵀA = Oᵀ.

Subspace              Symbol   Space   Dimension
Column space of A     C(A)     ℝᵐ      Dim(C(A)) = rank(A)
Null space of A       N(A)     ℝⁿ      Dim(N(A)) = nullity(A)

Orthogonality and change of basis

Orthogonal complements
If a set of vectors V is a subspace of ℝⁿ, then the orthogonal complement of V, called V⊥, is a set of vectors where every vector in V⊥ is orthogonal to every vector in V.

V⊥ = { x ∈ ℝⁿ | x ⋅ v = 0 for every v ∈ V }

The orthogonal complement is a subspace, which means it's closed under addition and closed under scalar multiplication.
Complement of the complement: (V⊥)⊥ = V

Orthogonality of the fundamental subspaces
The null space N(A) and row space C(Aᵀ) are orthogonal complements, N(A) = (C(Aᵀ))⊥, or (N(A))⊥ = C(Aᵀ).
The left null space N(Aᵀ) and column space C(A) are orthogonal complements, N(Aᵀ) = (C(A))⊥, or (N(Aᵀ))⊥ = C(A).

Projection onto a subspace
The projection of x onto a subspace V is a linear transformation that can be written as the matrix-vector product
ProjV x = A(AᵀA)⁻¹Aᵀ x
where A is a matrix whose columns are the basis vectors of the subspace V.

Least squares solution
AᵀA x* = Aᵀ b
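A sketch of the normal equation AᵀA x* = Aᵀ b and of projecting b onto the column space; the overdetermined system below is made up for illustration.

import numpy as np

# three equations, two unknowns: generally no exact solution
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

x_star = np.linalg.solve(A.T @ A, A.T @ b)    # least squares solution from A^T A x* = A^T b
proj_b = A @ x_star                           # projection of b onto C(A), i.e. A (A^T A)^-1 A^T b

print(x_star)                                 # about [1.167 0.5]
print(proj_b)
print(np.linalg.lstsq(A, b, rcond=None)[0])   # numpy's least squares routine agrees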

Orthonormal bases
Orthonormal basis: An orthonormal basis is a basis in which every vector in the basis is both 1 unit in length and orthogonal to every other vector in the basis.
Orthogonal matrix: A square matrix whose columns form an orthonormal set.
Orthonormal matrix: A rectangular matrix whose columns form an orthonormal set.
Projection onto an orthonormal basis: ProjV x = AAᵀ x

Gram-Schmidt process
Given V = Span(v1, v2, v3, . . . vn),
1. Normalize v1 to u1, to make the basis V = Span(u1, v2, v3, . . . vn).
2. Find w2 = v2 − (v2 ⋅ u1)u1, then normalize w2 to u2, to make V = Span(u1, u2, v3, . . . vn).
3. Find w3 = v3 − [(v3 ⋅ u1)u1 + (v3 ⋅ u2)u2], then normalize w3 to u3, and continue the same way until the basis is V = Span(u1, u2, u3, . . . un), an orthonormal basis.
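A compact sketch of the process above; the input vectors are made up and assumed to be linearly independent.

import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= (v @ u) * u                   # subtract the projection onto each earlier u
        basis.append(w / np.linalg.norm(w))    # normalize w to get the next u
    return basis

vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
for u in gram_schmidt(vecs):
    print(u, np.linalg.norm(u))    # each basis vector has length 1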

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors
Any vector v that satisfies T(v) = λv is an eigenvector for the transformation T, and λ is the eigenvalue that's associated with the eigenvector v.
A v = λ v for nonzero vectors v when |λIn − A| = 0.
λ is an eigenvalue of A when |λIn − A| = 0.
Trace: the sum of the entries along the main diagonal, Trace(A) = sum of A's eigenvalues
Determinant: |A| = product of A's eigenvalues
Eigenspace: Eλ = N(λIn − A)
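A quick numerical check of the trace and determinant relationships; the matrix is arbitrary.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of eigenvectors are the eigenvectors
print(eigenvalues)                             # [3. 1.]
print(np.trace(A), np.sum(eigenvalues))        # trace = sum of eigenvalues
print(np.linalg.det(A), np.prod(eigenvalues))  # determinant = product of eigenvalues

for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))         # A v = lambda v holds for each pair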

