MATH125 Final Exam Note Sheet
Course: Elementary Linear Algebra
Institution: University of Illinois at Chicago

MATH125 Final Exam Sections: 1.3-2.7, 3.1-3.4, 3.6, 4.1-4.5, 5.1-5.3

Definitions

Linear- all variables have degree at most 1
System of linear inequalities- a collection of more than one linear inequality
Feasibility region- the region determined by simultaneously solving the system of inequalities

A system of exactly 2 linear inequalities always has an infinite feasibility region; for a system of exactly 3 linear inequalities, this is sometimes true and sometimes false

Half Plane- the region defined by a linear inequality
Matrix- a table of numbers whose entries represent coefficients in equations
Gaussian Elimination- procedure for putting a matrix into Row Echelon Form (REF)
Multisystem- a collection of linear systems with the same coefficient matrix but different right-hand sides
Consistent System- has 1 or more solutions
Inconsistent System- has no solutions
Row rank- the row rank of a matrix is the number of non-zero rows the matrix has after you put it in REF
Parameters- when a linear system has infinitely many solutions, you can write some of the variables in terms of the others; parameters occur in columns that have no leading 1's
Parametric Solution- the solution set of the system written in terms of the parameters

When we express the infinite solution set of a linear system in terms of an independent variable, the set is a parametric solution

Homogeneous System- one where the right-hand side of every constraint is 0
Simplex Algorithm- an algebraic method for finding the maximum in a linear program
Pivoting- the procedure of choosing an entry in the matrix, turning it into a 1, and then annihilating (turning to zero) all the entries above and below that 1
Slack Variable- an extra variable added to an inequality to make the constraint an equality
Gauss-Jordan Elimination- procedure for putting a matrix into Reduced Row Echelon Form (RREF)

Vector- an ordered n-tuple of numbers
Parallel- two non-zero vectors are parallel if one is a scalar multiple of the other
Position Vector- a vector that starts at the origin
Dot Product- u·v = u1v1 + … + unvn
Linear Combination- let v1, …, vk be vectors in R^n; a vector v is called a linear combination of v1, …, vk if v = c1v1 + … + ckvk, where each ci is a constant (scalar)
Orthogonal- perpendicular; u·v = 0
Leontief Open Model- a method for input-output analysis (how the output of one industry is used as the input of another)
Consumption Matrix- represents the cost per $1 to run several companies or industries in an economy (i.e. the cost of raw materials to produce $1 of goods)
Profitable Industry- an industry is profitable if it costs less than $1 in raw materials to make $1 worth of product
Productive Economy- an economy is productive if, given any demand, there is a production schedule that meets the demand
Markov Chain (Process)- a process in which the probability of the system being in a particular state at a specific time depends only on the state of the system at the preceding time
Transition Matrix- a matrix whose entries represent the probability of moving from one state to another
Stochastic Matrix/Vector- a matrix or vector where all the entries are non-negative and every column sums to 1 (ex. a transition matrix)
Stable Vector- a stochastic vector S associated with a transition matrix T is stable if T S = S; S is called the equilibrium of the model

Stable Matrix- the stable matrix associated with a Markov chain is the square matrix where every column is the stable vector of the transition matrix of the process (assuming the stable vector exists and there is only one)
Determinant- the determinant of a square matrix is a real number, calculated from the entries of the matrix, that tells us whether the matrix is invertible
(i, j)-Minor (Mij)- the determinant of the submatrix formed by deleting the ith row and jth column
(i, j)-Cofactor (Aij)- Aij = (-1)^(i+j) · Mij; the determinant of a matrix is computed by "cofactor expansion"
Triangular Matrix- a square matrix where all the entries below the diagonal are 0 (upper triangular) or all the entries above the diagonal are 0 (lower triangular)
Diagonal Matrix- a square matrix where every entry off the diagonal is zero

Span- if S = {v1, v2, …, vk} is a set of vectors in R^n, then the span of S, span(S), is the set of all linear combinations of the vectors in S
Subspace of R^n- a non-empty subset V of R^n that is closed under vector addition and scalar multiplication (see the closure conditions later in this sheet)
Contract Space- the set of all possible contracts the company can satisfy; the contract space is closed under scalar multiplication and vector addition
Null Space- the null space of a matrix A, nul(A), is the set of all solutions to the homogeneous system A x = 0
Column Space of a matrix- col(A) = the set of all linear combinations of the columns of A
Linearly Dependent- let S = {v1, v2, …, vk}; we say S is linearly dependent if there exist constants c1, c2, …, ck, not all zero, such that c1v1 + c2v2 + … + ckvk = 0
Linearly Independent- if whenever c1v1 + c2v2 + … + ckvk = 0, we must have c1 = c2 = … = ck = 0 (no vector in S is a linear combination of the others)
Basis for a Subspace V- a finite set of vectors {v1, v2, …, vk} is a basis for V if
1) The set spans V
2) The set is linearly independent
Unit Vector- a vector whose length is 1

Normalizing (Normalization)- for any non-zero vector v, we can normalize v into a unit vector u pointing in the same direction: u = v/|v|
Orthonormal set of vectors- a set of vectors {v1, v2, …, vk} is orthonormal if the vectors are pairwise orthogonal and every vector is a unit vector
Projection- the projection of a vector u onto another vector v:
proj_v(u) = ((u·v)/(v·v)) v
Normal Equations- the normal equations associated with a linear system A x = b are A^T A x = A^T b; A^T A is the new coefficient matrix and A^T b is the new right-hand-side vector

Initial Simplex Table
1) 1st row: the objective function in standard form
2) All inequalities converted to equalities using "slack variables"

2 Planes Interacting in R^3 (3D)
Parallel planes- no solution (contradiction)
2 Intersecting planes- the planes intersect in a line; we expect infinitely many solutions, all of which are points on a line
Overlapping planes- the intersection is a plane; expect infinitely many solutions, all of which are points in a plane

Interaction of 3 Planes in R^3
No solutions -
- 3 parallel
- 2 parallel, 1 intersecting
- 2 intersecting, 1 parallel to the line of intersection
- 2 overlapping, 1 parallel
Infinitely many solutions -
- 3 overlapping (2 parameters in the parametric solution)
- all 3 intersecting in the same line (1 parameter in the parametric solution)
- 2 overlapping, 1 intersecting (the intersection has the shape of a line)
Intersecting at a single point -
- 2 intersect in a line, and the third cuts that line of intersection at a single point

Standard Form: Ax + By + Cz = D
Intercept Form: x/a + y/b + z/c = 1
Recall that the purpose of putting a matrix into row echelon form is to be able to read off equations which can easily be solved with backward substitution.

4 Properties of Row Echelon Form
1) The first non-zero entry in each row is a 1 (leading 1)
2) All entries below a leading 1 are 0 (clearing the column)
3) As you move down the rows, the leading 1's move to the right
4) Any row with all-zero entries is at the bottom of the matrix

Reduced Row Echelon Form
 1's along the diagonal
 0's above and below them

Procedure for moving a matrix into RREF
1) First put it in REF using Gaussian Elimination
2) Move from the lower-right leading 1 to the upper left, using leading 1's to eliminate the entries above them

Operations Allowed on Matrices
 Multiply a row by a non-zero number
 Add/subtract one row to another
 Swap two rows
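A minimal Gauss-Jordan sketch of this procedure (the function name rref and the example matrix are illustrative, not from the course):

```python
import numpy as np

def rref(M, tol=1e-12):
    """Gauss-Jordan elimination sketch: returns the RREF of M using only
    the three allowed row operations (scale, add a multiple, swap)."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0  # current pivot row
    for c in range(cols):
        # Find a row at or below r with a non-zero entry in column c.
        pivot = next((i for i in range(r, rows) if abs(A[i, c]) > tol), None)
        if pivot is None:
            continue  # no leading 1 in this column -> parameter column
        A[[r, pivot]] = A[[pivot, r]]   # swap rows
        A[r] = A[r] / A[r, c]           # scale to make a leading 1
        for i in range(rows):           # clear the column above and below
            if i != r:
                A[i] = A[i] - A[i, c] * A[r]
        r += 1
        if r == rows:
            break
    return A

# Example: augmented matrix [A | b]
print(rref(np.array([[1, 2, 3], [4, 5, 6]])))
```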

Row Echelon Form of a matrix happens when
1. 1's along the diagonal (starting from the upper left)
2. The lower triangle is all 0

Procedure for finding a Parametric Solution
1) Put the matrix in RREF
2) Write out the equations
3) Identify "parameters"  columns where there are no leading 1's
4) Express the remaining variables in terms of the parameters
ex) parameters t, s, r, u
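A sketch of steps 1)-3) using SymPy, whose Matrix.rref returns the RREF together with the pivot columns (the matrix below is made up):

```python
from sympy import Matrix

# Augmented matrix [A | b] for a system with infinitely many solutions
# (values are illustrative only).
M = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])

R, pivot_cols = M.rref()  # returns (RREF matrix, tuple of pivot columns)
print(R)           # already in RREF here
print(pivot_cols)  # (0, 2): column 1 has no leading 1 -> x2 is a parameter
# Reading off: x1 = 3 - 2t, x2 = t, x3 = 4
```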

Requirements for using this Algorithm (the Simplex Algorithm)
1) The linear program must be a maximization
2) Constraint inequalities must have the form (linear expression) ≤ (positive number)
3) All variables must be non-negative

Linear Theorem
Let z = ax + by be a linear function and let R be a finite polygonal region in the plane. Then the min/max value of z over the region R is attained at the corner points of R.

Consistency Theorem
Your system is consistent if and only if the row rank of the coefficient matrix equals the row rank of the augmented matrix.

Homogeneous Systems Theorem
Homogeneous systems are always consistent (they always have at least the trivial solution (0, 0, …, 0)). Beyond the trivial solution:
1) If the row rank of the coefficient matrix equals the number of variables  then the system has only the trivial solution
2) If the row rank of the coefficient matrix is less than the number of variables  then it has infinitely many solutions (i.e. not all columns can have leading 1's, so free variables are present)
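A sketch of the Consistency Theorem check using ranks (illustrative values; np.linalg.matrix_rank plays the role of row rank):

```python
import numpy as np

# Illustrative system A x = b.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([[3.0],
              [6.0]])

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.hstack([A, b]))

# Consistency Theorem: consistent  <=>  rank(A) == rank([A | b])
print(rank_A, rank_aug, rank_A == rank_aug)  # 1 1 True -> consistent
```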

The Simplex Algorithm
1) Choose a pivot column  choose the column that has the most negative entry in the top row
2) Pick an entry to pivot: all positive entries are candidates for pivoting  for each positive entry, divide it into the last column
3) Pivot on that entry
4) Repeat until all entries in the top row are positive
5) The maximum z-value appears in the top right corner
(See the solver sketch after the dot product properties below.)

Procedure for finding a Basic Feasible Solution
1) Find all columns of this form: the column entries are all 0, except for a single 1
2) Define the variable for that column as the number in the solution column in the row containing the 1
3) Set all other variables to 0

3 Ways to describe lines mathematically
1. y = mx + b
2. Parametric form
   a. {(p1 + t·v1, …, pn + t·vn) | t ∈ R}
3. Vector equations
   a. x = P0 + t·v
      i. P0 is the initial point
      ii. v describes the direction of the line

Properties of the Dot Product
1. Commutative: u·v = v·u
2. Distributive: u·(v + w) = u·v + u·w
3. Associative with scalars: h(u·v) = (hu)·v = u·(hv)
4. 0·v = 0
5. u·u = |u|^2
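To sanity-check a small linear program like the ones the simplex algorithm above solves, one option is scipy.optimize.linprog; this is a solver sketch with a made-up program, not the course's tableau method (linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Maximize z = 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0
# (an illustrative program in the form the simplex algorithm requires:
# maximization, <= constraints, non-negative variables).
c = [-3, -2]              # linprog minimizes, so negate to maximize
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal (x, y) and the maximum z-value
```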

Square Matrix Inverse
A square matrix A has inverse B if and only if [A│I] row-reduces (RREF) to [I│B].

For a 2x2 matrix A = [ a b ; c d ]:
A^-1 = 1/(ad - bc) · [  d  -b ]
                     [ -c   a ]
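A sketch verifying the 2x2 inverse formula numerically (matrix values are illustrative):

```python
import numpy as np

# 2x2 inverse via the ad - bc formula.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
a, b, c, d = A.ravel()

det = a * d - b * c          # must be non-zero for A to be invertible
A_inv = (1 / det) * np.array([[d, -b],
                              [-c, a]])

print(A_inv)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A times A^-1 is I
```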

Standard Form of a Consumption Matrix
Columns- industries (companies); Rows- raw materials

ex) C = [ .9  .4 ]
        [ .2  .1 ]

x = $ amount of electricity to produce
y = $ amount of oil to produce
Demand: Electricity-$300, Oil-$100

Linear equations for the Consumption Matrix (from above)
x = 300 + .9x + .4y ↔ (1 - .9)x - .4y = 300
y = 100 + .2x + .1y ↔ -.2x + (1 - .1)y = 100

(x, y) = total amount needed to produce
(300, 100) = amount needed to sell to consumers
(.9x, .4y) & (.2x, .1y) = amount of raw material used in making electricity or oil

D = [ 300 ]  = Demand Vector
    [ 100 ]

Leontief Input-Output Model for an Economy
(I - C) x = D
For a solution to exist  (I - C)^-1 must exist.
No solution when:
1) (I - C)^-1 does not exist
2) x has negative values
(If x has all positive values, it is called a production schedule.)
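A sketch solving the Leontief model with the electricity/oil numbers from the example above:

```python
import numpy as np

# Consumption matrix and demand vector from the electricity/oil example.
C = np.array([[0.9, 0.4],
              [0.2, 0.1]])
D = np.array([300.0, 100.0])

# Leontief open model: solve (I - C) x = D for the production vector x.
x = np.linalg.solve(np.eye(2) - C, D)
print(x)  # all entries positive -> x is a production schedule
```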

Productive Economy Theorem
1) Every column of C sums to less than 1
   a. Every industry is profitable
2) Every row of C sums to less than 1
   a. Industries do not consume all the resources
3) (I - C)^-1 exists and all entries are positive
   a. (I - C) x = D
   b. x = (I - C)^-1 D

Markov Analysis
1) The market distribution at any specific time depends solely on the market distribution of the preceding time period
   a. [ Cn ]   [ a b ] [ C(n-1) ]
      [ Pn ] = [ c d ] [ P(n-1) ]
2) If we know the initial state (i.e. t = 0), you can find the market distribution at time t = n by
   a. [ Cn ]   [ a b ]^n [ C0 ]
      [ Pn ] = [ c d ]   [ P0 ]

Transition Matrix
T = the square matrix whose columns are indexed by the current state of the system and whose entries give the probability of switching to a particular state

Markov Chain Model
S_n = T^n · S_0
State at time n = transition matrix to the "n power" · original state (a column matrix)

Stable Vector from a Markov Chain
T S = S  T S - S = 0  T S - I S = 0  (T - I) S = 0
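A sketch that finds the stable vector by solving (T - I) S = 0 together with the stochastic condition that the entries of S sum to 1 (the transition matrix is made up):

```python
import numpy as np

# Column-stochastic transition matrix (illustrative values).
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Solve (T - I) S = 0 plus the constraint sum(S) = 1: stack the
# constraint row and least-squares solve the combined system.
A = np.vstack([T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
S, *_ = np.linalg.lstsq(A, b, rcond=None)

print(S)                       # stable vector, here [0.6 0.4]
print(np.allclose(T @ S, S))   # True: T S = S
```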

Determinant
1) The determinant of a matrix comes from the exact conditions needed to make it invertible
2) An n x n matrix A is invertible if it can be turned into the identity matrix via row operations

Triangular Matrices
[ 2 0 3 ]
[ 0 8 6 ]   (upper triangular)
[ 0 0 1 ]

[ 2 0 0 ]
[ 0 8 0 ]   (lower triangular)
[ 3 6 1 ]

Properties of Determinants
1) det(A) = det(A^T)
   a. Expanding across a row in A is equivalent to expanding down a column in A^T
2) If A has either a row or a column of all zero entries, then det(A) = 0
   a. Explanation:
      i. Choose to expand across the row or column of all zeros: det(A) = 0·… + 0·… + 0·… = 0
      ii. You cannot use row operations to turn a matrix with an all-zero row into the identity, so A^-1 does not exist, and det(A) must be zero
3) The determinant of a triangular matrix is the product of the entries along the diagonal
4) The determinant of a diagonal matrix is the product of all the diagonal entries
5) If A is a square matrix that has 2 "proportional rows", then det(A) = 0
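A sketch of the cofactor-expansion definition of the determinant (the helper det_cofactor is an illustrative name; np.linalg.det is the practical tool):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (a teaching
    sketch; np.linalg.det is the practical choice)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # (1, j)-minor: delete row 0 and column j, take its determinant.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 0.0, 3.0],
              [0.0, 8.0, 6.0],
              [0.0, 0.0, 1.0]])
print(det_cofactor(A), np.linalg.det(A))  # 16.0 both: product of the diagonal
```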

v = (assume v

Span{ v }=k· v

is a position vector)

k=real number

Span{v1,v2}= x = a v

1

+b v

2

Theorem of Span
If v1, …, vk are vectors in R^3, k ≥ 3, and A = [v1, v2, …, vk], then span{v1, v2, …, vk} is:
- a line if row rank A = 1
- a plane if row rank A = 2
- all of R^3 if row rank A = 3
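A sketch of the Theorem of Span: stack the vectors as columns and classify by rank (vectors are illustrative):

```python
import numpy as np

# Three vectors in R^3; A = [v1 v2 v3].
v1, v2, v3 = [1, 0, 0], [0, 1, 0], [1, 1, 0]
A = np.column_stack([v1, v2, v3])

rank = np.linalg.matrix_rank(A)
# Theorem of Span: rank 1 -> line, rank 2 -> plane, rank 3 -> all of R^3.
print(rank, {1: "line", 2: "plane", 3: "all of R^3"}[rank])  # 2 plane
```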

Subspace of R^n
A subset V of R^n is a subspace if:
1) Whenever u, v ∈ V, then u + v ∈ V (closure under addition)
2) Whenever u ∈ V, then c·u ∈ V, in particular 0·u ∈ V (closure under scalar multiplication)
Every subspace must contain the zero vector.

Theorem of Contract Space
Let S = {v1, …, vk} in R^n. Then V = span{S} is a subspace of R^n.

2 Common Questions about Subspaces
1) Determine whether a given vector is in the subspace
   a. Trivial for the null space
   b. Nul(A) is all solutions to A x = 0
   c. To check if x1 ∈ nul(A), you check that A x1 is the zero vector
2) Express the subspace as a span of vectors
   a. Trivial for the column space
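A sketch of question 1) for the null space (A and x1 are illustrative values):

```python
import numpy as np

# Is x1 in nul(A)?  Check whether A x1 is the zero vector.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
x1 = np.array([2.0, -1.0])

print(np.allclose(A @ x1, 0))  # True -> x1 is in the null space of A
```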

Consistency Theorem (Ver. 3)
A system of equations A x = b is consistent if and only if b can be written as a linear combination of the columns of A.

Procedure for determining if a set of vectors is Linearly Dependent
1) The equation c1v1 + c2v2 + … + ckvk = 0 corresponds to the homogeneous system [v1 v2 … vk │ 0]; solve to get the ci's
2) If the homogeneous system has only the trivial solution  the vectors are linearly independent
3) If the homogeneous system has a non-trivial solution (∴ infinitely many solutions, given parametrically)  the vectors are linearly dependent and we can form a dependency equation c1v1 + c2v2 + … + ckvk = 0
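A rank-based sketch of this procedure: the homogeneous system has only the trivial solution exactly when the rank equals the number of vectors (vectors are illustrative):

```python
import numpy as np

# Stack the vectors as columns of A; the set is linearly independent
# exactly when A c = 0 has only the trivial solution, i.e. when
# rank(A) equals the number of vectors.
v1, v2, v3 = [1, 2, 3], [4, 5, 6], [5, 7, 9]
A = np.column_stack([v1, v2, v3])

independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # False: v3 = v1 + v2, so the set is linearly dependent
```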

Theorem of Linear Dependence
Let v1, v2, …, vk be vectors in R^n. If k > n, these vectors are linearly dependent.

Standard Basis for R^n
R^2: e1 = (1, 0), e2 = (0, 1)
R^3: e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)

How to select a basis from a spanning set
1) Express each vector in terms of the standard bases
2) Make a dependency table
3) From the left, pivot each column in a different row
4) Relabel the rows
5) Write the dependency equations
6) State the conclusion

Facts about Bases
1) Bases are not unique
2) All bases of a space have the same # of vectors
3) The dimension of a space is the number of vectors in a basis

Orthonormal Set of Vectors
1) The vectors are mutually (pairwise) orthogonal: vi · vj = 0 for i ≠ j
2) Every vector is a unit vector

Theorem of Orthonormal Set of Vectors
If {v1, v2, …, vk} is an orthonormal set and u ∈ span{v1, v2, …, vk}, then
u = c1 v1 + c2 v2 + … + ck vk, where c1 = u·v1, c2 = u·v2, …, ck = u·vk
i.e. u = (u·v1) v1 + (u·v2) v2 + … + (u·vk) vk
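A sketch of the theorem: with an orthonormal set, the coefficients are just dot products (the orthonormal pair below is a made-up rotated basis in R^2):

```python
import numpy as np

# An orthonormal set in R^2 (a rotated standard basis) and a vector u
# in its span (which here is all of R^2).
v1 = np.array([3/5, 4/5])
v2 = np.array([-4/5, 3/5])
u = np.array([1.0, 2.0])

# Theorem: the coefficients are dot products with the basis vectors.
c1, c2 = u @ v1, u @ v2
print(np.allclose(c1 * v1 + c2 * v2, u))  # True: u = (u·v1)v1 + (u·v2)v2
```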
Similar Free PDFs