Vectors & Matrices Lecture Notes
Queen Mary University of London


Vectors & Matrices, 2020–2021
Lecturers: Prof. Vito Latora and Dr. Abhishek Saha
These lecture notes are based on previous notes by Prof. Oliver Jenkinson.
December 1, 2020


Contents

1 Vectors
  1.1 Bound Vectors and Free Vectors
  1.2 Vector Negation
  1.3 Vector Addition
  1.4 Scalar Multiplication
  1.5 Position Vectors
  1.6 The Definition of u + v does not depend on A

2 Coordinates
  2.1 Unit Vectors
  2.2 Sums and Scalar Multiples in Coordinates
  2.3 Equations of Lines

3 Scalar Product and Vector Product
  3.1 The scalar product
  3.2 The Equation of a Plane
  3.3 Distance from a Point to a Plane
  3.4 The vector product
  3.5 Vector equation of a plane given 3 points on it
  3.6 Distance from a point to a line
  3.7 Distance between two lines
  3.8 Intersections of Planes and Systems of Linear Equations
  3.9 Intersections of other geometric objects

4 Systems of Linear Equations
  4.1 Basic terminology and examples
  4.2 Gaussian elimination
  4.3 Special classes of linear systems

5 Matrices
  5.1 Matrices and basic properties
  5.2 Transpose of a matrix
  5.3 Special types of square matrices
  5.4 Column vectors of dimension n
  5.5 Linear systems in matrix notation
  5.6 Elementary matrices and the Invertible Matrix Theorem
  5.7 Gauss-Jordan inversion

6 Determinants
  6.1 Determinants of 2 × 2 and 3 × 3 matrices
  6.2 General definition of determinants
  6.3 Properties of determinants

Index

Chapter 1
Vectors

1.1 Bound Vectors and Free Vectors

Definition 1.1. A bound vector is a directed line segment in 3-space. If A and B are points in 3-space, we denote the bound vector with starting point A and endpoint B by $\overrightarrow{AB}$.

As the notation $\overrightarrow{AB}$ suggests, an ordered pair of points A, B in 3-space determines a bound vector. Alternatively, a bound vector is determined by its:

• starting point,
• length,
• direction (provided that the length is not 0).

We denote the length of the bound vector $\overrightarrow{AB}$ by $|\overrightarrow{AB}|$. If a bound vector has length 0 then it is of the form $\overrightarrow{AA}$ (where A is some point in 3-space) and has undefined direction.

If we ignore the starting point we get the notion of a free vector (or simply a vector). So a free vector is determined by its:

• length,
• direction (provided that the length is not 0).

We will use letters in bold type for free vectors (u, v, w etc.)¹ The length of the free vector v will be denoted by |v|.

Definition 1.2. We say that the bound vector $\overrightarrow{AB}$ represents the free vector v if it has the same length and direction as v.

¹ In handwritten notes underlining would be used: u, v, w etc.


In the figure below, the two bound vectors $\overrightarrow{AB}$ and $\overrightarrow{CD}$ represent the same free vector v.

[Figure: bound vectors $\overrightarrow{AB}$ and $\overrightarrow{CD}$, both labelled v, representing the same free vector.]

Definition 1.3. The zero vector is the free vector with length 0 and undefined direction. It is denoted by 0. For any point A, the bound vector $\overrightarrow{AA}$ represents 0.

It is important to be aware of the difference between bound vectors and free vectors. In particular you should never write something like $\overrightarrow{AB} = v$. The problem with this is that the two things we are asserting to be equal are different types of mathematical object. It would be correct to say that $\overrightarrow{AB}$ represents v.

One informal analogy that might be helpful is that a free vector is a bit like an instruction (go 20 miles Northeast, say) while a bound vector is like the path you trace out when you follow that instruction. Notice that there is no way to draw the instruction on a map (similarly we cannot really draw a free vector) and that if you follow the same instruction from different starting points you get different paths (just as we have many different bound vectors representing the same free vector).

In what follows we will mainly be working with free vectors, and when we write vector we will always mean free vector.

1.2 Vector Negation

If v is a non-zero vector we define its negation −v to be the vector with the same length as v and opposite direction. We define −0 = 0. Negation is a function from the set of vectors to itself. If $\overrightarrow{AB}$ represents v then $\overrightarrow{BA}$ represents −v.

1.3 Vector Addition

To define the sum of two vectors we need the notion of a parallelogram.

Definition 1.4. The figure ABCD is a parallelogram if $\overrightarrow{AB}$ and $\overrightarrow{DC}$ represent the same vector.

Fact 1.5 (The Parallelogram Axiom). If $\overrightarrow{AB}$ and $\overrightarrow{DC}$ represent the same vector (u, say), then $\overrightarrow{BC}$ and $\overrightarrow{AD}$ represent the same vector (v, say). Note that we need not have u = v.

Definition 1.6. Given vectors u and v we define the sum u + v as follows. Pick any point A and let B, C, D be points such that $\overrightarrow{AB}$ represents u, $\overrightarrow{AD}$ represents v and ABCD is a parallelogram. Then u + v is the vector represented by $\overrightarrow{AC}$.


[Figure: parallelogram ABCD, with sides AB and DC representing u and sides AD and BC representing v.]

In the figure, $\overrightarrow{DC}$ represents u because we chose C in order to make ABCD a parallelogram. Now by the parallelogram axiom $\overrightarrow{AD}$ and $\overrightarrow{BC}$ represent the same vector v. By definition $\overrightarrow{AC}$ represents u + v.

Remark 1.7. In order for this to be a sensible definition it needs to specify u + v in a completely unambiguous way. In other words, if two people follow the recipe above to find u + v they should come up with the same vector. For this to be true we need to check that B, C, D are uniquely specified by the rule we gave (this is obvious) and that the answer we get does not depend on the choice of A. This last point requires a bit of checking, which I give as an exercise (with hints) at the end of the chapter.

From this definition we get the following useful interpretation of vector addition²:

Proposition 1.8 (The Triangle Rule for Vector Addition). If $\overrightarrow{AB}$ represents u and $\overrightarrow{BC}$ represents v then $\overrightarrow{AC}$ represents u + v.

Proof. Suppose that $\overrightarrow{AB}$ represents u and $\overrightarrow{BC}$ represents v. Let D be the point such that $\overrightarrow{DC}$ represents u. Since $\overrightarrow{AB}$ and $\overrightarrow{DC}$ both represent u, the figure ABCD is a parallelogram. By the parallelogram axiom $\overrightarrow{BC}$ and $\overrightarrow{AD}$ represent the same free vector and so $\overrightarrow{AD}$ represents v. It follows that the figure ABCD is precisely the parallelogram constructed in the definition of u + v and so by definition $\overrightarrow{AC}$ represents u + v as required.

Vector addition shares several properties with ordinary addition of numbers.

Proposition 1.9 (Properties of Vector Addition³). If u, v, w are vectors then:

1. u + v = v + u (that is, vector addition is commutative),
2. u + 0 = u (that is, 0 is an identity for vector addition),
3. u + (v + w) = (u + v) + w (that is, vector addition is associative),
4. u + (−u) = 0 (that is, −u is an additive inverse for u).

We usually write u − v for u + (−v) (this defines vector subtraction) and so part 4 could be written as u − u = 0.

Proof. 1. If $\overrightarrow{AB}$ represents u, $\overrightarrow{AD}$ represents v and ABCD is a parallelogram then $\overrightarrow{DC}$ represents u. It follows from the triangle rule applied to the triangle ADC that $\overrightarrow{AC}$ represents v + u. We know (by the definition of vector addition) that $\overrightarrow{AC}$ represents u + v. Hence u + v = v + u.

² Some people regard this as the definition of u + v, in which case the parallelogram interpretation becomes a result that needs a proof.
³ If you take the module Introduction To Algebra you will recognise that these properties mean that the set of free vectors forms an Abelian group under vector addition.

2. Let $\overrightarrow{AB}$ represent u. Since $\overrightarrow{BB}$ represents 0, the triangle rule applied to the (degenerate) triangle ABB gives that $\overrightarrow{AB}$ represents u + 0. Hence u + 0 = u.

3. See problem sheet 1.

4. Let $\overrightarrow{AB}$ represent u. Since $\overrightarrow{BA}$ represents −u, the triangle rule applied to the (degenerate) triangle ABA gives that $\overrightarrow{AA}$ represents u + (−u). But $\overrightarrow{AA}$ represents 0 and so u − u = 0.
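To see these identities in a concrete setting, here is a small Python sketch (our illustration, not part of the original notes) that checks the four properties of Proposition 1.9 numerically, anticipating the coordinate description of Chapter 2 by modelling a free vector as a 3-tuple of components; the helper names add and neg are our own.

    # Numerical check of Proposition 1.9 with vectors as coordinate triples.
    def add(u, v):
        return tuple(a + b for a, b in zip(u, v))

    def neg(u):
        return tuple(-a for a in u)

    u, v, w = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.0, 1.0, -1.0)
    zero = (0.0, 0.0, 0.0)

    assert add(u, v) == add(v, u)                  # 1. commutativity
    assert add(u, zero) == u                       # 2. identity
    assert add(u, add(v, w)) == add(add(u, v), w)  # 3. associativity
    assert add(u, neg(u)) == zero                  # 4. additive inverse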

1.4 Scalar Multiplication

If α ∈ R and v is a vector, we define αv to be the vector with length⁴ |α||v| and direction the same as v if α > 0, opposite to v if α < 0, and undefined if α = 0.

Multiplication of a vector by a scalar also has some nice properties:

Proposition 1.10 (Properties of Scalar Multiplication). For any α, β ∈ R and vectors u, v we have:

(i) 0u = 0, α0 = 0, 1u = u, (−1)u = −u,
(ii) α(βu) = (αβ)u,
(iii) (α + β)u = αu + βu,
(iv) α(u + v) = αu + αv.

The last two properties in this proposition are called distributive laws. We will show parts of the proofs of these but will not go through all cases in detail. The proofs of this Proposition are not examinable.

Proof.

(i) Trivial.

(ii) By definition of scalar multiplication:

|α(βu)| = |α||βu| = |α||β||u| = |αβ||u| = |(αβ)u|.

So α(βu) and (αβ)u have the same length. If α = 0 or β = 0 (or both) then both sides are equal to 0. Otherwise we consider cases according to whether α and β are positive or negative.

If α > 0, β > 0, then both α(βu) and (αβ)u have the same direction as u and so are equal.

If α < 0, β > 0, then βu has the same direction as u and α(βu) has direction opposite to u. Also αβ < 0, so (αβ)u has direction opposite to u. It follows that both α(βu) and (αβ)u have the same direction as −u and so are equal.

The remaining cases of α > 0, β < 0 and α < 0, β < 0 are similar.

⁴ Be careful with the notation here. In this expression |α| is the absolute value of the scalar α, while |v| is the length of the vector v.


(iii) Let $\overrightarrow{AB}$ represent αu and $\overrightarrow{BC}$ represent βu. Then by the triangle rule $\overrightarrow{AC}$ represents αu + βu.

If α > 0, β > 0 then $\overrightarrow{AC}$ is a bound vector of length α|u| + β|u| = (α + β)|u| in the same direction as u. That is, $\overrightarrow{AC}$ represents (α + β)u. It follows that αu + βu = (α + β)u. The remaining cases are similar.

(iv) If α = 0 then both sides are equal to 0.

Suppose that α > 0. Let $\overrightarrow{AB}$ represent u, $\overrightarrow{BC}$ represent v, $\overrightarrow{AD}$ represent αu, and $\overrightarrow{DE}$ represent αv (draw a picture). The triangles ABC and ADE are similar triangles and the edge AB is in the same direction as the edge AD. It follows that the bound vector $\overrightarrow{AE}$ is in the same direction as $\overrightarrow{AC}$ and its length differs by a factor of α. But $\overrightarrow{AC}$ represents u + v and $\overrightarrow{AE}$ represents αu + αv. It follows that αu + αv = α(u + v). The α < 0 case is similar.
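Similarly, the distributive laws (iii) and (iv) of Proposition 1.10 can be checked numerically; the short Python sketch below is our own illustration, again treating vectors as coordinate triples, with scale and add as made-up helper names.

    # Numerical check of the distributive laws in Proposition 1.10.
    def add(u, v):
        return tuple(a + b for a, b in zip(u, v))

    def scale(alpha, u):
        return tuple(alpha * a for a in u)

    u, v = (1.0, -2.0, 4.0), (3.0, 0.5, -1.0)
    alpha, beta = 2.0, -3.0

    assert scale(alpha + beta, u) == add(scale(alpha, u), scale(beta, u))    # (iii)
    assert scale(alpha, add(u, v)) == add(scale(alpha, u), scale(alpha, v))  # (iv)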

1.5 Position Vectors

Suppose now that we fix a special point in space called the origin and denoted by O.

Definition 1.11. If P is a point, the position vector of P is the free vector represented by the bound vector $\overrightarrow{OP}$. We will usually write p for the position vector of P, q for the position vector of Q, and so on.

Each point in space has a unique position vector and each vector is the position vector of a unique point in space.

If A and B are points with position vectors a and b respectively, then by the triangle rule applied to the triangle AOB we get that $\overrightarrow{AB}$ represents the vector b − a.

Theorem 1.12. Let A, B be points with position vectors a and b respectively. Let P be the point on the line segment AB with $|\overrightarrow{AP}| = \lambda|\overrightarrow{AB}|$. The position vector p of P is (1 − λ)a + λb.

Proof. Define u to be the free vector such that $\overrightarrow{AB}$ represents u. It follows that the bound vector $\overrightarrow{AP}$ represents λu. The triangle rule applied to OAP gives that p = a + λu. Also u = b − a. Putting these together gives that p = a + λ(b − a), which after some manipulation (using the distributive laws of Proposition 1.10(iii),(iv)) gives the result.

In lectures we used this theorem to prove the following geometric fact about parallelograms.

Example 1.13. The diagonals of a parallelogram intersect at their midpoints.
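As a concrete illustration of Theorem 1.12 (our own sketch, using the coordinate triples introduced formally in Chapter 2), the Python snippet below computes p = (1 − λ)a + λb; with λ = 1/2 it gives the midpoint of AB, the case underlying Example 1.13. The function name divide_segment is hypothetical.

    # Position vector of the point P on the segment AB with |AP| = lam * |AB|,
    # computed as p = (1 - lam) * a + lam * b (Theorem 1.12).
    def divide_segment(a, b, lam):
        return tuple((1 - lam) * ai + lam * bi for ai, bi in zip(a, b))

    a = (0.0, 0.0, 0.0)
    b = (2.0, 4.0, 6.0)

    print(divide_segment(a, b, 0.5))    # midpoint of AB: (1.0, 2.0, 3.0)
    print(divide_segment(a, b, 0.25))   # a quarter of the way from A to B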

1.6 The Definition of u + v does not depend on A

This section is non-examinable but I encourage you to work through the argument as an exercise.

If you look back to the definition of vector addition you will see that we started by picking an arbitrary point A. This exercise leads you through the proof that the definition of vector addition does not depend on which point is chosen. The steps are roughly indicated with some gaps. Your task is to expand each step and fill in the gaps to get a complete proof:

• Consider the parallelogram needed to define u + v with respect to point A (name the points).

• Consider the parallelogram needed to define u + v with respect to a different point E (name the points).

• Fill in the gaps: "In order for it not to matter whether we used the parallelogram based at A or the one based at E in the definition of u + v, our task is to show that the bound vectors . . . and . . . represent the same free vector".

• Fill in the gaps: "The figure . . . is a parallelogram because . . . and . . . both represent the vector u and so by the parallelogram axiom . . . and . . . represent the same vector [give it a name]."

• Fill in the gaps: "The figure . . . is a parallelogram because . . . and . . . both represent the vector v and so by the parallelogram axiom . . . and . . . represent the same vector [what is that vector]."

• Make one more application of the parallelogram axiom to show that the bound vectors from step 3 really do represent the same free vector.

Chapter 2
Coordinates

Suppose now that we choose an origin O and 3 mutually perpendicular axes (the x-, y- and z-axes) arranged in a right-handed system as in the figures below:

[Figure: three right-handed arrangements of the x-, y- and z-axes about the origin O.]

Let i, j, k denote vectors of unit length (i.e. length 1) in the directions of the x-, y- and z-axes respectively. We say that R is the point with coordinates (a, b, c) if the position vector of R is r = ai + bj + ck.

If Q is the point with position vector ai + bj and P is the point with position vector ai, then OPQ is a right-angled triangle and $\overrightarrow{PQ}$ represents bj. It follows from Pythagoras's Theorem that

$$|\overrightarrow{OQ}|^2 = |ai|^2 + |bj|^2 = a^2 + b^2.$$

Further, OQR is a right-angled triangle and $\overrightarrow{QR}$ represents ck. So

$$|r|^2 = |\overrightarrow{OR}|^2 = |\overrightarrow{OQ}|^2 + |\overrightarrow{QR}|^2 = a^2 + b^2 + c^2.$$

It follows that $|r| = \sqrt{a^2 + b^2 + c^2}$.

2.1 Unit Vectors

Definition 2.1. A unit vector is a vector of length 1. For instance i, j and k are unit vectors. If u is any non-zero vector then $\hat{u} = \frac{1}{|u|}u$ is a unit vector in the same direction as u. We often write $\frac{u}{|u|}$ for $\frac{1}{|u|}u$.
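The length formula and Definition 2.1 translate directly into a few lines of Python; the sketch below is our own illustration, with length and normalize as made-up helper names.

    import math

    # |r| = sqrt(a^2 + b^2 + c^2) for r = a i + b j + c k.
    def length(v):
        return math.sqrt(sum(x * x for x in v))

    # u_hat = (1/|u|) u, the unit vector in the same direction as a non-zero u.
    def normalize(u):
        l = length(u)
        return tuple(x / l for x in u)

    r = (1.0, 2.0, 2.0)
    print(length(r))              # 3.0
    print(normalize(r))           # (0.333..., 0.666..., 0.666...)
    print(length(normalize(r)))   # 1.0 (up to rounding)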


2.2 Sums and Scalar Multiples in Coordinates

We will write $\begin{pmatrix} a \\ b \\ c \end{pmatrix}$ for the vector ai + bj + ck, where a, b, c ∈ R.

Let $u = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, $v = \begin{pmatrix} d \\ e \\ f \end{pmatrix}$ and α ∈ R. Then

$$u + v = \begin{pmatrix} a \\ b \\ c \end{pmatrix} + \begin{pmatrix} d \\ e \\ f \end{pmatrix} = (ai + bj + ck) + (di + ej + fk) = (ai + di) + (bj + ej) + (ck + fk) = (a + d)i + (b + e)j + (c + f)k = \begin{pmatrix} a + d \\ b + e \\ c + f \end{pmatrix}.$$

Also,

$$\alpha u = \alpha \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \alpha(ai + bj + ck) = \alpha(ai) + \alpha(bj) + \alpha(ck) = (\alpha a)i + (\alpha b)j + (\alpha c)k = \begin{pmatrix} \alpha a \\ \alpha b \\ \alpha c \end{pmatrix}.$$

So vector addition and scalar multiplication can be nicely expressed in coordinates. Note that in deriving these expressions, we used our properties of vector addition and scalar multiplication (Propositions 1.9 and 1.10) repeatedly.
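The componentwise formulas just derived are easy to mirror in code; this short Python sketch is our own illustration (the helper names add, scale and from_ijk are not from the notes).

    # Section 2.2 in code: the vector a*i + b*j + c*k corresponds to the
    # coordinate triple (a, b, c); addition and scalar multiplication
    # act componentwise.
    i, j, k = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

    def add(u, v):
        return tuple(x + y for x, y in zip(u, v))

    def scale(alpha, u):
        return tuple(alpha * x for x in u)

    def from_ijk(a, b, c):
        # Builds a*i + b*j + c*k using add and scale.
        return add(add(scale(a, i), scale(b, j)), scale(c, k))

    print(from_ijk(1.0, 2.0, 3.0))                  # (1.0, 2.0, 3.0)
    print(add((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))    # (5.0, 7.0, 9.0)
    print(scale(2.0, (1.0, 2.0, 3.0)))              # (2.0, 4.0, 6.0)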

2.3 Equations of Lines

Let l be the line through the point P in the direction of the non-zero vector u.

The point R with position vector r is on the line l if and only if the vector represented by $\overrightarrow{PR}$ is a multiple of u. That is, R is on l if and only if r − p = λu for some λ ∈ R, or equivalently if and only if

r = p + λu.

This is called the vector equation for l. Note that in this equation, p and u are constant vectors (depending on the line), while r is a (vector) variable depending on the (real number) variable λ. The equation gives a condition which r satisfies if and only if R lies on the line l. Specifically, suppose that R is a point with position vector r. If there is some λ for which r = p + λu then R lies on l; if there is no such λ then R does not lie on l.

Working in coordinates, let $r = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$, $p = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix}$ and $u = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}$. We get that R is on l if and only if

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} + \lambda \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} p_1 + \lambda u_1 \\ p_2 + \lambda u_2 \\ p_3 + \lambda u_3 \end{pmatrix}.$$


This is equivalent to the system of equations:

$$x = p_1 + \lambda u_1, \qquad y = p_2 + \lambda u_2, \qquad z = p_3 + \lambda u_3.$$

These are called the parametric equations for the line l. The variable λ is referred to as a parameter. Note that it appears in the above 3 equations (which we have called the parametric equations), but it also appears in the equation r = p + λu (which we called the vector equation, but could equally well be called a parametric vector equation).

If u_1 ≠ 0, u_2 ≠ 0, u_3 ≠ 0 we can eliminate the parameter λ from the parametric equations to get

$$\frac{x - p_1}{u_1} = \frac{y - p_2}{u_2} = \frac{z - p_3}{u_3},$$

called the Cartesian equations for the line l. If u_1 = 0, u_2 ≠ 0, u_3 ≠ 0 then the Cartesian equations are

$$x = p_1, \qquad \frac{y - p_2}{u_2} = \frac{z - p_3}{u_3}.$$

If u_1 = u_2 = 0, u_3 ≠ 0 then the Cartesian equations are

$$x = p_1, \qquad y = p_2$$

(with no constraint on z). Note that we cannot have u_1 = u_2 = u_3 = 0 because we insisted that u was a non-zero vector.

Another natural way of describing a line is by giving two points that lie on it. If P and Q are distinct points¹ with position vectors p and q respectively, then the line containing P and Q is in direction q − p. We can now use the method above with u = q − p. For instance the line through P and Q has vector equation r = p + λ(q − p). We found this equation by noting that the line in question is the line through P in direction q − p.
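To make the vector and parametric descriptions concrete, here is a short Python sketch (ours, not from the notes) that samples points on the line r = p + λu and tests whether a given point lies on the line by solving for λ componentwise; the function names point_on_line and lies_on_line are hypothetical.

    # Line through P with position vector p in direction u (u nonzero):
    # r = p + lam * u.
    def point_on_line(p, u, lam):
        return tuple(pi + lam * ui for pi, ui in zip(p, u))

    # R lies on the line iff a single value of lam satisfies all three
    # parametric equations x = p1 + lam*u1, y = p2 + lam*u2, z = p3 + lam*u3.
    def lies_on_line(r, p, u, tol=1e-9):
        lam = None
        for ri, pi, ui in zip(r, p, u):
            if abs(ui) < tol:
                if abs(ri - pi) > tol:   # this coordinate can never match
                    return False
            else:
                cand = (ri - pi) / ui
                if lam is None:
                    lam = cand
                elif abs(cand - lam) > tol:
                    return False
        return True

    p = (1.0, 2.0, 3.0)
    u = (2.0, 0.0, -1.0)
    print(point_on_line(p, u, 2.0))             # (5.0, 2.0, 1.0)
    print(lies_on_line((5.0, 2.0, 1.0), p, u))  # True
    print(lies_on_line((5.0, 3.0, 1.0), p, u))  # False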


Similar Free PDFs