
Title: Algebra 2 class notes
Course: Algebra 2
Institution: McGill University




Contents

1 Overview
  1.1 An Overview
2 Vectors
  2.1 Vectors
  2.2 Vector Operations
  2.3 Vector Properties
3 Fields
  3.1 Fields
  3.2 The Complex Numbers C
4 Abstract Vector Spaces
  4.1 Vector Spaces
  4.2 Zero Elements
  4.3 Examples of Vector Spaces
5 Subspaces
  5.1 Subspaces
  5.2 Examples of Subspaces
  5.3 Operations on Subspaces
6 Linear Combinations
  6.1 Linear Combination
  6.2 Bases
  6.3 Spanning Sets
7 Dimensionality
  7.1 Dimensions of Spanning Sets
  7.2 Dimensions of Subspaces
8 Linear Maps
  8.1 Linear Maps
  8.2 Kernel and Range
  8.3 Isomorphism
9 Coordinate Maps and Matrix Representations
  9.1 Coordinate Maps
  9.2 Matrix Representations
  9.3 Change-of-Basis Matrices
  9.4 Similarity
10 Inner Product Spaces
  10.1 Inner Product Spaces
  10.2 The Cauchy-Schwarz Inequality
  10.3 The Norm
11 Orthogonality
  11.1 Orthogonality
  11.2 Orthogonal & Orthonormal Sets
  11.3 Orthonormal Bases
  11.4 Gram-Schmidt Orthogonalization
  11.5 Orthogonal Complements
12 Determinants
  12.1 Fundamental Properties
  12.2 The Cofactor Formula
  12.3 The Determinant
  12.4 The Characteristic Polynomial
13 Eigenvalues and Eigenvectors
  13.1 Eigenspaces
14 Diagonalization
  14.1 Diagonalization
  14.2 Symmetric Matrices

1 Overview

1.1 An Overview

Algebra. Algebra is the study of algebraic structures (sets endowed with operations) and structure-preserving maps. Some examples of algebraic structures and the corresponding structure-preserving maps are:

1. groups, and group homomorphisms;
2. rings, and ring homomorphisms;
3. vector spaces, and linear maps.

Linear algebra studies two interrelated strands:

1. vector spaces (including linear maps, inner product spaces, etc.);
2. matrices (determinants, eigenvectors and eigenvalues, diagonalization and other canonical forms, etc.).


2 Vectors

2.1 Vectors

Example. Consider Rn, the set of n-tuples with real entries:

Rn = {v = (v1, · · · , vn) : vk ∈ R}.

A tuple v is referred to as a vector (the algebraic viewpoint), or as a point (the geometric viewpoint). Descartes introduced R3 as a model for the real world. Mental images for Rn, or even abstract vector spaces, stem from R3. There are two operations on Rn: vector addition and scalar multiplication.

2.2 Vector Operations

Vector Addition. For u, v ∈ Rn → u + v ∈ Rn: if u = (u1, · · · , un) and v = (v1, · · · , vn), then

u + v = (u1 + v1, · · · , un + vn).

As an algebraic structure, (Rn, +) forms an abelian group. The group properties are:

1. u + (v + w) = (u + v) + w (associativity);
2. v + 0 = 0 + v = v (existence of the zero, or neutral, element for addition);
3. v + (−v) = (−v) + v = 0 (existence of inverses; here −v = (−v1, · · · , −vn) takes the inverse of each coordinate);
4. u + v = v + u (commutativity).

Scalar Multiplication. For a ∈ R, v ∈ Rn → av ∈ Rn: if v = (v1, · · · , vn), then

av = (av1, · · · , avn).

Note: The latter equal sign is the definition of scalar multiplication; that is,

(a, · · · , a) × (v1, · · · , vn) = (av1, · · · , avn).

This multiplication scales the vectors, hence the word “scalar.”

Note: While vector addition is an internal operation on Rn, scalar multiplication reflects an external action of R.
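Example. In R3, take u = (1, 2, 3), v = (4, −1, 0), and a = 2. Applying the definitions componentwise,

u + v = (1 + 4, 2 + (−1), 3 + 0) = (5, 1, 3) and av = (2 · 4, 2 · (−1), 2 · 0) = (8, −2, 0).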


2.3 Vector Properties

(a) 1 · v = v for all v ∈ Rn.
(b) a(bv) = (ab)v for all a, b ∈ R and v ∈ Rn.
(c) a(u + v) = au + av for all a ∈ R and u, v ∈ Rn.
(d) (a + b)v = av + bv for all a, b ∈ R and v ∈ Rn.


3 Fields

3.1 Fields

Field. A field is a commutative ring in which every non-zero element has a multiplicative inverse.

Examples.
1. R and C; these are the most important fields in Linear Algebra.
2. Q.
3. Z/pZ, where p is a prime number (a finite field!).

(Non-)Examples.
1. Z.
2. Z/nZ, where n is composite.
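To see the difference concretely, consider inverses in the examples above. In Z/5Z every non-zero class has a multiplicative inverse: 2 · 3 = 6 ≡ 1 (mod 5) and 4 · 4 = 16 ≡ 1 (mod 5). In Z, by contrast, 2 has no multiplicative inverse, and in Z/6Z the class of 2 has no inverse either, since 2k mod 6 can only be 0, 2, or 4, never 1.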

3.2 The Complex Numbers C

Proposition: (C, +, ·) forms a field.

A crash-course on C. The complex numbers are

C = {a + bi : a, b ∈ R},

where i2 = −1 (i is the imaginary unit).

Addition and Multiplication. If z = a + bi and w = c + di, then

z + w = (a + c) + (b + d)i and zw = (ac − bd) + (ad + bc)i.

That is, addition is simply component-wise, but multiplication exploits the fundamental fact that i2 = −1.
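Example. For z = 2 + 3i and w = 1 − 4i, the formulas give z + w = 3 − i and zw = (2 · 1 − 3 · (−4)) + (2 · (−4) + 3 · 1)i = 14 − 5i. Expanding directly and using i2 = −1 gives the same result: (2 + 3i)(1 − 4i) = 2 − 8i + 3i − 12i2 = 14 − 5i.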


The Imaginary Unit

There are several ways of defining C, and in particular i.

1. C as the set of real pairs, {(a, b) : a, b ∈ R}. Here i = (0, 1). The downside is that the algebraic properties of C are somewhat opaque in this model.

2. C as the quotient polynomial ring R[x]/(x2 + 1). Here i is the class of x. Note: (x2 + 1) is irreducible, and general theory grants us the fact that C in this model is a field.

Theorem 3.2.1. Over C, every polynomial of degree n ≥ 1 has n roots, counted with multiplicity.

Example. x2 + 1 has two roots over C, but no root over R.
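Indeed, over C we have the factorization x2 + 1 = (x − i)(x + i), so the two roots are i and −i; over R the polynomial takes only positive values, so it has no real root.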


4 Abstract Vector Spaces

4.1 Vector Spaces

Vector Space. A vector space V over a field F is a nonempty set V endowed with two operations:

1. Addition: a map V × V → V, sending u, v ∈ V to u + v ∈ V.
2. Scalar multiplication: a map F × V → V, sending a ∈ F, v ∈ V to av ∈ V,

so that the following hold:

(a) (V, +) is abelian.
(b) 1 · v = v for all v ∈ V.
(c) a(bv) = (ab)v for all a, b ∈ F and v ∈ V.
(d) a(u + v) = au + av for all a ∈ F and u, v ∈ V.
(e) (a + b)v = av + bv for all a, b ∈ F and v ∈ V.

The elements of V are called vectors; the elements of F are called scalars.

4.2 Zero Elements

Proposition 4.2.1 (Proposition about Zero Elements). Let V be a vector space over F. Then:

1. 0 · v = 0 for all v ∈ V.

Proof. Using 0 = 0 + 0 in F, we have the following equality in V:

0 · v = (0 + 0) · v = 0 · v + 0 · v.

This uses the distributivity of scalars over vector addition. Then, adding the inverse of 0 · v (V is a group under vector addition!) to both sides of the equality, we get 0 = 0 · v.

2. a · 0 = 0 for all a ∈ F.

Proof. Left as an exercise.


3. If a · v = 0, where a ∈ F and v ∈ V , then a = 0 or v = 0.

Proof. If a = 0, done. If a ≠ 0, multiply both sides of a · v = 0 by the multiplicative inverse of a, denoted 1/a. Here we use that F is a field. We get

(1/a)(av) = (1/a) · 0 = 0,

by using the second statement. By the vector space axioms, we also have

(1/a)(av) = ((1/a) a) v = 1v = v,

and so v = 0.

4.3 Examples of Vector Spaces

Let F be any field. The following are vector spaces over F.

1. Fn = {(x1, · · · , xn) : xk ∈ F}, the set of n-tuples where each entry is an element of F. The operations are component-wise, using the operations of F.
2. {0}, the trivial vector space.
3. F∞ = {(x1, x2, · · · ) : xk ∈ F}, the set of sequences (xk)k≥1 with entries in F. Informally, this is an ‘infinite’ version of Fn above. Again, the operations are component-wise.
4. F[t], the polynomials in the variable t with coefficients in F. Operations are coefficient-wise.
5. K, an overfield of F. That is, K is a field containing F. The operations are those of K.
6. Mm×n(F), the set of m × n matrices with entries in F. Each element of the vector space is a matrix, where each entry is an element of the field. Operations are entry-wise. What is the zero vector? The m × n zero matrix, usually denoted 0m×n.
7. An example specific to R: C[a, b], the set of continuous functions defined on the interval [a, b], with real values.


5 Subspaces

5.1 Subspaces

Let V be a vector space over the field F. There are two alternate definitions of a subspace, and it is a good exercise to understand why they are equivalent.

Subspace. A non-empty subset U ⊆ V is a subspace of V if U is a vector space over F when endowed with the restricted operations.

Alternate Definition. A non-empty subset U ⊆ V is a subspace of V if

1. 0 ∈ U (the zero vector of V is in U);
2. if u1, u2 ∈ U, then u1 + u2 ∈ U (closed under vector addition);
3. if a ∈ F and u ∈ U, then au ∈ U (closed under scalar multiplication).

In practice, we use the second, more concrete definition.

5.2 Examples of Subspaces

Example. Show that U = {(x, y, z) : x + y + z = 0} is a subspace of R3.

1. (0, 0, 0) ∈ U since 0 + 0 + 0 = 0.
2. U is closed under vector addition. Let u1, u2 ∈ U. Then u1 = (x1, y1, z1) and u2 = (x2, y2, z2), where x1 + y1 + z1 = 0, respectively x2 + y2 + z2 = 0. We have u1 + u2 = (x1 + x2, y1 + y2, z1 + z2) and (x1 + x2) + (y1 + y2) + (z1 + z2) = (x1 + y1 + z1) + (x2 + y2 + z2) = 0. Therefore u1 + u2 ∈ U.
3. U is closed under scalar multiplication. Let u ∈ U and a ∈ R. Writing u = (x, y, z), where x + y + z = 0, we have au = (ax, ay, az) and ax + ay + az = a(x + y + z) = 0. Hence au ∈ U.

Note: The subset {(x, y, z) : x + y + z = 1} is not a subspace of R3, since it does not contain the zero vector (0, 0, 0).

Example. U = {(x, y, z) : z = 0}, the xy-plane, is a subspace of R3.

Example. The set of all symmetric matrices is a subspace of Mn(F). Recall, a matrix A is symmetric if A = AT, where AT is the transpose of A. In other words, a matrix A = (aij) is symmetric if aij = aji for all i, j.

Example. Let V be a vector space. Then {0} and V are subspaces of V. Note: This is completely analogous to the fact that every group G has two ‘trivial’ subgroups, the identity and G itself.

Example. Recall that F[t] is the vector space of polynomials over the field F. Let F[t]n denote the set of all polynomials of degree at most n. That is:

F[t]n = {c0 + c1 t + · · · + cn tn : ck ∈ F}.

1. The polynomial 0 is in F[t]n. Recall that, by convention, deg(0) = −∞.
2. F[t]n is closed under addition. Let f, g ∈ F[t]n, so deg(f), deg(g) ≤ n. Then deg(f + g) ≤ max{deg(f), deg(g)} ≤ n, and so f + g ∈ F[t]n.
3. F[t]n is closed under scalar multiplication. Let f ∈ F[t]n, so deg(f) ≤ n, and let a ∈ F. Then deg(af) = deg(f) if a ≠ 0, and deg(af) = deg(0) = −∞ if a = 0. In either case deg(af) ≤ n, so af ∈ F[t]n.

Note: The set of all polynomials (over F) of degree exactly n, together with the zero polynomial 0, is not a subspace. It is closed under scalar multiplication, but not under addition.
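For instance, with n = 2, both t2 + 1 and −t2 have degree 2, but their sum is the constant polynomial 1, of degree 0; so the set of polynomials of degree exactly 2 (together with 0) is not closed under addition.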

5.3 Operations on Subspaces

Note: Throughout, we fix an ambient vector space V .

Theorem 5.3.1. The intersection of two subspaces is again a subspace. That is, if U1 and U2 are two subspaces, then U1 ∩ U2 is also a subspace.


Proof. We check, as usual, the three conditions.

1. 0 ∈ U1 ∩ U2, since 0 ∈ U1 and 0 ∈ U2.
2. U1 ∩ U2 is closed under vector addition. Indeed, let u, v ∈ U1 ∩ U2. Then u, v ∈ U1, hence u + v ∈ U1; similarly, u + v ∈ U2. Thus u + v ∈ U1 ∩ U2.
3. U1 ∩ U2 is closed under scalar multiplication. Let a ∈ F and u ∈ U1 ∩ U2. Then u ∈ U1, hence au ∈ U1. Similarly au ∈ U2, and so au ∈ U1 ∩ U2.

Note: The union of two subspaces is no longer a subspace, in general. (For instance, in R2 the union of the two coordinate axes is not closed under addition: e1 + e2 = (1, 1) lies on neither axis.)

Sum. Let U1 and U2 be two subspaces. The sum of U1 and U2 is the set

U1 + U2 = {v ∈ V : v = u1 + u2, where u1 ∈ U1, u2 ∈ U2}.

Theorem 5.3.2. The sum of two subspaces is again a subspace. That is, if U1 and U2 are two subspaces, then U1 + U2 is also a subspace.

Proof. “There’s so few hypotheses that you can hardly go wrong.”

1. As 0 ∈ U1 and 0 ∈ U2, we have 0 = 0 + 0 ∈ U1 + U2.
2. U1 + U2 is closed under vector addition. Let v, v′ be two vectors from U1 + U2. Rewrite each as a sum: v = u1 + u2 and v′ = u1′ + u2′, where u1, u1′ ∈ U1 and u2, u2′ ∈ U2. Then v + v′ = (u1 + u2) + (u1′ + u2′) = (u1 + u1′) + (u2 + u2′), where u1 + u1′ ∈ U1 and u2 + u2′ ∈ U2, by definition of subspace. Therefore, by the definition of U1 + U2, we have that v + v′ ∈ U1 + U2.
3. U1 + U2 is closed under scalar multiplication. Let v ∈ U1 + U2 and a ∈ F, where v = u1 + u2. Then av = a(u1 + u2) = au1 + au2, where au1 ∈ U1 and au2 ∈ U2, by definition of subspace. Therefore, by the definition of U1 + U2, we have that av ∈ U1 + U2.


Theorem 5.3.3. The sum subspace U1 + U2 is the smallest subspace that contains both U1 and U2.

Proof. Firstly, we show that U1 ⊆ U1 + U2 and U2 ⊆ U1 + U2. Let u1 ∈ U1; then u1 = u1 + 0 ∈ U1 + U2. The inclusion for U2 follows the same logic.

Secondly, we show that U1 + U2 is the smallest subspace containing U1 and U2. This means the following: if W is a subspace that contains both U1 and U2, then U1 + U2 ⊆ W. Let v ∈ U1 + U2. Then v = u1 + u2, where u1 ∈ U1, u2 ∈ U2. As u1, u2 ∈ W, we have u1 + u2 ∈ W. That is, v ∈ W.

Direct Sum. V is the direct sum of two subspaces U1 and U2, written

V = U1 ⊕ U2,

if V = U1 + U2 and U1 ∩ U2 = {0}.

Example. In R3, consider the subspace U1 = {(x, y, 0) : x, y ∈ R}, the xy-plane. We wish to find another subspace U2 such that U1 ⊕ U2 = R3. An example is U2 = {(0, 0, z) : z ∈ R}, the z-axis. We need to check two properties:

U1 ∩ U2 = {0} and U1 + U2 = R3.

We check each equality by proving two inclusions. Note, however, that for each equality one of the inclusions is trivial: U1 ∩ U2 ⊇ {0} and U1 + U2 ⊆ R3, no matter what U1 and U2 are.

Let (x, y, z) ∈ U1 ∩ U2. Then z = 0 (since (x, y, z) ∈ U1) and x = y = 0 (since (x, y, z) ∈ U2). Thus (x, y, z) = 0. We conclude that U1 ∩ U2 = {0}.

Let (x, y, z) ∈ R3. Then (x, y, z) = (x, y, 0) + (0, 0, z) ∈ U1 + U2, since (x, y, 0) ∈ U1 and (0, 0, z) ∈ U2. We conclude that U1 + U2 = R3.

Another example is U2 = {(z, z, z) : z ∈ R}. In fact, it turns out that every line through the origin could act as U2, as long as the line is not contained in the xy-plane U1. That line would effectively add one dimension to the sum.
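To verify this for U2 = {(z, z, z) : z ∈ R}: if (x, y, z) ∈ U1 ∩ U2, then z = 0 (from U1) and x = y = z (from U2), so (x, y, z) = 0; and any (x, y, z) ∈ R3 decomposes as (x − z, y − z, 0) + (z, z, z) ∈ U1 + U2. Hence R3 = U1 ⊕ U2 for this choice as well.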

Example. Consider the vector space Mn(F). Let U denote the subspace consisting of the upper triangular matrices. Let L be the subspace consisting of the lower triangular matrices. Then U + L = Mn(F), and U ∩ L is the subspace consisting of diagonal matrices. As U ∩ L is not the zero subspace {0}, Mn(F) is not the direct sum of U and L.

6 Linear Combinations

6.1 Linear Combination

Linear Combination. A linear combination of vectors v1, · · · , vk ∈ V is a vector of the form

c1 v1 + · · · + ck vk,

where c1, · · · , ck ∈ F.

Example. In R3, (2, 2, 0) is a linear combination of (1, 0, 0) = e1, (0, 1, 0) = e2, and (1, 1, 0) = e1 + e2. Indeed:

(2, 2, 0) = 2e1 + 2e2 + 0(e1 + e2), but also (2, 2, 0) = 0e1 + 0e2 + 2(e1 + e2).

Note that not every given vector needs to be used, in the sense that the corresponding coefficient may be zero. Note also that the expression of a vector as a linear combination need not be unique; this has to do with linear dependence.

On the other hand, (1, 2, 3) is not a linear combination of the above vectors e1, e2, e1 + e2. Indeed, a generic linear combination of these vectors has the form

ae1 + be2 + c(e1 + e2) = (a + c, b + c, 0),

and (1, 2, 3) is not of this form, in view of the last component.

Span. The span of a set of vectors v1, · · · , vk ∈ V is the set of all linear combinations of the vectors v1, · · · , vk:

Span{v1, · · · , vk} = {c1 v1 + · · · + ck vk : ck ∈ F}.

Example. In R3, we have

1. Span{e1} = {ce1 : c ∈ R} = {(c, 0, 0) : c ∈ R}.
2. Span{e1, e2, e1 + e2} = {ae1 + be2 + c(e1 + e2) : a, b, c ∈ R} = {(a + c, b + c, 0) : a, b, c ∈ R} = {(a, b, 0) : a, b ∈ R} = Span{e1, e2}.
3. Span{e1, e2, e3} = R3.


Example. In the vector space F[t]:

1. Span{1, t, · · · , tn} = F[t]n, the subspace of polynomials of degree at most n.
2. Span{1, t, · · · } = F[t]. The span of this infinite set of polynomials is the same as the entire vector space.

Span of a Subset. Given a subset S ⊆ V, the span of S is the set of all linear combinations of vectors from the set S. By convention, Span ∅ = {0}.

Theorem 6.1.1. Span(S) is a subspace of V. In fact, Span(S) is the smallest subspace of V that contains S.

Proof. We check the first claim, ...

