Title: 2020/21 Finals Answer Key
Course: Linear Algebra I
Institution: National University of Singapore
MA1101R
Question 1

[20 marks]

Let

A = [ 1 0 0 1 ]
    [ 0 1 1 0 ]
    [ 0 0 1 1 ]
    [ a b c d ]

be a 4 × 4 matrix where a, b, c, d are some real numbers.

(i) (4 marks) Find det A and write down the condition in terms of a, b, c, d such that the homogeneous system Ax = 0 has non-trivial solutions.

(ii) (4 marks) Let S = {(a, b, c, d) | Ax = 0 has only the trivial solution}. Is S a subspace of R4? Why?

(iii) (4 marks) Given rank A = 3, find the general solution of Ax = 0. Show your working.

(iv) (4 marks) Given that (1, 1, 1, 1)^T is an eigenvector of A, find the condition satisfied by a, b, c, d.

(v) (4 marks) If a, b, c, d are all equal, find a basis for the column space of A in terms of a. Explain how you derive your answer.

(i) Expanding along the first row:

det A = | 1 0 0 1 |   | 1 1 0 |   | 0 1 1 |
        | 0 1 1 0 | = | 0 1 1 | − | 0 0 1 | = (d − c + b) − a.
        | 0 0 1 1 |   | b c d |   | a b c |
        | a b c d |

So the system Ax = 0 has non-trivial solutions if and only if det A = 0, that is, d − c + b − a = 0.
(ii) Ax = 0 has only the trivial solution if and only if det A ≠ 0.

From (i), we can rewrite the set notation of S as {(a, b, c, d) | d − c + b − a ≠ 0}.

Let u = (1, 0, 0, 0) and v = (0, 1, 0, 0). Both vectors belong to S. However, u + v = (1, 1, 0, 0) ∉ S, since 0 − 0 + 1 − 1 = 0.

So S does not satisfy the closure property under addition and hence is not a subspace of R4.
(iii) Since the first three rows of A are linearly independent, in order that rank A = 3, the last row (a, b, c, d) must be "redundant", and hence a row echelon form of A is

[ 1 0 0 1 ]
[ 0 1 1 0 ]
[ 0 0 1 1 ]
[ 0 0 0 0 ]

Denote the four variables of Ax = 0 by x, y, z, w. By back substitution, we get the general solution w = t, z = −t, y = t, x = −t for t ∈ R.
In matrix form, this is given by

[ x ]     [ −1 ]
[ y ] = t [  1 ]
[ z ]     [ −1 ]
[ w ]     [  1 ]

(iv) A (1, 1, 1, 1)^T = (2, 2, 2, a + b + c + d)^T.

If (1, 1, 1, 1)^T is an eigenvector of A, this image must be a scalar multiple of (1, 1, 1, 1)^T, so we must have a + b + c + d = 2 (and the eigenvalue is 2).

(v) In this case,

A = [ 1 0 0 1 ]
    [ 0 1 1 0 ]
    [ 0 0 1 1 ]
    [ a a a a ]

Note that the last row (a, a, a, a) = a(1, 0, 0, 1) + a(0, 1, 1, 0) is a linear combination of the first and second rows. So a row echelon form of A is

[ 1 0 0 1 ]
[ 0 1 1 0 ]
[ 0 0 1 1 ]
[ 0 0 0 0 ]

Hence rank A = 3 and the pivot columns of A, namely the first three columns, form a basis for the column space. In particular, we can take the basis

{ (1, 0, 0, a)^T, (0, 1, 0, a)^T, (0, 1, 1, a)^T }.
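The claims in Question 1 can be sanity-checked numerically. The NumPy snippet below is illustrative only and not part of the original answer key; the sample values of a, b, c, d are my own choices.

```python
import numpy as np

# (i): det A = d - c + b - a, checked at sample values
a, b, c, d = 2.0, 3.0, 5.0, 7.0
A = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [a, b, c, d]])
assert np.isclose(np.linalg.det(A), d - c + b - a)

# (iii): when d - c + b - a = 0, rank A = 3 and (-1, 1, -1, 1) spans the nullspace
B = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 1, 1]])          # last row gives 1 - 1 + 1 - 1 = 0
assert np.linalg.matrix_rank(B) == 3
assert np.allclose(B @ np.array([-1, 1, -1, 1]), 0)

# (iv): if a + b + c + d = 2, then (1, 1, 1, 1) is an eigenvector with eigenvalue 2
C = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0.5, 0.5, 0.5, 0.5]])
assert np.allclose(C @ np.ones(4), 2 * np.ones(4))
```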
Question 2a
[12 marks]
Let S = {(1, 1, 2, 0), (2, 2, 4, 0), (0, 0, 1, 3), (1, 1, 3, 3), (1, 1, 1, −3)} and V = span(S). (i) (4 marks) Find a basis S ′ for V such that S ′ ⊆ S and write down dim V .
(ii) (4 marks) Is V = span{(2, 2, 5, 3), (2, 2, 3, −3), (1, 1, 0, −6)}? Justify your answer.
(iii) (4 marks) Let W = {(x, y, z, w) | x − y + z − w = 0}. Find W ∩ V. Give your answer as a linear span.

(i) Let B be the matrix whose columns are the vectors of S:

B = [ 1 2 0 1  1 ]  GJ  [ 1 2 0 1  1 ]
    [ 1 2 0 1  1 ]  −→  [ 0 0 1 1 −1 ]
    [ 2 4 1 3  1 ]      [ 0 0 0 0  0 ]
    [ 0 0 3 3 −3 ]      [ 0 0 0 0  0 ]

So columns 1 and 3 of B are the pivot columns, and the corresponding vectors of S are linearly independent.

Hence S′ = {(1, 1, 2, 0), (0, 0, 1, 3)} forms a basis for V and dim V = 2.

(ii) Stack the basis vectors of S′ and the three new vectors column-wise as an augmented matrix:

[ 1 0 | 2  2  1 ]  GJ  [ 1 0 | 2  2  1 ]
[ 1 0 | 2  2  1 ]  −→  [ 0 1 | 1 −1 −2 ]
[ 2 1 | 5  3  0 ]      [ 0 0 | 0  0  0 ]
[ 0 3 | 3 −3 −6 ]      [ 0 0 | 0  0  0 ]

Each column on the right represents a consistent system, hence

span{(2, 2, 5, 3), (2, 2, 3, −3), (1, 1, 0, −6)} ⊆ V.   (1)

Flipping the two sets of vectors around:

[ 2  2  1 | 1 0 ]  GJ  [ 1 0 −3/4 | 1/4  1/2 ]
[ 2  2  1 | 1 0 ]  −→  [ 0 1  5/4 | 1/4 −1/2 ]
[ 5  3  0 | 2 1 ]      [ 0 0   0  |  0    0  ]
[ 3 −3 −6 | 0 3 ]      [ 0 0   0  |  0    0  ]

This again represents a consistent system, hence

V ⊆ span{(2, 2, 5, 3), (2, 2, 3, −3), (1, 1, 0, −6)}.   (2)

By (1) and (2), we get V = span{(2, 2, 5, 3), (2, 2, 3, −3), (1, 1, 0, −6)}.

(iii) Observe that the vector (1, 1, 3, 3) ∈ V satisfies the equation x − y + z − w = 0 and hence it belongs to W.
Hence W ∩ V is non-trivial and this implies dim W ∩ V ≥ 1.
On the other hand, the vector (1, 1, 2, 0) ∈ V does not satisfy the equation x − y + z − w = 0 and hence it does not belong to W.
Hence W ∩ V is a proper subset of V and this implies dim W ∩ V < dim V = 2. So we conclude that dim W ∩ V = 1.
Since (1, 1, 3, 3) ∈ W ∩ V , we have W ∩ V = span{(1, 1, 3, 3)}.
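A quick numerical cross-check of Question 2a with NumPy (illustrative only, not part of the original solution):

```python
import numpy as np

# rows of S span V; rows of T are the three vectors from part (ii)
S = np.array([[1, 1, 2, 0], [2, 2, 4, 0], [0, 0, 1, 3],
              [1, 1, 3, 3], [1, 1, 1, -3]], dtype=float)
T = np.array([[2, 2, 5, 3], [2, 2, 3, -3], [1, 1, 0, -6]], dtype=float)

assert np.linalg.matrix_rank(S) == 2                   # dim V = 2
assert np.linalg.matrix_rank(T) == 2
assert np.linalg.matrix_rank(np.vstack([S, T])) == 2   # same span, so V = span(T)

# (iii): (1, 1, 3, 3) lies in V and satisfies x - y + z - w = 0
x, y, z, w = 1, 1, 3, 3
assert x - y + z - w == 0
assert np.linalg.matrix_rank(np.vstack([S, [[1, 1, 3, 3]]])) == 2
```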
Question 2b
[8 marks]
Let W be a subspace of Rn and W ⊥ = {w ∈ Rn | w · v = 0 for all v ∈ W }. (i) (2 marks) Show that W ∩ W ⊥ = {0}.
(ii) (6 marks) Show that every vector v ∈ Rn can be written uniquely as v = v1 + v2 where v1 ∈ W and v2 ∈ W ⊥ .
(You may assume in part (ii) that W and W⊥ are the row space and nullspace of a certain matrix.)

(i) Let x ∈ W ∩ W⊥.
Then x ∈ W and x ∈ W ⊥ .
So x · x = 0, which forces x = 0. Hence we conclude that W ∩ W⊥ = {0}.

(ii) Let {r1, r2, . . . , rk} be a basis for W and let A be the k × n matrix whose i-th row is ri. Then W is the row space of A and W⊥ is the nullspace of A. Let {s1, s2, . . . , sh} be a basis for W⊥.
Then by dimension theorem, k + h = rank(A) + nullity(A) = n. To show that {r1 , r2 , . . . , rk , s1 , s2 , . . . , sh } is a basis for Rn , we just need to show the set is linearly independent:
a1 r1 + · · · + ak rk + b1 s1 + · · · + bh sh = 0.

This can be rewritten as

a1 r1 + · · · + ak rk = −b1 s1 − · · · − bh sh.   (∗)

Since the LHS of (∗) belongs to W and the RHS of (∗) belongs to W⊥, both sides belong to W ∩ W⊥ = {0}. Hence a1 r1 + · · · + ak rk = 0 implies a1 = · · · = ak = 0
and b1 s1 + · · · + bh sh = 0 implies b1 = · · · = bh = 0.
Therefore we conclude that {r1, . . . , rk, s1, . . . , sh} is linearly independent, and since k + h = n, it is a basis for Rn.

For any v ∈ Rn, write v = c1 r1 + · · · + ck rk + d1 s1 + · · · + dh sh = v1 + v2, where v1 = c1 r1 + · · · + ck rk ∈ W and v2 = d1 s1 + · · · + dh sh ∈ W⊥.

Furthermore, the decomposition v = v1 + v2 is unique. Suppose v = u1 + u2 where u1 ∈ W and u2 ∈ W⊥. Then v1 + v2 = u1 + u2, so v1 − u1 = u2 − v2.   (∗∗)
Like before, LHS of (∗∗) belongs to W and RHS of (∗∗) belongs to W ⊥ . So v1 − u1 = 0 ⇒ v1 = u1 and u2 − v2 = 0 ⇒ v2 = u2 .
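The decomposition in Question 2b(ii) can be illustrated numerically. The subspace W below is my own example, and the formula P = A^T (A A^T)^{-1} A is the standard orthogonal projection onto the row space of A:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])    # rows form a basis for W in R^4

# orthogonal projection onto W = row space of A
P = A.T @ np.linalg.inv(A @ A.T) @ A

v = np.array([3.0, 1.0, 4.0, 1.0])
v1 = P @ v                # component in W
v2 = v - v1               # component in W-perp

assert np.allclose(v1 + v2, v)
assert np.allclose(A @ v2, 0)        # v2 lies in the nullspace of A, i.e. W-perp
assert np.linalg.matrix_rank(np.vstack([A, v1])) == 2   # v1 lies in W
```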
Question 3a
[14 marks]

Let

A = [ 1 0 0 1 ]           [  1 ]
    [ 1 2 0 0 ]   and b = [ −1 ]
    [ 1 2 3 4 ]           [ −1 ]
    [ 1 2 3 4 ]           [  1 ]

(i) (4 marks) Let S = {u1, u2, u3} where

u1 = (1, 1, 1, 1)^T,  u2 = (−3, 1, 1, 1)^T,  u3 = (0, −2, 1, 1)^T.

Show that S is an orthogonal basis for the column space V of A.

(ii) (2 marks) Normalise S to get an orthonormal basis T = {v1, v2, v3} for V.
(iii) (4 marks) Find the least squares solutions of Ax = b.
(iv) (4 marks) Extend the basis T in part (ii) to an orthonormal basis T ′ = {v1 , v2 , v3 , v4 } for R4 without using Gram-Schmidt. (i) Direct checking: u1 · u2 = 0, u1 · u3 = 0, u2 · u3 = 0.
So S is an orthogonal set and hence it is linearly independent. Denote the four columns of A by c1 , c2 , c3 , c4 . Then we have u1 = c1 , u2 = −3c1 + 2c2 , u3 = −c2 + c3 . Hence S ⊆ V (the column space of A).
Check that rank(A) = 3, so dim V = 3. Hence S is an orthogonal basis for V.

(ii) v1 = (1/2)(1, 1, 1, 1)^T, v2 = (1/√12)(−3, 1, 1, 1)^T, v3 = (1/√6)(0, −2, 1, 1)^T.

(iii) We solve the normal equations A^T A x = A^T b, where

A^T A = [ 4  6  6  9 ]               [  0 ]
        [ 6 12 12 16 ]   and A^T b = [ −2 ]
        [ 6 12 18 24 ]               [  0 ]
        [ 9 16 24 33 ]               [  1 ]

( A^T A | A^T b )  GJ  [ 1 0 0  1   |  1  ]
                   −→  [ 0 1 0 −1/2 | −1  ]
                       [ 0 0 1  4/3 | 1/3 ]
                       [ 0 0 0  0   |  0  ]
By back substitution, we get the general solution: w = t, z = 1/3 − (4/3)t, y = −1 + (1/2)t, x = 1 − t.
So the least squares solutions of Ax = b are

[ x ]   [ 1 − t        ]
[ y ] = [ −1 + (1/2)t  ]
[ z ]   [ 1/3 − (4/3)t ]
[ w ]   [ t            ]

(iv) Let v = (1, −1, 1/3, 0)^T be one of the least squares solutions in (iii) (taking t = 0).

Then p = Av = (1, −1, 0, 0)^T is the projection of b onto V.

Hence p − b = (0, 0, 1, −1)^T is orthogonal to V, and hence to v1, v2, v3.

So we can take v4 = ±(1/√2)(0, 0, 1, −1)^T.
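A numerical check of Question 3a with NumPy (illustrative only, not part of the original key):

```python
import numpy as np

A = np.array([[1, 0, 0, 1],
              [1, 2, 0, 0],
              [1, 2, 3, 4],
              [1, 2, 3, 4]], dtype=float)
b = np.array([1.0, -1.0, -1.0, 1.0])

u1 = np.array([1.0, 1.0, 1.0, 1.0])
u2 = np.array([-3.0, 1.0, 1.0, 1.0])
u3 = np.array([0.0, -2.0, 1.0, 1.0])

# (i): pairwise orthogonal and contained in the column space of A
for x, y in [(u1, u2), (u1, u3), (u2, u3)]:
    assert np.isclose(x @ y, 0)
assert np.allclose(u2, -3 * A[:, 0] + 2 * A[:, 1])   # u2 = -3c1 + 2c2
assert np.allclose(u3, -A[:, 1] + A[:, 2])           # u3 = -c2 + c3
assert np.linalg.matrix_rank(A) == 3

# (iii): the t = 0 least squares solution satisfies the normal equations
x_hat = np.array([1.0, -1.0, 1/3, 0.0])
assert np.allclose(A.T @ A @ x_hat, A.T @ b)

# (iv): p - b is orthogonal to the column space of A
p = A @ x_hat
assert np.allclose(A.T @ (p - b), 0)
```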
Question 3b
[6 marks]
Let S = {u1, u2, . . . , um} and T = {v1, v2, . . . , vm} be two orthonormal bases for a proper subspace V of Rn. Let C = ( u1 u2 · · · um ) and D = ( v1 v2 · · · vm ) be the matrices formed using the basis vectors of S and T as their columns respectively. Determine whether the following are true or false. Justify your answers.

(i) (2 marks) C and D are orthogonal matrices.

(ii) (2 marks) If the reduced row echelon form of (D | C) is given by (I | P), then P is the transition matrix from S to T.

(iii) (2 marks) C^T D is the transition matrix from T to S.
(i) False. Since V is a proper subspace of Rn, we have m < n, so C and D are n × m non-square matrices and hence cannot be orthogonal matrices.
(ii) False. The size of P is n × m, so it is a non-square matrix and hence cannot be a transition matrix.
(iii) True.

C^T D = [ u1^T ]                      [ u1·v1  u1·v2  · · ·  u1·vm ]
        [ u2^T ] ( v1 v2 · · · vm ) = [ u2·v1  u2·v2  · · ·  u2·vm ]
        [  ⋮   ]                      [   ⋮      ⋮     ⋱      ⋮    ]
        [ um^T ]                      [ um·v1  um·v2  · · ·  um·vm ]
which is the transition matrix from T to S .
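A small numerical sketch of part (iii), with two orthonormal bases of a 2-dimensional subspace of R3 chosen for illustration (the concrete matrices are my own, not from the key):

```python
import numpy as np

C = np.array([[1, 0],
              [0, 1],
              [0, 0]], dtype=float)      # basis S = {e1, e2} of the xy-plane
r = 1 / np.sqrt(2)
D = np.array([[r, -r],
              [r,  r],
              [0,  0]])                  # basis T: S rotated by 45 degrees

P = C.T @ D                              # claimed transition matrix from T to S

# for any w in V, P converts T-coordinates into S-coordinates
coords_T = np.array([2.0, 3.0])          # arbitrary T-coordinates
w = D @ coords_T
assert np.allclose(C @ (P @ coords_T), w)

# C and D are not square (so not orthogonal matrices), but have orthonormal columns
assert C.shape == (3, 2)
assert np.allclose(C.T @ C, np.eye(2))
assert np.allclose(D.T @ D, np.eye(2))
```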
Question 4a

[12 marks]

Let

C = [ 1 0 0 1 ]
    [ 0 2 2 0 ]
    [ 0 2 2 0 ]
    [ 1 0 0 1 ]

(i) (4 marks) Find the characteristic polynomial and all the eigenvalues of C. Show your working.

(ii) (4 marks) Find a basis for each eigenspace of C. Show your working.

(iii) (4 marks) Find a matrix P that orthogonally diagonalizes C and write down the corresponding diagonal matrix D. Explain how your answers are derived.

(i) Expanding det(xI − C) along the first row:

det(xI − C) = | x−1   0    0   −1  |
              |  0   x−2  −2    0  |
              |  0   −2   x−2   0  |
              | −1    0    0   x−1 |

            = (x−1) | x−2  −2    0  |   | 0  x−2  −2  |
                    | −2   x−2   0  | + | 0  −2   x−2 |
                    |  0    0   x−1 |   | −1  0    0  |

            = (x−1)²[(x−2)² − 4] − [(x−2)² − 4]
            = [(x−1)² − 1][(x−2)² − 4]
            = (x² − 2x)(x² − 4x)
            = x²(x − 2)(x − 4).

So the eigenvalues of C are 0, 2 and 4.
(ii) For λ = 0:

( 0I − C | 0 ) = [ −1  0   0  −1 | 0 ]  GJ  [ 1 0 0 1 | 0 ]
                 [  0 −2  −2   0 | 0 ]  −→  [ 0 1 1 0 | 0 ]
                 [  0 −2  −2   0 | 0 ]      [ 0 0 0 0 | 0 ]
                 [ −1  0   0  −1 | 0 ]      [ 0 0 0 0 | 0 ]

By back substitution, we get the general solution: w = t, z = s, y = −s, x = −t.

So the eigenspace E0 for λ = 0 is {(−t, −s, s, t)^T | s, t ∈ R} and a basis for E0 is

{ (−1, 0, 0, 1)^T, (0, −1, 1, 0)^T }.

For λ = 2:
( 2I − C | 0 ) = [  1  0   0  −1 | 0 ]  GJ  [ 1 0 0 −1 | 0 ]
                 [  0  0  −2   0 | 0 ]  −→  [ 0 1 0  0 | 0 ]
                 [  0 −2   0   0 | 0 ]      [ 0 0 1  0 | 0 ]
                 [ −1  0   0   1 | 0 ]      [ 0 0 0  0 | 0 ]

By back substitution, we get the general solution: w = t, z = 0, y = 0, x = t.

So the eigenspace E2 for λ = 2 is {(t, 0, 0, t)^T | t ∈ R} and a basis for E2 is { (1, 0, 0, 1)^T }.

For λ = 4:

( 4I − C | 0 ) = [  3  0   0  −1 | 0 ]  GJ  [ 1 0  0 0 | 0 ]
                 [  0  2  −2   0 | 0 ]  −→  [ 0 1 −1 0 | 0 ]
                 [  0 −2   2   0 | 0 ]      [ 0 0  0 1 | 0 ]
                 [ −1  0   0   3 | 0 ]      [ 0 0  0 0 | 0 ]

By back substitution, we get the general solution: w = 0, z = s, y = s, x = 0.

So the eigenspace E4 for λ = 4 is {(0, s, s, 0)^T | s ∈ R} and a basis for E4 is { (0, 1, 1, 0)^T }.
(iii) The four eigenvectors in the bases for the eigenspaces,

(−1, 0, 0, 1)^T, (0, −1, 1, 0)^T, (1, 0, 0, 1)^T, (0, 1, 1, 0)^T,

are pairwise orthogonal. We normalise these vectors to get an orthogonal matrix

P = (1/√2) [ −1  0 1 0 ]
           [  0 −1 0 1 ]
           [  0  1 0 1 ]
           [  1  0 1 0 ]

that orthogonally diagonalizes C, giving the diagonal matrix

D = P^T C P = [ 0 0 0 0 ]
              [ 0 0 0 0 ]
              [ 0 0 2 0 ]
              [ 0 0 0 4 ]
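As an illustrative numerical check (not part of the original answer key), NumPy confirms the spectrum and the orthogonal diagonalization:

```python
import numpy as np

C = np.array([[1, 0, 0, 1],
              [0, 2, 2, 0],
              [0, 2, 2, 0],
              [1, 0, 0, 1]], dtype=float)

# eigenvalues are 0 (twice), 2 and 4
assert np.allclose(sorted(np.linalg.eigvalsh(C)), [0, 0, 2, 4])

# P orthogonally diagonalizes C: P^T P = I and P^T C P = D
P = np.array([[-1, 0, 1, 0],
              [0, -1, 0, 1],
              [0,  1, 0, 1],
              [1,  0, 1, 0]]) / np.sqrt(2)
assert np.allclose(P.T @ P, np.eye(4))
assert np.allclose(P.T @ C @ P, np.diag([0.0, 0.0, 2.0, 4.0]))
```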
Question 4b
[8 marks]
Let M be an n × n matrix such that M^2 = M and both 0 and 1 are eigenvalues of M.

(i) (4 marks) Show that the column space of M is the eigenspace E1 associated to eigenvalue 1.

(ii) (4 marks) Show that M is diagonalizable.

(i) Let v ∈ E1 (the eigenspace associated to eigenvalue 1).
Then M v = v which implies v belongs to the column space of M . Hence E1 ⊆ column space of M (1).
Let v ∈ column space of M . Then v = M w for some w ∈ Rn .
So v = M 2 w = M (M w) = M v. This implies v ∈ E1 .
Hence column space of M ⊆ E1 (2).
By (1) and (2), we conclude that column space of M = E1.

(ii) From (i), dim E1 = dim(column space of M) = rank M. On the other hand, the eigenspace E0 associated to eigenvalue 0 is the nullspace of M, so dim E0 = dim(nullspace of M) = nullity M. By the Dimension Theorem, dim E1 + dim E0 = rank M + nullity M = n. This implies there are n linearly independent eigenvectors of M, and hence M is diagonalizable.
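A small numerical illustration of this result (the idempotent matrix below is my own example, not from the key):

```python
import numpy as np

# M projects R^2 onto the line y = x, so M^2 = M with eigenvalues 0 and 1
M = np.array([[0.5, 0.5],
              [0.5, 0.5]])

assert np.allclose(M @ M, M)                       # M is idempotent

vals = np.linalg.eigvalsh(M)
assert np.allclose(sorted(vals), [0, 1])           # both eigenvalues occur

# column space of M equals E1: M fixes every vector in its column space
w = M @ np.array([3.0, -1.0])                      # arbitrary column-space vector
assert np.allclose(M @ w, w)                       # M w = w, i.e. w in E1
```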
Question 5a
[12 marks]
Let T : R3 → R3 be a linear transformation such that

T (0, 1, 2)^T = (0, 1, 2)^T,  T (1, 0, 2)^T = (2, 0, 4)^T,  T (1, 2, 0)^T = (0, 2, 4)^T.

(i) (3 marks) Find the standard matrix of T. Show how your answer is derived.

(ii) (3 marks) Find the kernel of T. Give your answer as a linear span.

(iii) (3 marks) Find the largest possible subspace V of R3 such that every vector v ∈ V maps to itself under T. Explain how your answer is derived.

(iv) (3 marks) Is there any vector v ∈ R3 such that T(v) = (1, 2, 0)^T? Justify your answer.

(i) Let A be the standard matrix of T. By stacking the three conditions column-wise, we have

A [ 0 1 1 ]   [ 0 2 0 ]
  [ 1 0 2 ] = [ 1 0 2 ]
  [ 2 2 0 ]   [ 2 4 4 ]

⇒ A = [ 0 2 0 ] [ 0 1 1 ]^(−1)          [ 4 −2 1 ]
      [ 1 0 2 ] [ 1 0 2 ]       = (1/3) [ 0  3 0 ]
      [ 2 4 4 ] [ 2 2 0 ]               [ 8  2 2 ]

(ii) ker T = nullspace of A. Since scaling does not change the nullspace, we row reduce 3A:

[ 4 −2 1 | 0 ]  GJ  [ 1 0 1/4 | 0 ]
[ 0  3 0 | 0 ]  −→  [ 0 1  0  | 0 ]
[ 8  2 2 | 0 ]      [ 0 0  0  | 0 ]

By back substitution, we get the general solution: z = t, y = 0, x = −(1/4)t.

So ker T = span{(−1/4, 0, 1)}.
(iii) The largest possible subspace V of R3 such that T (v) = v for all v ∈ V is E1 , the eigenspace of A associated to eigenvalue 1.
From the given conditions as well as part (ii), we know A has three distinct eigenvalues 1, 2, 0. So each eigenspace has dimension 1.

From A (0, 1, 2)^T = (0, 1, 2)^T, we know that E1 = span{(0, 1, 2)}. Hence V = span{(0, 1, 2)}.
(iv) No. Having T(v) = (1, 2, 0)^T for some v ∈ R3 is the same as saying (1, 2, 0)^T ∈ column space of A. Row reducing the augmented system ( 3A | 3(1, 2, 0)^T ):

[ 4 −2 1 | 3 ]  GJ  [ 1 0 1/4 | 0 ]
[ 0  3 0 | 6 ]  −→  [ 0 1  0  | 0 ]
[ 8  2 2 | 0 ]      [ 0 0  0  | 1 ]

Since the above system is inconsistent, we conclude that T(v) ≠ (1, 2, 0)^T for every vector v ∈ R3.
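An illustrative NumPy check of the answers above (not part of the original key):

```python
import numpy as np

A = np.array([[4, -2, 1],
              [0,  3, 0],
              [8,  2, 2]]) / 3.0       # standard matrix of T

# (i): A maps the three given vectors correctly
assert np.allclose(A @ [0, 1, 2], [0, 1, 2])
assert np.allclose(A @ [1, 0, 2], [2, 0, 4])
assert np.allclose(A @ [1, 2, 0], [0, 2, 4])

# (ii): ker T = span{(-1/4, 0, 1)}
assert np.allclose(A @ [-0.25, 0, 1], 0)

# (iv): (1, 2, 0) is not in the column space of A: appending it raises the rank
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(np.column_stack([A, [1, 2, 0]])) == 3
```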
Question 5b
[8 marks]
Let T : Rn → Rn be a linear transformation and let R(T) be the range of T. Denote T^1 = T and T^{k+1} = T ∘ T^k for all integers k ≥ 1.

(i) (2 marks) Show that R(T^{k+1}) ⊆ R(T^k) for all k ≥ 1.

(ii) (6 marks) Suppose T^m is the zero transformation for some m > n. Show that T^n must be the zero transformation. (Note that T itself need not be the zero transformation.)

Hint: Show that if R(T^k) = R(T^{k+1}) for some k ≥ 1, then R(T^k) = R(T^h) for all h ≥ k.

(i) For any k ≥ 1, let v ∈ R(T^{k+1}). So v = T^{k+1}(w) = T^k(T(w)) for some w ∈ Rn. Hence v ∈ R(T^k).
This implies R(T^{k+1}) ⊆ R(T^k).

(ii) First of all, we show that if R(T^k) = R(T^{k+1}), then R(T^k) = R(T^{k+2}).

Let v ∈ R(T^k). Then v ∈ R(T^{k+1}), so v = T^{k+1}(w) = T(T^k(w)) for some w ∈ Rn. Let u = T^k(w) ∈ R(T^k) = R(T^{k+1}). So u = T^{k+1}(x) for some x ∈ Rn. Then v = T(u) = T(T^{k+1}(x)) = T^{k+2}(x) ∈ R(T^{k+2}). Hence we have R(T^k) ⊆ R(T^{k+2}).

On the other hand, by (i), R(T^{k+2}) ⊆ R(T^{k+1}) = R(T^k).

So we conclude R(T^k) = R(T^{k+2}).
Inductively, if we have R(T^k) = R(T^{k+1}), then R(T^k) = R(T^h) for all h ≥ k.

Now let p be the smallest integer such that R(T^p) = R(T^{p+1}); such a p exists since R(T^m) = R(T^{m+1}) = {0}. Then

R(T) ⊋ R(T^2) ⊋ · · · ⊋ R(T^p) = R(T^{p+1}) = · · · = R(T^m) = {0}.

Taking dimensions, and using the fact that T^m is the zero transformation,

rank(T) > rank(T^2) > · · · > rank(T^p) = rank(T^{p+1}) = · · · = rank(T^m) = 0.

Moreover, T is not invertible (otherwise T^m would be invertible and could not be zero), so rank(T) ≤ n − 1. Since the ranks above decrease by at least 1 at each of the p − 1 steps from rank(T) down to rank(T^p) = 0, we get p − 1 ≤ rank(T) ≤ n − 1, that is, p ≤ n.

Consequently, rank(T^n) = 0 ⇒ R(T^n) = {0} ⇒ T^n is the zero transformation.
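The strictly decreasing rank chain in the argument above can be seen concretely with a nilpotent example (the shift matrix below is my own choice, not from the key):

```python
import numpy as np

# T shifts coordinates in R^3: T e2 = e1, T e3 = e2, T e1 = 0, so T^3 = 0
T = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(T, k)) for k in range(1, 5)]
assert ranks == [2, 1, 0, 0]          # strictly decreasing until it reaches 0

# T^n is the zero transformation (n = 3), even though T itself is not
assert np.any(T != 0)
assert np.all(np.linalg.matrix_power(T, 3) == 0)
```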