1. Find all $2 \times 2$ matrices $A$ which satisfy the equation $A^2 = 2I$.

Solution. Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Then the equation above becomes
\[
\begin{pmatrix} a^2 + bc & b(a+d) \\ c(a+d) & d^2 + bc \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}.
\]
The off-diagonal entries force either $b = c = 0$ or $a = -d$, and the diagonal entries give $a, d = \pm\sqrt{2 - bc}$. If $b = c = 0$, then the matrix has the form
\[
\begin{pmatrix} \pm\sqrt{2} & 0 \\ 0 & \pm\sqrt{2} \end{pmatrix}.
\]
Otherwise $a = -d$, and we need $bc \le 2$ to keep the matrix real, so the matrix has the form
\[
\begin{pmatrix} \pm\sqrt{2 - bc} & b \\ c & \mp\sqrt{2 - bc} \end{pmatrix}.
\]

2. Compute the inverse and a permuted $LU$ decomposition of each of the following matrices. Determine the rank and dimension of the kernel as well.
\[
\begin{pmatrix} 0 & 1 & 2 \\ 0 & 2 & 3 \\ 1 & -1 & 0 \end{pmatrix}, \qquad
\begin{pmatrix} -1 & 2 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \qquad
\begin{pmatrix} 5 & -1 & 2 \\ 3 & 2 & 1 \\ 0 & 2 & 0 \end{pmatrix}
\]

Solution. Note that there is more than one way to compute a permuted $LU$ decomposition.

(a)
\[
\begin{pmatrix} 0 & 1 & 2 \\ 0 & 2 & 3 \\ 1 & -1 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} -3 & 2 & 1 \\ -3 & 2 & 0 \\ 2 & -1 & 0 \end{pmatrix}
\]
\[
\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 & 2 \\ 0 & 2 & 3 \\ 1 & -1 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1/2 & 1 \end{pmatrix}
\begin{pmatrix} 1 & -1 & 0 \\ 0 & 2 & 3 \\ 0 & 0 & 1/2 \end{pmatrix}
\]

(b)
\[
\begin{pmatrix} -1 & 2 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} -1/3 & 0 & 2/3 \\ 1/3 & 0 & 1/3 \\ 0 & 1 & 0 \end{pmatrix}
\]
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} -1 & 2 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} -1 & 2 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

(c)
\[
\begin{pmatrix} 5 & -1 & 2 \\ 3 & 2 & 1 \\ 0 & 2 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} -1 & 2 & -5/2 \\ 0 & 0 & 1/2 \\ 3 & -5 & 13/2 \end{pmatrix}
\]
\[
\begin{pmatrix} 5 & -1 & 2 \\ 3 & 2 & 1 \\ 0 & 2 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ 3/5 & 1 & 0 \\ 0 & 10/13 & 1 \end{pmatrix}
\begin{pmatrix} 5 & -1 & 2 \\ 0 & 13/5 & -1/5 \\ 0 & 0 & 2/13 \end{pmatrix}
\]

In each case the matrix is invertible, so the rank is 3 and the kernel has dimension 0.

3. Let $P_n$ be the vector space of polynomials of degree $\le n$. Find the dimension of this vector space.

Solution. The dimension of $P_n$ is $n + 1$. We can show that the polynomials $\{1, x, \ldots, x^n\}$ form a basis of $P_n$. First we argue that these monomials are independent. Suppose some linear combination of them were $0$, namely $c_0 + c_1 x + \cdots + c_n x^n = 0$. If some $c_i \neq 0$, then we would have an actual polynomial on our hands, which by definition is not the $0$ polynomial. So it must be that $c_i = 0$ for all $i$. (Kind of tautological, I know; that's just how monomials work: they never cancel with each other.) These functions span $P_n$ by definition: every polynomial of degree $n$ or less is a linear combination of $\{1, x, \ldots, x^n\}$. Thus they form a basis. To find the dimension of a vector space, we count the number of vectors in any basis. Therefore $\dim P_n = n + 1$.

4. (a) Let $A = \begin{pmatrix} -1 & 3 \\ 1 & 2 \end{pmatrix}$ and $x = \begin{pmatrix} x \\ y \end{pmatrix}$. Compute the expression $x^T A x$.
(b) Consider the polynomial in two variables $2x^2 + xy + 3y^2$. Find a matrix $B$ such that $2x^2 + xy + 3y^2 = x^T B x$.

(c) Show that every polynomial of the form $ax^2 + bxy + cy^2$ can be written in the form $x^T M x$ where $M$ is a symmetric matrix.

Solution.

(a) $-x^2 + 4xy + 2y^2$

(b) $B = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$

(c) The matrix $M$ has the form
\[
\begin{pmatrix} a & b/2 \\ b/2 & c \end{pmatrix}.
\]

5. Let $V$ be a vector space. Let $U$ and $W$ be subspaces of $V$.

(a) Show that the intersection $U \cap W$ is a subspace.

(b) Let $V = \mathbb{R}^4$. Find subspaces $U$ and $W$ such that $\dim U \cap W = 0$.

Solution.

(a) First, $U \cap W$ is nonempty since $0 \in U$ and $0 \in W$. Let $v, w \in U \cap W$; in particular both $v$ and $w$ lie in both $U$ and $W$. Then $v + w \in U$ since $U$ is a subspace, and similarly $v + w \in W$; thus $v + w \in U \cap W$. Finally, if $c \in \mathbb{R}$, then $cv \in U$ since $U$ is a subspace, and similarly $cv \in W$; thus $cv \in U \cap W$.

(b) Let $U = \mathrm{span}\{(1,0,0,0)^T, (0,1,0,0)^T\}$ and $W = \mathrm{span}\{(0,0,1,0)^T, (0,0,0,1)^T\}$. These subspaces have trivial intersection since the four vectors together are linearly independent.

6. Define $M_{m \times n}(\mathbb{R})$ to be the set of $m \times n$ matrices with real entries.

(a) Show that this is a vector space under the operations $A + B$ and $cA$, where
\[
(A + B)_{ij} = A_{ij} + B_{ij}, \qquad (cA)_{ij} = c(A)_{ij}.
\]
What is the dimension of $M_{m \times n}(\mathbb{R})$?

(b) Recall that the trace of a square matrix $A$ is the sum of its diagonal elements:
\[
\mathrm{tr}\, A = \sum_i a_{i,i} = a_{1,1} + \cdots + a_{n,n}.
\]
Show that the set of matrices $A$ with $\mathrm{tr}\, A = 0$ is a subspace of the vector space $M_{n \times n}(\mathbb{R})$. (Optional Challenge: What's the dimension of the subspace of trace 0 matrices?)

Solution.

(a) Proving the vector space properties for $M_{m \times n}(\mathbb{R})$ is essentially the same as for $\mathbb{R}^n$; I'll highlight the important parts. The role of the zero vector in $M_{m \times n}(\mathbb{R})$ is played by the matrix of all 0's, which I will also denote by $0$. Indeed, entrywise addition by $0$ is trivial: $A + 0 = 0 + A = A$. Similarly, the negative vector $-A$ is found by negating all the entries of $A$, i.e. $(-A)_{ij} = -(A)_{ij}$. Then it is clear that $A + (-A) = 0$.
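As a quick sanity check of these entrywise operations (a sketch assuming the NumPy library, which is not part of the original solutions; the example matrix is arbitrary):

```python
import numpy as np

# An arbitrary element of M_{2 x 3}(R).
A = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -5.0]])
Z = np.zeros_like(A)          # the zero "vector": the all-zeros matrix

# A + 0 = 0 + A = A
assert np.array_equal(A + Z, A) and np.array_equal(Z + A, A)

# The additive inverse is the entrywise negation: A + (-A) = 0
assert np.array_equal(A + (-A), Z)

# Scalar multiplication acts entrywise: (cA)_ij = c * A_ij
c = 2.5
assert np.array_equal(c * A, np.array([[2.5, -5.0,  7.5],
                                       [0.0, 10.0, -12.5]]))
```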
The rest of the properties follow as you would expect. (Not quite as unobvious as 2.1.2.)

As for the dimension of $M_{m \times n}(\mathbb{R})$, we need to find a basis of this vector space. The intuition here is that it's basically the same as $\mathbb{R}^{mn}$, except instead of writing a giant $mn$-sized column vector, we write the entries in a grid. So we claim that $\dim M_{m \times n}(\mathbb{R}) = mn$. To show this we need to find a basis and count the number of elements in it.

We can find a basis by making "standard basis matrices" just like we made standard basis vectors. Remember $e_i$ was a vector of all 0's except a 1 in the $i$th spot. Now we do the same with matrices. Define a matrix $E^{(ij)}$ by letting $E^{(ij)}_{ij} = 1$, with all other entries 0. Here are the standard basis matrices for the 2 by 2 case, for example:
\[
E^{(11)} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
E^{(12)} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
E^{(21)} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
E^{(22)} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
\]
The set $\{E^{(ij)}\}$ forms a basis of $M_{m \times n}(\mathbb{R})$. They are linearly independent since they all have 1's in different spots, and they span since, given any matrix $A$, we can write
\[
A = \sum_{ij} (A_{ij}) E^{(ij)}.
\]
For example,
\[
\begin{pmatrix} -2 & 3 \\ 1 & 5 \end{pmatrix} = -2E^{(11)} + 3E^{(12)} + 1E^{(21)} + 5E^{(22)}.
\]
Since the standard basis matrices are independent and span, they form a basis. Counting them up, there are $mn$ basis vectors, so $\dim M_{m \times n}(\mathbb{R}) = mn$ as desired.

(b) First, the zero matrix $0$ has trace 0, so it satisfies the first property. Let $A$ and $B$ be matrices with $\mathrm{tr}(A) = 0 = \mathrm{tr}(B)$. For additive closure, note that $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$, since matrices add componentwise; therefore $\mathrm{tr}(A + B) = 0 + 0 = 0$. For closure under scalar multiplication, note that
\[
\mathrm{tr}(cA) = \sum_i c(A)_{i,i} = c\, \mathrm{tr}(A) = 0.
\]
Thus the trace zero matrices form a subspace.

Optional Challenge solution: For the dimension, we can be really clever and apply rank-nullity. Pretend that $M_{n \times n}(\mathbb{R})$ is just a copy of $\mathbb{R}^{n^2}$ by writing all the entries of the matrix in one big column, say the first row on top of the second row, etc. Like this:
\[
A \mapsto (a_{11}, a_{12}, \ldots, a_{1n}, a_{21}, \ldots, a_{nn})^T.
\]
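This identification of $M_{n \times n}(\mathbb{R})$ with $\mathbb{R}^{n^2}$, and the "trace matrix" $T$ described below, can be sketched numerically (a sketch assuming NumPy, which is not part of the original solutions; the $3 \times 3$ matrix is arbitrary):

```python
import numpy as np

n = 3
A = np.arange(1.0, 10.0).reshape(n, n)   # an arbitrary 3 x 3 matrix

# Identify M_{n x n}(R) with R^{n^2}: write the rows in one big column.
vecA = A.reshape(n * n)

# The 1 x n^2 "trace matrix" T: a 1 in each slot corresponding to a_ii.
T = np.zeros((1, n * n))
for i in range(n):
    T[0, i * n + i] = 1.0

# Multiplying by T is the same as taking the trace.
assert (T @ vecA)[0] == np.trace(A)

# T has rank 1, so dim ker T = n^2 - rank T = n^2 - 1.
assert np.linalg.matrix_rank(T) == 1
print(n * n - np.linalg.matrix_rank(T))   # dimension of the trace-zero subspace: 8
```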
Now you can think of the trace as a matrix multiplication, as follows. Since $\mathrm{tr}(A) = a_{11} + \cdots + a_{nn}$, this looks like a matrix $T$ times the giant $n^2$-vector above. If you're a little clever, you can see this "trace matrix" $T$ will have a 1 in the entry corresponding to each $a_{ii}$ and 0's otherwise. To be explicit, the $3 \times 3$ case looks like this:
\[
\begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32} \\ a_{33} \end{pmatrix}
= a_{11} + a_{22} + a_{33} = \mathrm{tr}\, A.
\]
Taking the trace is the same as multiplying by that row matrix, which we can think of as the matrix representing the trace operation. We'll call this $1 \times n^2$ matrix $T$. So the set of trace 0 matrices, i.e. the matrices with $\mathrm{tr}(A) = TA = 0$, can be considered as $\ker T$! So all we have to do is compute the dimension of the kernel of this $1 \times n^2$ matrix. Since it is a $1 \times n^2$ matrix, the rank can be at most 1. In fact the rank is exactly 1, since $T$ has a nonzero column. (It's already in RREF, actually!) Since $\mathrm{rank}(T) = 1$,
\[
\dim \ker T = \text{number of columns} - \mathrm{rank}\, T = n^2 - 1.
\]
Therefore, the set of trace 0 matrices is $(n^2 - 1)$-dimensional.

7. (a) Determine whether the vector $(1, 2, 3, 4)^T$ is in the span of the vectors
\[
\begin{pmatrix} -1 \\ 1 \\ -2 \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ -1 \\ 2 \end{pmatrix}, \quad
\begin{pmatrix} 7 \\ -3 \\ 0 \\ 2 \end{pmatrix}.
\]

(b) What is the dimension of the span of these 3 vectors? Can the 3 vectors possibly form a basis of $\mathbb{R}^4$?

Solution. Put the three vectors in the first three columns of a matrix, and make $(1, 2, 3, 4)^T$ the last column. If the fourth column ends up free, then the last vector depends on the other three; if it ends up with a pivot, the fourth vector is independent of the other three. This matrix actually row reduces to the identity:
\[
\begin{pmatrix} -1 & 0 & 7 & 1 \\ 1 & 0 & -3 & 2 \\ -2 & -1 & 0 & 3 \\ 0 & 2 & 2 & 4 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]
Therefore the vector $(1, 2, 3, 4)^T$ is not in the span of the other 3. Since the first three columns all contain pivots, the three vectors are independent and their span is 3-dimensional. Furthermore, the first 3 vectors can't possibly be a basis of $\mathbb{R}^4$. First of all, all bases of $\mathbb{R}^4$ have 4 vectors in them. But additionally, we just showed that these 3 vectors don't span, so they can't be a basis.

8.
Consider the vector space $C^0(\mathbb{R})$ of continuous functions on $\mathbb{R}$. Show that the functions $f(x) = \cos(2x)$, $g(x) = \cos^2(x)$ and $h(x) = 1$ are linearly dependent in this vector space.

Solution. The double angle formula is $\cos(2x) = 2\cos^2(x) - 1$. Rewritten, this is a linear relationship between the functions,
\[
f(x) - 2g(x) + h(x) = 0,
\]
so they are dependent.

9. (a) Find the numbers $a$ such that the columns of the following matrix form a basis of $\mathbb{R}^3$.
\[
A = \begin{pmatrix} a & 1 & 2 \\ 0 & a & 1 \\ -1 & 2 & a \end{pmatrix}
\]

(b) For what $a$ is $\mathrm{rank}\, A = 1$? How about $\mathrm{rank}\, A = 2$?

Solution.

(a) By the main theorem for square matrices, $\det A = 0$ iff the columns of $A$ do not form a basis. Taking the determinant, we obtain $\det A = a^3 - 1 = 0$. The only real solution is $a = 1$. So when $a \neq 1$, the columns form a basis of $\mathbb{R}^3$.

(b) We know that $\mathrm{rank}\, A = 3$ when $a \neq 1$, so we just have to check the rank when $a = 1$. In this case, the matrix row reduces to
\[
\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix},
\]
which has rank 2 since it has 2 pivots. Alternatively, you could see that the first two columns of $A$ are independent, since neither is a multiple of the other. Either way, the rank is 1 for no $a \in \mathbb{R}$, and for $a = 1$ the rank is 2.

10. Show that two vectors $v_1, v_2$ in $\mathbb{R}^2$ form a basis when $v_1$ is not a multiple of $v_2$.

Solution. First, if $v_2 = cv_1$ for some scalar $c$, then $cv_1 + (-1)v_2 = 0$ is a nontrivial linear relation, so $v_1$ and $v_2$ are dependent and cannot form a basis. Conversely, suppose neither vector is a multiple of the other, and consider a relation $av_1 + bv_2 = 0$. If $b \neq 0$, then $v_2 = -(a/b)v_1$ would be a multiple of $v_1$, a contradiction; similarly $a \neq 0$ would make $v_1$ a multiple of $v_2$. So $a = b = 0$ and the vectors are independent. Since $\dim \mathbb{R}^2 = 2$, two independent vectors automatically span, so $v_1, v_2$ form a basis.

11. Suppose a matrix $M$ has 5 columns, labeled $v_1, \ldots, v_5$. Suppose that $M$ has the following RREF form.
\[
\begin{pmatrix}
1 & 0 & 3 & -1 & 0 \\
0 & 1 & -2 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]

(a) Find the rank of $M$.

(b) Which columns of $M$ form a basis of the image of $M$?

(c) If possible, write $v_3$, $v_4$, and $v_5$ in terms of the vectors before it.

(d) Find $\ker M$ and the nullity of $M$.

(e) How many independent rows does $M$ have?

(f) Is $M$ invertible?

Solution.
(a) $\mathrm{rank}(M) = $ number of leading 1's $= 3$.

(b) Since the 1st, 2nd, and 5th columns have leading 1's in the RREF, the columns $v_1, v_2, v_5$ form a basis of the image of $M$.

(c) From the RREF, $v_3 = 3v_1 + (-2)v_2$ and $v_4 = (-1)v_1 + 1v_2$. The vector $v_5$ is independent of the columns before it, so this is not possible for $v_5$.

(d) From the RREF, we know that there are 2 free columns. If we label the columns by the variables $x, y, z, w, u$, then $z, w$ are free. The rows of the RREF give the equations between the variables, namely $x = -3z + w$, $y = 2z - w$ and $u = 0$. Therefore vectors in $\ker M$ have the form
\[
\begin{pmatrix} x \\ y \\ z \\ w \\ u \end{pmatrix}
= \begin{pmatrix} -3z + w \\ 2z - w \\ z \\ w \\ 0 \end{pmatrix}
= \begin{pmatrix} -3 \\ 2 \\ 1 \\ 0 \\ 0 \end{pmatrix} z
+ \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \\ 0 \end{pmatrix} w
\]
and $\ker M = \mathrm{span}\{(-3, 2, 1, 0, 0)^T, (1, -1, 0, 1, 0)^T\}$. The nullity is the dimension of the kernel, in this case 2. Alternatively, we could have used rank-nullity, since we already knew the rank.

(e) Since $M$ has 3 independent columns, it also has 3 independent rows. Remember $\mathrm{rank}(A) = \mathrm{rank}(A^T)$.

(f) $M$ is not invertible, since $\mathrm{rank}(M) \neq 5$.

12. Find the solution sets to the following linear systems where possible. (If no solution, say "no solution".)

a) $\begin{pmatrix} 1 & -1 \\ -2 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$

b) $x + y + z = 0$, $\quad x - 2y + z = 1$, $\quad -x + y + z = 2$

c) $-x + y - z + w = 0$, $\quad x - y - z + w = 0$, $\quad y + 2z = 1$

Solution.

(a) The augmented matrix $\begin{pmatrix} 1 & -1 & -1 \\ -2 & 2 & 1 \end{pmatrix}$ has RREF form $\begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. So this system is inconsistent and has no solution. You can think of the constant vector $(-1, 1)^T$ as lying outside of the image of $\begin{pmatrix} 1 & -1 \\ -2 & 2 \end{pmatrix}$.

(b) $(x, y, z) = (-1, -1/3, 4/3)$

(c)
\[
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}
= \begin{pmatrix} -2 \\ -2 \\ 1 \\ 1 \end{pmatrix} w
+ \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}.
\]

13. Define $\mathbb{R}^\infty$ to be the set of all infinite sequences of real numbers.

(a) Show that the set $C$ of all convergent sequences is a subspace.

(b) Determine whether $C$ is finite or infinite dimensional.

Solution. First of all, the way I phrased the question doesn't tell you how $\mathbb{R}^\infty$ is a vector space. We can write a sequence as a tuple that just never ends,
\[
(a_1, a_2, \ldots).
\]
You can add these like vectors in $\mathbb{R}^n$: they add component-wise, and scalar multiply component-wise as well.
These operations satisfy the vector space axioms.

(a) Let $(a_i) = (a_1, a_2, \ldots)$ and $(b_i) = (b_1, b_2, \ldots)$ be convergent sequences, say $(a_i) \to a$ and $(b_i) \to b$. Recall that the sum of two convergent sequences is also convergent, with $(a_i + b_i) \to a + b$. Thus $C$ is closed under addition. Given a scalar $c$, it is clear that $c(a_i) = (ca_i) \to ca$, so $C$ is closed under scalar multiplication as well. The set $C$ is also nonempty (since $(0, 0, \ldots) \in C$), and therefore $C$ is a subspace of $\mathbb{R}^\infty$.

(b) This subspace is infinite dimensional. Assume for contradiction that there exists a finite basis $\{(x_i)^1, (x_i)^2, \ldots, (x_i)^n\}$, so that $\dim C = n$. Then by Theorem 2.31, any set of sequences $\{(y_i)^1, \ldots, (y_i)^k\}$ is linearly dependent when $k > n$. We can show that this leads to a contradiction by finding $k$ linearly independent convergent sequences for $k > n$.

Pick any number $k > n$. Let $(e^i_j)$ be the sequence defined by
\[
e^i_j = 0 \text{ for } i \neq j, \qquad e^i_i = 1.
\]
Here $i$ is not an exponent; it is an index. I'm just putting it where the exponent usually goes because there was already another index in the subscript. For example, $(e^1_j) = (1, 0, 0, \ldots)$ and $(e^3_j) = (0, 0, 1, 0, 0, \ldots)$. These are essentially the standard basis vectors, but now they are sequences instead.

First, note that $(e^i_j) \to 0$ as $j \to \infty$, for every $i$. This is true since if we let $N > i$, then for all $m > N$ we have $|e^i_m - 0| = 0 < \varepsilon$ for every $\varepsilon > 0$. So all of our "standard basis sequences" converge to zero, and $(e^i_j) \in C$.

Now consider the set of sequences $\{(e^1_j), (e^2_j), \ldots, (e^k_j)\}$, where $k > \dim C = n$ as you recall. By Theorem 2.31, this set of vectors should be dependent, since $k > \dim C$. But we can show that they are independent. For given a linear combination
\[
c_1 (e^1_j) + \cdots + c_k (e^k_j) = (0, 0, \ldots),
\]
adding these component-wise gets us the equation
\[
(c_1, c_2, \ldots, c_k, 0, 0, \ldots) = (0, 0, \ldots).
\]
Therefore $c_1 = \cdots = c_k = 0$, and $(e^1_j), \ldots, (e^k_j)$ are independent.
Therefore we have a contradiction, and $C$ is not finite dimensional. Perhaps a faster way to say this: $C$ contains arbitrarily large sets of independent vectors, so there can be no finite basis.
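Several of the matrix computations above can be double-checked numerically. A sketch assuming NumPy is available (not part of the original solutions), verifying the permuted $LU$ factorization from Problem 2(a) and the kernel vectors from Problem 11(d):

```python
import numpy as np

# Problem 2(a): check the permuted LU factorization P A = L U.
A = np.array([[0., 1., 2.],
              [0., 2., 3.],
              [1., -1., 0.]])
P = np.array([[0., 0., 1.],
              [0., 1., 0.],
              [1., 0., 0.]])
L = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0.5, 1.]])
U = np.array([[1., -1., 0.],
              [0., 2., 3.],
              [0., 0., 0.5]])
assert np.allclose(P @ A, L @ U)
assert np.linalg.matrix_rank(A) == 3        # full rank, so the kernel is trivial

# Problem 11(d): the two claimed kernel vectors are annihilated by the RREF of M,
# hence by M itself (row operations preserve the kernel).
R = np.array([[1., 0., 3., -1., 0.],
              [0., 1., -2., 1., 0.],
              [0., 0., 0., 0., 1.],
              [0., 0., 0., 0., 0.],
              [0., 0., 0., 0., 0.]])
for v in ([-3., 2., 1., 0., 0.], [1., -1., 0., 1., 0.]):
    assert np.allclose(R @ np.array(v), 0)
```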