Here are the solutions to the practice problems I gave out in class today: Final Practice Problems Solutions. Here are the problems themselves: Final Practice Problems. The solutions to all of the old homework have been posted as well.

I should be in my office Tuesday from 1-2. I have a final after that so I probably won’t be available much after 2. I plan to be in the Neuberger Hall third floor atrium from 9-10 on Wednesday, which is right before our final. I’ll probably be in my office before then as well for some time.

Remember that you get one page of notes, front and back, as well as a calculator. You won’t need the linear transformations handout. There won’t be any questions about set theory or material from chapters 1 or 2. The test will primarily be on material from chapters 4 and 5, with some questions potentially involving matrix algebra, inverses, or linear independence, which showed up in chapter 3.

Wednesday was the last day of lecture. We looked at another example of the Gram-Schmidt Process and saw how it can be used in the QR factorization of a matrix with linearly independent columns. I briefly touched on some of the topics from section 5.4, including the Spectral Theorem (which maybe sounds cooler than it actually is). However, there wasn’t enough time to go into any depth about the topics or procedures from that section, so there won’t be anything from it on the final. I also handed out the review guide for chapters 4 and 5.
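
In case it helps to see the factorization written out, here’s a small example of my own (not the one from class): take a matrix with columns a1 = (1, 1) and a2 = (1, 0). Gram-Schmidt (plus normalizing) turns those columns into q1 = (1/√2)(1, 1) and q2 = (1/√2)(1, -1), and then
\[
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
= QR
= \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
\begin{bmatrix} \sqrt{2} & 1/\sqrt{2} \\ 0 & 1/\sqrt{2} \end{bmatrix},
\]
where each entry of R is just a dot product of a column of Q with a column of the original matrix. Multiplying Q and R back together recovers the original matrix, which is a nice way to check your work.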

On Friday I will spend the first half of class answering any of your questions. I will then hand out some practice problems that cover the important material from the last two chapters. I will post the solutions to the problems later. I highly encourage you to come to class on Friday; it will be very good preparation for the final.

I have really enjoyed teaching this class this term. I hope you’ve gotten a lot out of it.

On Monday we finished up section 5.2, where we learned about orthogonal projection and orthogonal decomposition. In the latter, we saw that given a vector and a subspace, we can always write the vector as the sum of two orthogonal vectors, one of which is in the subspace. This representation turns out to be unique.
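
Here’s a tiny example of that decomposition (my own, not one from class), just to fix the idea: take the subspace W = span{(1, 1)} in R^2 and the vector v = (3, 1). With u = (1, 1),
\[
\mathrm{proj}_W \mathbf{v} = \frac{\mathbf{v}\cdot\mathbf{u}}{\mathbf{u}\cdot\mathbf{u}}\,\mathbf{u} = \frac{4}{2}(1,1) = (2,2),
\qquad
\mathbf{v} - \mathrm{proj}_W \mathbf{v} = (1,-1).
\]
Since (1, -1) · (1, 1) = 0, we’ve written v = (2, 2) + (1, -1) as a vector in W plus a vector orthogonal to W, and that’s the only way to do it.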

We then started to look at the Gram-Schmidt process, which allows us to build up an orthogonal (or orthonormal) basis for a subspace given any basis for it. We’ll look at some more examples of this on Wednesday, as well as talk about QR factorization of matrices. I will try to cover the main topics from section 5.4 as well. I will also hand out a review sheet that will list all of the important topics that have been covered since the last exam.
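
If you’d like to warm up before Wednesday, here’s a small example of my own (not necessarily one we’ll do in class): start with the basis x1 = (1, 1, 0), x2 = (1, 0, 1) for a plane in R^3. Gram-Schmidt keeps v1 = x1 and then subtracts off the projection of x2 onto v1:
\[
\mathbf{v}_2 = \mathbf{x}_2 - \frac{\mathbf{x}_2\cdot\mathbf{v}_1}{\mathbf{v}_1\cdot\mathbf{v}_1}\,\mathbf{v}_1
= (1,0,1) - \tfrac{1}{2}(1,1,0)
= \left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right).
\]
Now {v1, v2} is an orthogonal basis for the same plane (check that v1 · v2 = 0), and dividing each vector by its length turns it into an orthonormal basis.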

The final exam is on Wednesday the 12th, from 10:15 to 12:05. It technically is a cumulative exam, but there won’t be any set theory questions and it will focus mainly on the material that has been covered since the last exam. As before, you get one page of notes, front and back, as well as a calculator.

On Friday we started section 5.2 on orthogonal complements. We expanded our definition of orthogonality of sets to orthogonality of subspaces (which are just sets with a few special qualities). We said that two subspaces are orthogonal if every vector in one is orthogonal to every vector in the other. We saw how to find the orthogonal complement of subspaces that are given by equations, such as lines and planes through the origin, and also ones that are represented with basis vectors. At the end of the class we discussed the idea of orthogonal projection, which is where we will pick up on Monday. We will also be starting section 5.3, where we learn about the Gram-Schmidt process of creating an orthonormal basis for any subspace.
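
As a small example of both situations (my own, not one from class): if W is the plane through the origin in R^3 given by x + 2y - z = 0, then reading the coefficients off the equation gives
\[
W^{\perp} = \mathrm{span}\{(1, 2, -1)\},
\]
and going the other direction, if W is the line spanned by (1, 2, -1), then W^⊥ is exactly that plane x + 2y - z = 0.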

Today we started chapter 5, where we began by revisiting the idea of orthogonality. Earlier in the term we defined orthogonality in terms of two vectors – they are orthogonal if they meet at a right angle. Computationally, this means that their dot product is zero. We extended the definition to come up with the idea of an orthogonal set – a set of vectors where each pair of distinct vectors is orthogonal. We can then talk about an orthogonal basis, which is just a basis for some subspace that is also an orthogonal set.

From there we went into the idea of an orthonormal set, which is an orthogonal set of unit vectors. In other words, all of the vectors meet at right angles with each other and they all have length one. We can then similarly define the idea of an orthonormal basis – a basis for a subspace that is also an orthonormal set. We looked at some examples of finding orthogonal and orthonormal bases for subspaces. In later sections we’ll discover additional ways to come up with these sets.
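
For a concrete example (my own, not one from class): the set {(1, 1, 0), (1, -1, 0), (0, 0, 1)} is an orthogonal set in R^3, since each pair of distinct vectors has dot product zero, and it happens to be a basis for R^3, so it’s an orthogonal basis. Dividing each vector by its length gives the orthonormal basis
\[
\left\{ \tfrac{1}{\sqrt{2}}(1,1,0),\ \tfrac{1}{\sqrt{2}}(1,-1,0),\ (0,0,1) \right\}.
\]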

We then defined an orthogonal matrix (a square matrix whose column vectors form an orthonormal set) and covered some theorems regarding them. We finished with some properties of orthogonal matrices. There are a few properties left to cover, which I’ll do at the beginning of Friday’s class.
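
A standard example (not one of the ones from class) is the rotation matrix
\[
Q = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},
\]
which is orthogonal for any angle θ: its columns are unit vectors whose dot product is zero, and you can check that Q^T Q = I, which means Q^{-1} = Q^T.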

As promised, here are the hints for problem number 28 from section 5.1: For part a, use theorems 5.5 and 5.8.b. For part b, think about the unit circle (in particular, points on it). For part c, use theorem 5.6. For part d, you just have to use inspection and give an informal but convincing argument.

We’ve seen in the last few sections that triangular and diagonal matrices are nice to work with because their eigenvalues are just the numbers on their main diagonal. Not every matrix is of that nice form, though. It would be nice if we could turn any matrix into a diagonal matrix while still preserving its eigenvalues. It turns out that we can’t do that in general, but we saw on Friday that there are cases when we can come up with a diagonal matrix based on our starting matrix.

We first defined the idea of similarity of matrices. We then went on to diagonalization – what it is, when we can do it, and how to do it when we can. It’s a pretty simple process that builds off of the work we’ve been doing lately in finding eigenvectors and eigenvalues.
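
Here’s the whole process on a small example of my own (not the one from class): take the matrix A with rows (1, 2) and (2, 1). The characteristic polynomial is (1 - λ)^2 - 4 = (λ - 3)(λ + 1), so the eigenvalues are 3 and -1, with eigenvectors (1, 1) and (1, -1). Putting the eigenvectors into the columns of P and the eigenvalues into D in the same order gives
\[
A = PDP^{-1}
= \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
\begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1},
\]
so A is similar to the diagonal matrix D, which carries the same eigenvalues.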

This is the last material from Chapter 4 we’ll be covering. In the remainder of the term we will be working in Chapter 5.

Today we readdressed the question of how to find eigenvalues and eigenvectors for a given square matrix A. This time A could be 3×3 or larger, which is why we needed to learn how to find the determinant of larger matrices. The process is the same as before, but it’s a bit more lengthy because finding the determinants usually takes longer. As evidenced today in class, it’s important that you perform these calculations carefully because it’s very easy to make small mistakes. However, as with everything in our class so far, it’s extremely easy to check our work. Once we find a candidate eigenvector/eigenvalue pair, we can just calculate Ax and \lambda x and see if they’re equal.
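
For instance (a made-up check, not a problem from class), suppose your work suggests that λ = 1 with x = (1, -1, 0) is an eigenvalue/eigenvector pair for the matrix A with rows (2, 1, 0), (1, 2, 0), (0, 0, 3). Then
\[
A\mathbf{x} = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} = 1\cdot\mathbf{x},
\]
so the pair checks out. If Ax had come out as anything other than λx, you’d know to go hunting for an arithmetic mistake.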

By the way, while swapping two columns does flip the sign of the determinant, it definitely doesn’t just flip the signs of the eigenvalues. To see this, try the matrix [1 0],[0 6], where those are read as row vectors.
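
In case you want to check your answer after trying it: that matrix is diagonal, so its eigenvalues are 1 and 6 and its determinant is 6. Swapping the columns gives
\[
\begin{bmatrix} 0 & 1 \\ 6 & 0 \end{bmatrix},
\]
whose determinant is -6 (the sign did flip), but whose characteristic polynomial is λ^2 - 6, giving eigenvalues ±√6. So the eigenvalues change in a much less predictable way.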

Today we saw a geometric interpretation of the determinant. We saw that if you take the column vectors from your linear transformation matrix and build a parallelogram/parallelepiped/higher-dimension-equivalent from them and then find its area/volume/higher-dimension-equivalent, that value is the absolute value of the determinant of that matrix. This gives a very good reason why linear dependence among the columns gives rise to a zero determinant. If you draw the corresponding figure you’ll get a degenerate case of whatever shape you should get, which then has zero area/volume, etc. It also explains why matrices for linear transformations that don’t change the length of any vectors have determinant 1 or -1.
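
For a concrete example (my own, not one from class): the matrix
\[
A = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}
\]
has determinant 3·2 - 1·1 = 5, and the parallelogram built from its column vectors (3, 1) and (1, 2) has area exactly 5.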

We then learned (after a small stumble on my part) another method to find the determinant of 3×3 matrices by summing over products of carefully chosen diagonals of a larger array. We then learned many properties of the determinant, in particular the changes we can make to a matrix and how they change the corresponding determinant. I gave geometric interpretations of these properties as well.
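
Here’s the diagonal method written out on a small example of my own (not the one from class):
\[
\det\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}
= (1\cdot 5\cdot 10 + 2\cdot 6\cdot 7 + 3\cdot 4\cdot 8) - (3\cdot 5\cdot 7 + 1\cdot 6\cdot 8 + 2\cdot 4\cdot 10)
= 230 - 233 = -3.
\]
A cofactor expansion gives the same answer, which is a good way to double-check. Keep in mind that this diagonal trick only works for 3×3 matrices.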

Now that we know how to take the determinant of larger matrices we will be able to find eigenvectors and eigenvalues of larger matrices, which will be what Wednesday’s lecture is about.

On Friday we learned techniques for finding determinants of 3×3 and larger matrices. We defined the (i,j)-cofactor of a matrix and saw how to use cofactors to expand along any row or column of a square matrix to find the determinant.
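
As a small illustration (my own example, not one from class), here’s an expansion along the first row:
\[
\det\begin{bmatrix} 2 & 0 & 1 \\ 3 & 1 & 0 \\ 1 & 2 & 4 \end{bmatrix}
= 2\det\begin{bmatrix} 1 & 0 \\ 2 & 4 \end{bmatrix}
- 0\det\begin{bmatrix} 3 & 0 \\ 1 & 4 \end{bmatrix}
+ 1\det\begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}
= 2(4) - 0 + 1(5) = 13.
\]
Notice that the zero in the first row means one of the three 2×2 determinants never actually has to be computed, which is why it pays to expand along the row or column with the most zeros.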

On Monday we’ll develop some geometric intuition about what determinants actually mean. We’ll look at another way to calculate the determinant of a 3×3 matrix and look at some general properties of the determinant. We will then take these determinant skills and use them to find eigenvectors and eigenvalues of larger matrices later in the week.

In Wednesday’s class we started by investigating the idea of vectors that are parallel to their transformed versions under a linear transformation. In other words, we take a square matrix A (which represents a linear transformation) and a nonzero vector x, and see if the transformed vector Ax is parallel to (in other words, a scaled version of) x. We called vectors that satisfy this property eigenvectors and the corresponding scalars eigenvalues. We saw how to find the eigenvalue of a given eigenvector and how to find an eigenvector for a given eigenvalue. We ended the class looking into how to find the eigenvectors and eigenvalues for a given 2×2 matrix A. On Friday we will continue this process, as well as develop tools that will allow us to find these objects for square matrices that are larger than 2×2.
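
Here’s the kind of computation I mean, on a small example of my own (not the one from class): for the matrix A with rows (2, 1) and (1, 2), the eigenvalues come from det(A - λI) = (2 - λ)^2 - 1 = (λ - 1)(λ - 3), so λ = 1 or λ = 3. Solving (A - 3I)x = 0 gives the eigenvector x = (1, 1) for λ = 3, and indeed
\[
A\mathbf{x} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3\mathbf{x}.
\]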
