When I was in undergrad, I dutifully did all my linear algebra homework, not really understanding why. I figured, “if they want me to find a vector or two for a given matrix that satisfies **M.v = λ v**, fine, I’ll do it.” It wasn’t until a year later, when I was taking quantum, theoretical mechanics, and optics, that I really understood what all that “eigen” stuff was about. I really regretted not paying better attention.

My department has decided to offer a “Math Methods for Physicists” class for the first time this year. It involves a lecture and a “lab.” I put “lab” in quotes because it’s really just extra time to learn some Mathematica syntax and to use it to understand some of the problems being explored in the lecture.

Two weeks ago they were learning about eigensystems in lecture, so I thought we’d play around a little with some graphics commands to get a better sense of how eigenvectors and eigenvalues work. First, I asked them to give me four random numbers. They dutifully gave those to me and I structured them as a 2×2 matrix. I then asked what that matrix would do to a 2×1 vector. They didn’t understand at first, so I explained that I just meant multiplying the 2×2 by the 2×1 to get another 2×1. But they still didn’t know what I was asking. That’s when I encouraged them to think of it graphically, and we started using the Graphics[Arrow[...]] command.

First, I had them make a list of vectors to represent the numbers on a clock face. They decided on list = Table[{Cos[theta], Sin[theta]}, {theta, 0, 2 Pi - Pi/6, Pi/6}] to produce that list. We then ListPlotted it (ListPlot[list]) to make sure we got a circle of dots. We also made the matrix m = {{rnd, rnd}, {rnd, rnd}}, where the rnd’s are the numbers they called out. Finally, we did ListPlot[m.# & /@ list], which plots the same 12 points after each has been transformed by the matrix. It yielded what looks like an ellipse. Here’s an example with m = {{1, 2}, {3, 4}}:
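Since not everyone has Mathematica handy, here’s the same construction sketched in Python with NumPy (the matrix {{1, 2}, {3, 4}} is the example from above; everything else just mirrors the Mathematica one-liners):

```python
import numpy as np

# Twelve unit vectors at the clock-face angles, mirroring the Mathematica
# Table[{Cos[theta], Sin[theta]}, {theta, 0, 2 Pi - Pi/6, Pi/6}]
thetas = np.arange(12) * np.pi / 6
clock = np.column_stack([np.cos(thetas), np.sin(thetas)])  # shape (12, 2)

# The example matrix from the post
m = np.array([[1, 2], [3, 4]])

# Transform every clock vector, like m.# & /@ list in Mathematica:
# each row of `transformed` is m @ v for the corresponding row v of `clock`
transformed = clock @ m.T

print(transformed[0])  # m @ (1, 0), i.e. the first column of m
```

Scatter-plotting the rows of `transformed` gives the same ellipse of 12 dots that ListPlot shows.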

So now they were getting the hang of the fact that the matrix simply transforms points on the plane.

Next I asked if there were any dots that represented the eigenvectors of the matrix. They didn’t really understand what I meant, and a couple seemed to want to grab a sheet of paper and solve for the eigenvectors by hand. I stopped them and asked what the eigenvector relationship meant graphically. A couple volunteered that it meant a vector that simply stretched or shrank, but didn’t change its angle with respect to the x-axis. So I said: go find some!

To do that, they used a Manipulate command. Here’s a video of me doing the same thing:

When someone would find a vector that worked, I’d ask them what they thought the associated eigenvalue was. It took a while, but they realized that it was a measure of the stretch or compression of the original vector. They also realized what it looked like when the eigenvalue was negative.
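The stretch-or-shrink reading of the eigenvalue can be checked numerically too. Here’s a quick NumPy sketch (same example matrix, not part of the original lab): the “stretch” is just the length ratio of m @ v to v, and a negative eigenvalue shows up as an arrow that flips direction.

```python
import numpy as np

m = np.array([[1.0, 2.0], [3.0, 4.0]])  # example matrix from the post

vals, vecs = np.linalg.eig(m)  # columns of vecs are unit eigenvectors

for lam, v in zip(vals, vecs.T):
    w = m @ v
    stretch = np.linalg.norm(w) / np.linalg.norm(v)  # equals |lambda|
    flipped = np.dot(w, v) < 0                       # True when lambda < 0
    print(f"lambda = {lam:.4f}, stretch = {stretch:.4f}, flipped = {flipped}")
    assert np.allclose(w, lam * v)                   # the defining relation
```

For this matrix the eigenvalues are (5 ± √33)/2, so one vector stretches by about 5.37 and the other shrinks and flips.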

One funny thing that happened was that I assumed the eigenvectors would be perpendicular. It turns out they’re not in the general case, but I told them that most matrices they’d deal with in physics tend to have perpendicular eigenvectors.

I really liked how this exercise gave them some new approaches to the whole concept of eigensystems. We talked about how they weren’t just hunting around randomly, but instead were able to see which way to make adjustments to get to the eigenvectors. I also felt that this approach made them realize that there can be a geometrical/physical meaning for all of this.

What do you think? Here are some starters for you:

- I was in this lab and was totally lost. I still am, because you are a really bad teacher. Can I still drop?
- I was in this lab and felt that the graphical approach helped me learn. Here’s why . . .
- I use this technique but find that it fails in these ways . . .
- I like this approach. Could it work for this situation . . .?
- Did you have them compare their answers with the “Eigensystem” command in Mathematica?
- Why did you think they should be perpendicular?
- Why aren’t they perpendicular?
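On those last few starters: here’s a hedged sketch using NumPy’s np.linalg.eig (roughly the analogue of Mathematica’s Eigensystem command) to check perpendicularity directly. The symmetric matrix {{1, 2}, {2, 4}} is made up just for comparison:

```python
import numpy as np

m_general = np.array([[1.0, 2.0], [3.0, 4.0]])    # the example from the post
m_symmetric = np.array([[1.0, 2.0], [2.0, 4.0]])  # made-up symmetric comparison

dots = []
for m in (m_general, m_symmetric):
    _, vecs = np.linalg.eig(m)  # columns are unit eigenvectors
    dots.append(float(vecs[:, 0] @ vecs[:, 1]))  # dot product is 0 iff perpendicular

print(dots)  # clearly nonzero for the general matrix, ~0 for the symmetric one
```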

This is a great example of learning by exploring! Even you were surprised. I haven’t thought deeply about it, but I thought eigenvectors are linearly independent and therefore perpendicular. Now I see they are skewed, but any vector still has a unique decomposition into eigenvectors in this skewed coordinate system. So when are eigenvectors perpendicular?

They’re perpendicular when the matrix is symmetric (so the off-diagonals are equal in the 2×2 case). I tried looking at this in reverse: can you pick 2 vectors (along with 2 eigenvalues) and craft a matrix? Yep, you sure can! You can choose any 2 (non-collinear) vectors and it works just fine.
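That reverse construction can be written down directly: put the chosen eigenvectors in the columns of P, the chosen eigenvalues on the diagonal of D, and form M = P D P⁻¹ (the inverse exists as long as the vectors are non-collinear). A NumPy sketch with made-up vectors and values:

```python
import numpy as np

# Pick any two non-collinear eigenvectors (columns of P) and two eigenvalues.
# These particular choices are just an illustration.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # eigenvectors (1,0) and (1,1): deliberately skewed
D = np.diag([2.0, -3.0])     # chosen eigenvalues

M = P @ D @ np.linalg.inv(P)  # M has exactly those eigenvectors and eigenvalues

# Check the defining relation M @ v = lambda * v for each chosen pair
for lam, v in zip([2.0, -3.0], P.T):
    assert np.allclose(M @ v, lam * v)
print(M)  # the crafted (non-symmetric) matrix
```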

Interestingly, after your comment, I did a quick google search to find a link to help you and me out. The funny thing is that a lot of the top links have quotes like “eigenvectors will *always* be perpendicular to each other.” I guess they’re used to working with symmetric matrices too. Thanks for the comment!

I am wondering whether it is better to do this sort of activity before students encounter eigenvectors, or after (as you did; I know that you didn’t have a choice). My gut feeling is “before” makes more sense, since it means less wasted time trying to figure out what the point is when you are doing all of this eigen-stuff with matrices.

On the other hand, it seems really nice that you can ask your students, “Which of these are eigenvectors?” When they answer, it is nice you can follow it with, “What is the associated eigenvalue?”

I suppose it is probably worth doing this activity twice: once to introduce the idea, and once after students have been working with the eigenstuff for a while.

This was a nice activity.

Pingback: Span | NerdlyPainter

Pingback: Synchronous classical mechanics brainstorming | SuperFly Physics