Yesterday I went on twitter to try to get some help on teaching Fourier analysis for my sound and music class:
I got some great advice, and it led to some great conversations with some math professors at my school as well. I taught the class today and wanted to capture some thoughts on how it went.
Goal(s)
In class we use Audacity a lot to determine the frequency spectrum of various sounds. We’ve used it to measure the Doppler effect, and we’ve used it to verify whether an instrument can be modeled as an open/closed versus open/open pipe. Today I wanted to get the students to a point where that “Plot Spectrum” option in Audacity isn’t considered magic. Taking a hint from how Richard Feynman wrote “QED,” I wanted them to see that it’s possible for a computer to determine the frequency content of a sound, without necessarily showing them the slickest way to do the math. In “QED” Feynman talks about the horrendous calculations necessary to determine the mass of the “naked” electron, but he doesn’t try to make them less horrendous; he simply tries to make every step possible to follow. He figured that if the reader sees it’s possible, he’s been successful, even if no one would ever really do the calculation that way. I feel the same way about Fourier analysis.
Analogies
On twitter and in my math buds’ offices, we talked about a potential analogy that goes something like this: a sound can be broken into various amounts of a set of frequencies in the same way a point on the plane can be broken into components along each axis. If you take a point, you can trace down to either axis and get the coordinates. Similarly, with a sound, you can “do something” to get the frequency content. This analogy has the notion of completeness (two axes (even if they’re not perpendicular) span the space of a plane) and orthogonality (there’s only one way to break down the coordinates for a point given a set of axes). It’s simple, drawable, and, I presume, understandable. However, it’s hard to get students to then cross the bridge to Fourier stuff, where the number of axes increases like crazy (to infinity for Fourier transforms) and “tracing down to the axis” is hard to explain. On twitter I got excited about this approach, but my local buds pooh-poohed it to the point where I decided not to pursue it in class.
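For anyone curious where the analogy leads mathematically (we didn’t go here in class), here’s the version I’d scribble on a whiteboard: the coordinates of a point are dot products with the axes, and the Fourier coefficient plays the same role for a sound, with the integral acting as the “dot product.”

```latex
% Coordinates of a point are dot products with the (unit) axes:
\vec{r} = (\vec{r}\cdot\hat{x})\,\hat{x} + (\vec{r}\cdot\hat{y})\,\hat{y}

% The analogous "trace down to the axis" for a sound f(t) with period T:
% the amount of the n-th harmonic is an inner product with that harmonic.
b_n = \frac{2}{T}\int_0^T f(t)\,\sin\!\left(\frac{2\pi n t}{T}\right) dt
```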
Without a computer
In class I told them about the prep work I’d done for class, and asked if they thought what Audacity does was magic. There were mostly blank stares at that, but certainly no one jumping up and down saying “no, of course not, a human programmed it after all!” To get us going down the path, I asked how you could determine the frequency content of a sound without a computer. My first example was how could you tell if a vocalization (“AHHHHHH”) contained a particular pure tone (me whistling). No one spoke up right away, but then someone had a cool idea (not a direct quote, but rather a dramatization):
What if we took a wine glass that was designed to ring at that whistle sound. Then, we could AHHHHH at it and see if it makes a sound. If it does, then the AHHHH has that frequency present. -Jake, a student in my awesome class
We now call that the Jake wineglass approach. I loved it, and I took some time to connect it to what we’ve seen in lab. First I showed them this movie that I made for them:
and then I talked about the boxes we have in lab for amplifying/resonating the preferred tone of tuning forks:
We made an analogy with the wine glass: glass <=> resonator box, AHHHHH <=> tuning fork.
We then talked about whether that was the same as the “Plot Spectrum” tool. I needed to provide a little help here, something I want to back off on next time I teach it. I pointed out that all that work would simply be one data point on the spectrum. I asked how we’d get the rest, and they came around to the notion that we’d need a lot of wine glasses (or possibly one with varying amounts of water in it – something we’ll play with in lab, hopefully tomorrow).
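If you want to see the “lots of wine glasses” idea in (slow) numerical action, here’s a rough sketch, assuming NumPy. Each “glass” is a lightly damped oscillator tuned to one frequency, and how hard it rings is one point on the spectrum. This is not something we did in class, and it certainly isn’t what Audacity does:

```python
import numpy as np

# Each "wine glass" is a lightly damped oscillator tuned to one frequency.
# Drive it with the recorded signal and see how hard it rings; one glass
# gives one point on the spectrum, so we need a whole shelf of them.

rate = 44100                              # samples per second
t = np.arange(0, 0.3, 1.0 / rate)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)   # a fake "AHHHH"

def glass_response(signal, f_glass, rate, Q=50.0):
    """Integrate x'' + (w0/Q) x' + w0^2 x = drive(t) and return the peak |x|."""
    w0 = 2 * np.pi * f_glass
    dt = 1.0 / rate
    x, v, peak = 0.0, 0.0, 0.0
    for drive in signal:
        a = drive - (w0 / Q) * v - w0 ** 2 * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

freqs = np.arange(100, 1000, 20)          # one glass per test frequency
spectrum = [glass_response(signal, f, rate) for f in freqs]
print(freqs[np.argmax(spectrum)])         # the loudest glass sits near 440 Hz
```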
Why did I like this? They came up with a cool way to do it, realizing it was possible, while also being awed by how much work it would be.
Wine glasses inside the computer?
So I asked whether they thought that’s how the computer did it. After some chuckling, they were convinced that there must be more to it, from the computer side at least. It was time to break out some simulations.
The first one I showed them is something that a colleague suggested. There are lots of these types of sims out there online, but I wrote my own for one particular feature that I wanted. It shows a random waveform and lets you (separately) display any one of five different frequencies on top of it while adjusting its amplitude. The goal is to find the amplitude that “best fits” that frequency to the waveform. Here’s a screenshot:
So, here’s the hard part. Most of them tended to agree when the curve “looked best” as I adjusted the amplitude. But I warned them that it’s hard to program a computer to watch until it “looks good.” So I put them in groups to try to determine a way that a computer could do it.
The first suggestion was to try to get the blue curve to spend as much time above the red curve as below. In other words, when you achieve that, you’ve got the optimal coefficient. I jumped to the computer and set the amplitude to zero. Then the student who suggested it saw the problem. A zero amplitude would always be the best way to optimize that. We call that the Kareem-good-but-flawed-idea (not really).
Then came the Mika-interesting-and-subtle-idea. She suggested that we should maximize the number of times the curves intersect, saying that at the optimum they’d be on top of each other and would constantly intersect. So we counted the intersections at A=0 and again at the “looks best” point. Sure enough, the number of intersections went up.
Ok, here’s my biggest mistake of the class. I was anxious to get to the notion I wanted to run with, so I kind of pushed it down their throats: What if you looked at the area between the curves and tried to minimize that? It’s related to the intersection idea, but clearly more subtle. Here’s a screenshot of that on my simulation:
Pretty, huh? Well, I got the nodding heads I was looking for, but I don’t feel that they owned the idea. We call this “minimizing the red.” We played around with it for the various frequencies and decided that it seemed to match their notion of “looks best.”
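To make “minimizing the red” concrete, here’s a rough sketch of the idea in Python (assuming NumPy; this is not the actual class simulation). It scans the amplitude, keeps the one with the least red area, and also counts intersections à la Mika:

```python
import numpy as np

# "Minimizing the red": for one test frequency, scan the amplitude A and keep
# the value that minimizes the area between the waveform and A*sin(2*pi*f*t).

t = np.linspace(0, 1, 2000, endpoint=False)
waveform = 1.3 * np.sin(2 * np.pi * 3 * t) + 0.7 * np.sin(2 * np.pi * 5 * t)   # made-up wave

f_test = 3                                        # the frequency we're fitting
sine = np.sin(2 * np.pi * f_test * t)
amps = np.linspace(0, 2, 401)
red_area = [np.mean(np.abs(waveform - A * sine)) for A in amps]   # area between the curves

best_A = amps[np.argmin(red_area)]
fourier_A = 2 * np.mean(waveform * sine)          # the textbook Fourier coefficient
print(best_A, fourier_A)                          # both come out near 1.3

def crossings(A):
    """Count how many times the two curves intersect (sign changes of the gap)."""
    return np.count_nonzero(np.diff(np.sign(waveform - A * sine)))

print(crossings(0.0), crossings(best_A))          # more intersections at the better fit
```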
SIDEBAR: Back in my office, I sat down to show mathematically when this approach is the same as the accepted way to calculate the coefficient. It turns out to be pretty subtle: you have to minimize a functionalized integral. But, it looks like it works, just as my gut (and my math buds’ guts) thought. Note, I’m sure they would say that “of course that works, you just have to use the blah-blah-blah theorem and you’re done!”
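For the curious, here’s the tidy version of that office calculation, using the squared difference instead of the absolute “red” area (the squared version falls out in two lines; the absolute-area version is where the subtlety lives):

```latex
% Minimize the squared mismatch between the waveform f(t) and A*sin(2*pi*n*t/T):
E(A) = \int_0^T \left[ f(t) - A\sin\!\left(\tfrac{2\pi n t}{T}\right) \right]^2 dt

% Set dE/dA = 0:
\frac{dE}{dA} = -2\int_0^T \left[ f(t) - A\sin\!\left(\tfrac{2\pi n t}{T}\right) \right]\sin\!\left(\tfrac{2\pi n t}{T}\right) dt = 0

% Solve for A, using that the integral of sin^2 over a period is T/2:
A = \frac{2}{T}\int_0^T f(t)\,\sin\!\left(\tfrac{2\pi n t}{T}\right) dt
```

That last line is exactly the usual Fourier coefficient for that harmonic.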
So we talked about how the computer seems to be able to do this for thousands of different frequencies in less than one second. They seemed impressed. Note that this isn’t literally what the computer does (it uses the Fast Fourier Transform instead), but my students “got” that it could be done. Success(?)!
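For anyone wondering what the fast version looks like, here’s a minimal sketch using NumPy’s FFT. Treat it as the bare-bones idea; a real “Plot Spectrum” style tool also does things like windowing and averaging:

```python
import numpy as np

# The fast version: a Fast Fourier Transform handles thousands of frequencies at once.

rate = 44100
t = np.arange(0, 1.0, 1.0 / rate)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2   # amplitude at each frequency
freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)

print(freqs[np.argmax(spectrum)])    # ~440 Hz, found essentially instantly
```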
Match-the-wave game
So then we were off to using this great PhET simulation. Go play with it, it’s fun! I tried to show them the connections both to my simulation and to what Audacity calculates. We listened to combinations of harmonics, commenting on the horrible-sounding seventh harmonic (we’ve talked about that before) and listening to perfect octaves, fifths, and thirds.
Then we played the “match-the-wave” game, where the simulation provides a complex waveform and gives you several frequencies to adjust to match it. The first question I asked was which frequency to try first. There was one well-articulated argument for the highest frequency (dramatization):
I think we should do the shortest wavelength first. We can match the smallest features of the waveform with that, and then use the longer wavelengths to move some of the peaks up and some down. -another great student in my class
So we did that. After getting that highest frequency to match, I asked whether we should keep building or reset that one to zero and start fresh with a different frequency. It’s clear (I hope) from the “quote” above that the suggestion was to keep building up, but I let the groups argue about this for a while. The consensus was to build up. But then I asked what they thought a computer did. The consensus on that was that it likely just did each frequency separately (Awesome!).
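Here’s a quick numerical check of that “each frequency separately” hunch (again assuming NumPy, and not what the PhET simulation actually does): because the harmonics are orthogonal, projecting the target wave onto each harmonic one at a time recovers every amplitude without caring about the others.

```python
import numpy as np

# The students' hunch: the computer can handle each frequency separately.
# That works because the harmonics are orthogonal: projecting the target wave
# onto one harmonic doesn't care what the other harmonics are doing.

t = np.linspace(0, 1, 4000, endpoint=False)
true_amps = {1: 1.0, 3: 0.5, 5: 0.25, 7: 0.6}                    # a made-up target wave
target = sum(a * np.sin(2 * np.pi * n * t) for n, a in true_amps.items())

for n in true_amps:                                              # one harmonic at a time
    fitted = 2 * np.mean(target * np.sin(2 * np.pi * n * t))     # "trace down to that axis"
    print(n, round(fitted, 3), true_amps[n])                     # recovers each amplitude
```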
So we matched it as best we could by building up, and basically came to a close with the discussion.
What I liked
I liked the ownership of some of the concepts that we discussed today. The wine glass approach is something that I’ll continue to use, hoping to drive that home in lab tomorrow. I liked how they engaged with the simulations (even though I was the only one “driving”). I really liked the Mika-whatever-I-called-it-above-thing-about-intersections. That was a really cool observation that led to some cool discussion.
What I didn’t like
I lectured too much. I need them to own all of this, and I just have to shut up sometimes. I’ll keep working on that. I also didn’t like that we didn’t really hit completeness (any waveform is possible) or orthogonality (there’s a unique combination that matches the waveform). I’m not sure how to do that better, but I know that with my physics majors we spend quite a bit of time with that.
Comment starters
(I really like putting these in my posts, and, once again, I want to thank my friend Rhett Allain for the idea)
- I’m in this class and I think this is a dead on description of what happened. My favorite part was . . .
- I’m in this class and this is a horrible bastardization of what happened in class. What really happened was . . .
- If you don’t do completeness and orthogonality you have no right calling this Fourier analysis. You might as well call it . . .
- I like your simulation, can I use it to . . . ?
- I can tell you wrote that simulation in Mathematica. You should know that it’s fantastically expensive and you should never use it ever. (That’s for a couple of readers who know who they are 😉)
- I like what you’ve done, but here’s something I would have done in the (beginning|middle|end) . . .
Interesting that I just hit this same topic in class on Monday as well…
I take a decidedly less mathematically intense approach than you do. You seem to have made the decision that one of the goals (standards?) for the class is that they understand (on an appropriate level) HOW a Fourier transform works. I have made the decision that for my class (which is made up of non-science students) they don’t need to know how it works, but what it tells us. The main concept, as I see it, is that I am switching from talking about ONLY pure tones to complex tones, and that the majority of sounds we hear in our lives are complex, meaning they contain more than one frequency component. We want to have an understanding of what it can tell us so we can use this tool for the rest of the semester. In my mind, the HOW it works takes away from what it tells us for this particular audience.
I guess I accept that for my students the transform remains a black box. I do have them work with the PhET simulation, but it is imperfect since you really can’t adjust phases.
Also, I don’t think the example of the tuning fork w/ pop can is wrong, but it seems to me to be potentially confusing or misleading to students, since you’re really demonstrating resonance but trying to connect it to spectral analysis. Of course, if you’re showing historical approaches to sound analysis (Koenig style resonators, e.g.) then it probably connects really well with students.
Always interesting to see your approach. Thanks for sharing!
I definitely want them to understand that it works and that it’s possible to do, but I don’t know if I’d go all the way to “how.” However, I certainly agree with you that it’s important that they know what to do with that information, once Audacity gives it to them.
I sort of understood what we learned in class, but I still don’t really get what you wanted us to understand from class. That Audacity matches up frequencies by analyzing the number of intersections on the graph? Or that Audacity measures the area in between the different intersections to determine a frequency? And is it a formula that is doing that?
Thanks for the comment, Laura. Unfortunately, Audacity uses neither the intersections approach nor the red-area approach. However, the results it gets are identical to what you’d get with either of those. There is a formula, but I don’t think I’d hold you guys accountable for it.
Hi Andy. I really enjoyed reading about your approach here and how much you were able to give them ownership of the ideas. I hope that you are able to teach this course again in the near future so that there can be a revise and reflect post on how the sequence works out the second time.