Virtual Physics Conference

I’m part of a grant team right now brainstorming a new project, and a part of it is potentially hosting a conference. We kicked around some ideas about it, and as usual in situations like this, we casually talked about what a virtual conference might look like. That got my brain going so I thought I’d get some thoughts down here.

My goal: A virtual conference for physics teachers to be held, potentially in the summer of 2020.

Whenever I’m a part of conversations like these, the typical pros and cons list looks like this:

  • Pros
    • Cheap (I almost stopped this list here)
    • Flexible
    • Comfortable
    • Wider reaching
  • Cons
    • Not as immersive
    • Missing “hallway conversations”
    • Fewer connections
    • Less commitment from participants

I’ve been thinking about all of those, and I think I’ve got at least the beginning of a plan that addresses all of them. Certainly the pros will still be there, but hopefully it’ll be an experiment worth doing if we can address the cons at least to some degree.

Technology

I’ve used a ton of different technology for doing meetings like these. Back in the glory days of the Global Physics Department we used both Elluminate Live and later Blackboard Collaborate (really the same software, just bought out by Blackboard). Since then I’ve used WebEx, Google Hangouts, and Zoom a ton and I’ve occasionally used others as well. For this experiment, I would mostly want a reliable technology, and the one that I’ve had the most luck with there is Zoom. But below I’ll lay out what I think the needs would be.

Participants at a minimum would need a computer/phone with decent internet speed and speakers. A microphone would be great and a camera would be too, but I think I’d be open to where we’d draw the “necessary” bar.

Speakers would need audio and video and screen sharing capability. It’s possible we could ramp up to something like dual monitors, but I’m definitely open to suggestions.

Rough outline

My vision is something like this:

  • Parallel sessions
  • ~5 speakers per session
  • 4 session blocks in a day
  • A single day

Immersion/Commitment

This is the toughest nut to crack, I think. The longest online conference I’ve been in was 8 hours long, and even that was hard to stay focused through. So what would it take to get people to stick?

Taking the outline elements from above: Parallel sessions allows people some choice. Certainly at in-person conferences people really appreciate that, especially when a session doesn’t have what you’d thought it was going to have. ~5 speakers per session makes it seem like you could potentially hold all that info in your head at a time and really have a great conversation going. Four session blocks in a day just seems reasonable and one day is a great start for this experiment, at least I think that’s true.

Addressing issues like “my favorite part of conferences are the impromptu conversations that happen between sessions” is something I’ve been thinking about a lot. I think it would be great if we had technology that allowed the following:

  • Every session has a Zoom room (I’ll just use zoom vocabulary here to simplify) with a main speaker at any given time but a running commentary that people can participate in.
  • Questions will be submitted and voted on during each talk so that speakers can answer them in a crowd-prioritized way.
  • Discussion will use software like my “my-turn-now” software that allows for equitable discussions.
  • [This one I don’t know about existing solutions] This one is what I’ve been thinking would help the most with some of the cons above. I call it “hallway conversations.” I want any group of two or more people to be able to spontaneously spawn a new Zoom room. They would get video conferencing, a chat board, and a whiteboard. They could welcome anyone else in who “knocks” and they could choose to be either public (“Andy and Super-Cool-Person’s room”) or private.
  • Drop in rooms for common topics
  • You’d get a personal recap record of every room you were in along with whatever contact info people in that room were willing to share. You’d also get a chat transcript and any whiteboards.

Imagine sitting in your pajamas with a beer and seeing that people you are excited to meet are in a public room. You knock and they let you in! You then can meet them and either hang at the periphery to just listen or jump right in. Kind of sounds like an in-person conference, doesn’t it? The originators could leave and the room would still exist until there’s not at least two people in it. The personal recap record would really help you maintain any contacts you’ve developed.

My other big idea is meals, specifically lunch. I envision partnering with something like Door Dash to get everyone a meal at the same time. They’d pick their meal at registration (possibly even same day, I suppose) and then it would be delivered to everyone at the same time (yes, I know, there’d be some time zone problems but I think it might be cool enough to convince west coast people to eat at 10). There’d be Zoom rooms for every type of food. You’d be in a video conference with anyone else eating “at the same restaurant” and you could hopefully be involved in some fun conversations (and of course you could still launch a “hallway conversation” if you wanted to).

Cost

This couldn’t be free, as the Zoom cost won’t be zero. But it would surely be cheaper than gas/plane + hotel that a normal conference would have. If we had 5 parallel sessions and 5 speakers in each session and 4 session blocks that’s 100 people. If we charged $100 per person that would be $10,000 which might be enough for the Zoom ideas above. I plan to research this a lot more.

Flipped?

A collaborator of mine shared this white paper from the University of California system that talks about an approach to virtual conferences that sounds a lot like a flipped conference. Speakers record their talks ahead of time and each talk has a discussion board associated with it. I think that’s a cool idea, but I’ve always been unable to get my cognitive energy focused like that ahead of a meeting. The plan above allows you to come in cold (with the exception of your own talk of course) and just let it flow over you dynamically. I’m curious what others think, though.

Your thoughts?

So that’s where I’m at with my brainstorming. Your thoughts? Here are some starters for you:

  • I love this idea, where can I sign up? I just had a couple of thoughts to make it better . . .
  • Um, ever heard of google? This exists and is called . . .
  • If I can’t shake someone’s hand I don’t think it’s a real relationship. How are you going to do that?
  • Love the “hallway conversations” but I think you’d also have to think about . . .
  • $100?! Way too _____. Instead you should . . .
  • I would love to facilitate a session. Can I shoot you some ideas? Who’s on the committee?
  • Could we do a poster session too? I have some ideas about how that could work
  • Door Dash exploits their delivery people. Instead think about partnering with . . .
  • Here’s an interesting way to mix your ideas with the flipped conference ideas . . .

Google Apps Script Physics Problem Database

I tweeted out the other day an opinion about using google apps script (GAS from now on) as a web framework:

That led to some follow up from my awesome tweeps, including a nudge to write this blog post, so here you go.

This post will be mostly about how to use GAS as a data-driven, responsive website, with the Physics Problem Database really just the example I put together to show things.

Why GAS?

A data-driven website needs to store and retrieve data. Most of my other projects tend to use mysql databases for that (and PHP (yes, stop laughing and look up Laravel) for the html/interfacing) but that approach can have a pretty big startup cost (mental energy and time, not necessarily money). I certainly know how to spin up a new Laravel site and set up a new mysql database, but I know that’s a huge barrier for folks who want to just build something small-ish.

I’ve been using GAS for a long time now to help automate certain tasks (and you’ll note at that first link that I’ve thought about GAS as a website driver before – the difference in this post is that I don’t bother with the sketchiest part of that post in this new work – namely using the query command in a new sheet all the time). The way it can interact with a spreadsheet is what’s really driving this post. Basically I’m exploring how you might use a spreadsheet instead of a database to really get something up and running.

Benefits?

  • You don’t need a server! Or even a coding environment. I did nearly all of this coding on a chromebook because all you need is a google account and they provide the IDE (integrated development environment), the storage of the “database,” and the hosting of the pages
  • The “database” is a very user friendly environment. What sql would call tables, I just call different sheets. It’s very easy to see, edit, and delete the data in the “database”.
  • Both the server-side and client-side code is javascript. I’m not necessarily praising the language here, though it is fun to code in, but rather mostly praising the fact that you only have to know one thing (plus html, of course).
  • Authentication is basically built in. See below for more on that
  • AJAX (or the ability to update or query the “database” without reloading the whole page) is particularly easy

Drawbacks?

  • It’s not super fast. You’ll see how the physics problem database takes ~5 seconds to load.
  • The spreadsheet can only get so big. I believe the relevant quota is 5,000,000 cells. I would guess that you could do fine with 1,000 – 10,000 main records.
  • You have to build your own unique ids, whereas sql will normally just do that automatically. You have to do this rather than just finding the row things are on to protect against someone changing the order of the cells in the spreadsheet (deletions, adds, sorting, etc). I suppose if you make it so that you’re the only one who can access the spreadsheet and make a promise to yourself never to change the record order, then you could skip this. This is especially important if you do some one-to-many or many-to-many relationships among the sheets.

Now I’ll shift over to using the Physics Problem Database as context to explain how you can do this stuff.

Physics Problem Database

Years ago the Global Physics Department put a fair amount of effort into a physics problem database. We thought it would be fun to both build such a thing for teachers to use, especially those doing Standards-Based Grading (who often have to give students new and different problems to try) *and* to help our members learn how to code. While a ton of people were interested, the barriers of learning how to get a database-driven webpage running were tough. So I thought I’d use that idea as context to really push this GAS approach.

For those of you who don’t care about what I have to say below about what I learned in doing this, here’s the direct link to the GAS Physics Problem Database

Goals:

  • Display physics problems that people could use
  • Allow only authenticated users to be able to add problems
  • Develop a tagging system with a limited number of approved tags

The first thing I did was decide the data structure. After minimal thought, here’s what I came up with:

  • Problems
    • unique id
    • problem
    • user id
    • date
  • Tags
    • unique id
    • tag
    • user (didn’t end up using this)
    • date (didn’t end up using this)
  • Users
    • unique id
    • email
    • name
    • date
  • Problem_tag (this is the Laravel naming convention – it’s what some call a pivot table since this facilitates the many-to-many relationship between problems and tags)
    • unique id (not sure this is necessary)
    • tag id
    • problem id

Next I started by making the page that would just display all the problems. I wanted the display to show the problem and any tags that go with it. I think I meant to show who wrote the problem too, but I haven’t coded that yet (though it would be super easy to do).

Ok, so how to manage the data? What I decided to do was to just load all the data in all the sheets into a massive javascript object. I actually do this a lot with other GAS projects that I work with. It seems that several hundred rows of data works just fine, so I think this is at least somewhat scaleable (which google insists is spelled wrong, by the way). Here’s the code that does that:

function loadData() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheets = ss.getSheets();
  var whole = {};
  for (var i = 0; i < sheets.length; i++) {
    var data = sheets[i].getDataRange().getValues();
    // creates object with column headers as keys and column numbers as values:
    var c = grabheaders(data);
    var list = data[0]; // the header row
    var ob = {};
    for (var j = 1; j < data.length; j++) {
      ob[data[j][c["unique"]]] = {};
      for (var l = 0; l < list.length; l++) {
        ob[data[j][c["unique"]]][list[l]] = data[j][c[list[l]]];
      }
    }
    whole[sheets[i].getName()] = ob;
  }
  return whole;
}
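
In case you’re wondering about grabheaders, here’s a minimal sketch of what that helper can look like, assuming (as the comment above says) it just maps each column header to its column number:

function grabheaders(data) {
  // map each header in the first row to its column index,
  // e.g. {"unique": 0, "problem": 1, "user id": 2, "date": 3}
  var headers = {};
  for (var i = 0; i < data[0].length; i++) {
    headers[data[0][i]] = i;
  }
  return headers;
}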

That produces and returns an object called “whole.” It has a key for every tab in the spreadsheet. The value for each key is an object with keys set to the unique ids. The values of those are objects whose keys are the column headers in that tab. Say you wanted to find the problem associated with a particular problem_tag relationship. You’d get it with whole["problems"][whole["problem_tag"][unique-number-you-care-about]["problem id"]]. I know, it’s hard to read, but you can navigate all relationships this way.

How do you send that to be parsed in the html document? First note that all GAS projects can be made up of javascript documents and html documents. They’re all actually stored in the single script document. I use templated html where you can intersperse <? useful server-side javascript ?> into your html. So the table for the problems is done with this code (ugh, the html syntax highlighter is failing on all of the “>” characters, replacing them with &gt – sorry about that):

<table class="table table-striped">
     <thead>
      <tr>
       <th>Problem</th>
       <th>tags</th>
      </tr>
     </thead>
     <tbody>
      <? Object.keys(data["problems"]).forEach(function(key) { ?>

       <tr>
        <td><?= data["problems"][key]["problem"]?></td>
        <td><?!= findTags2(key,data) ?> </td>
       </tr>
      <? }) ?>
     </tbody>
    </table>

The “forEach” part is going through all of the problems in the problems object (also note that I’m passing “whole” as “data” – don’t ask why.) Then each one adds a row to the table, displaying the text of the problem with data["problems"][key]["problem"]. Then it runs a function (on the server, before the page is rendered) called findTags2 that accepts the key (unique id for the problem) and the full data object and then returns a list of hyperlinked tags that, when clicked, show a page with just the problems with that tag. That page does that filter by doing the “loadData” above and then deleting any elements that aren’t connected to that tag before sending data to a very similar html page. Note that to add in the creator of the problem I would just add something like <td><?= data["users"][data["problems"][key]["user id"]]["name"] ?></td>
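
Here’s a minimal sketch of what findTags2 can look like. The sheet and column names match the data structure above, but the link format (a query parameter on the web app url) is just illustrative:

function findTags2(key, data) {
  // collect the tags linked to this problem via the problem_tag pivot sheet
  var links = [];
  Object.keys(data["problem_tag"]).forEach(function(ptKey) {
    if (data["problem_tag"][ptKey]["problem id"] == key) {
      var tagId = data["problem_tag"][ptKey]["tag id"];
      var tagName = data["tags"][tagId]["tag"];
      // illustrative url; the real page routes to the tag-filtered view described above
      var url = ScriptApp.getService().getUrl() + "?tag=" + tagId;
      links.push('<a href="' + url + '">' + tagName + '</a>');
    }
  });
  return links.join(", ");
}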

The only other thing the page does right now is allow authenticated users to add problems. That page is given all the tags and grabs the user’s email (you have to be logged into a google account to use the page). There’s a simple text entry box for the problem and the user can then select any appropriate tags. When they hit submit there’s an AJAX call to update the spreadsheet. All that means is that the page doesn’t have to reload to do it. The data is sent to the server, it updates the spreadsheet (in several ways – see below) and then returns a note saying it was successful. That updates some text on the page. It all takes about a second. The spreadsheet updates are:

  • Put the problem into the “problems” tab. For that you can use the “append row” method in Google’s SpreadsheetApp. For the unique id I just use the number of milliseconds since January 1, 1970, making the assumption that I won’t run the script twice in the same millisecond.
  • Then the “problem_tag” tab is updated, with a new row for every tag that was checked by the user. This is where I use the unique id for the problem (the unique id for each tag is embedded in the form to allow them to be passed to the server correctly).
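
If it helps to see it in code, here’s a rough sketch of that server-side piece. The function and sheet names are mine and may not match the real thing, and the “AJAX call” is just GAS’s google.script.run, shown in the comment at the top:

// client side, in the add-problem page:
// google.script.run
//   .withSuccessHandler(function(msg) { document.getElementById("status").textContent = msg; })
//   .addProblem(problemText, checkedTagIds);

function addProblem(problemText, tagIds) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var problemId = new Date().getTime(); // milliseconds since January 1, 1970 as the unique id
  var email = Session.getActiveUser().getEmail();
  // columns: unique id, problem, user id, date -- here I'm dropping the email in
  // where a lookup against the users tab would go, just to keep the sketch short
  ss.getSheetByName("problems").appendRow([problemId, problemText, email, new Date()]);
  // one pivot row per checked tag
  var pivot = ss.getSheetByName("problem_tag");
  tagIds.forEach(function(tagId, i) {
    // offset by i so rows appended in the same millisecond don't share an id
    pivot.appendRow([new Date().getTime() + i, tagId, problemId]);
  });
  return "Problem saved!";
}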

The authentication is super easy if you’re doing this in a google-school domain. Basically you set the script to run as you (the developer) and use the users tab to check to see if the user email (that google provides for any user visiting the page) is in your approved list. That way you’re letting google do all the authentication (they have to be in your domain) and you can only allow those who are in your users tab to be able to even access the page.

Unfortunately the authentication is a little harder for normal consumer google accounts, but still doable. The command that returns the visitor’s email only works if you allow the scripts to be run as the person visiting the page. That means they need access to the spreadsheet, something you don’t have to do in the domain version. What’s cool, though, is that you can just give the whole world “view” access and this script will still work. What you have to do in addition to updating the “users” tab is to give those people “edit” access to the spreadsheet. Then everything works!
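
Either way, the gatekeeping itself is only a few lines. Here’s a sketch of what it can look like in doGet; the html file name and the assumption that emails live in the second column of the users tab are mine:

function doGet() {
  var email = Session.getActiveUser().getEmail();
  var users = SpreadsheetApp.getActiveSpreadsheet()
    .getSheetByName("users")
    .getDataRange()
    .getValues();
  // column 0 is the unique id, column 1 is the email
  var approved = users.some(function(row) { return row[1] === email; });
  if (!approved) {
    return HtmlService.createHtmlOutput("Sorry, you're not on the list.");
  }
  var template = HtmlService.createTemplateFromFile("index");
  template.data = loadData(); // the big object from earlier
  return template.evaluate();
}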

When users visit the page for the first time they have to go through a permissions check. Basically google checks your script to see what things you’re doing and makes sure the user is ok with that. The first time I did what’s described above for my consumer google account I noticed that the permission warning said that the script would have the ability to view, edit, and delete any of my drive files. Now I know I’m a trustworthy guy, but I figured even my friends would have a problem with that. Luckily I found this page that made it clear you can limit the access to just the associated spreadsheet, something I already was doing by giving them “view” access! Problem solved.

So, I think I’ve got a roughly-working beta version up and running. Please let me know your thoughts. Here are some starters for you:

  • I like this, especially how I could develop a web page typing a new line on every computer I stumble onto without having to load a full development environment.
  • I hate this: I need to do all my coding on my own local machine before even thinking about putting it up on the web. It’s too bad there’s no way to do that with GAS.
  • I like this but I’m nervous about whether it would scale. Why haven’t you just pasted in a bunch of nonsense problems to see when it breaks?
  • I got a “you’re not authorized” message when I tried to hack in and load a bunch of crap into your crappy database. Can you please give me access?
  • Your tag choices are dumb. Instead I think you should use . . .
  • I think you didn’t need to bother with restricting the permissions scope to just the one spreadsheet. I trust you!
  • If I’m at a google school can I build something that people outside of my domain could use?
  • Can users select problems and save them? Print them? LaTeX them?
  • What happens if you share the script with someone? Can they collaboratively edit? At the same time?
  • I’ve been laughing so hard at the fact that you sometimes code in PHP that I haven’t been able to digest the rest. Can you make it a flash video and put in on your myspace page?
  • I think it’s dumb to load the whole spreadsheet into memory. Just load in all the unique numbers and the rows they’re on and load stuff when you need it!
  • I just tried to email you about this at your job and got an away message saying you’re on vacation. You do this crap for fun?!
  • I see you have LaTeX in one of the problems. Are you just using MathJax?

Shooting circuits

I’ve posted before about how I struggle teaching complex circuits (really just circuits that contain batteries and resistors in ways that can’t be analyzed with parallel and series tricks). There you’ll read about how I find that if I just give my students one of the unknowns for free it allows them to show me how well they understand the basic principles of circuits without getting bogged down in the math of, for example, five equations and five unknowns.

I’ve shared the ideas from that post a bunch and occasionally I get feedback that it robs students of the ability to actually solve the circuits from scratch, since I’m giving them one of the unknowns for free. This post is about thoughts I’ve had about that, including some more substance for my ideas at the end of that post about guessing and checking.

Bridge circuit

The gateway drug that demonstrates the need for tools beyond series and parallel tricks is the bridge circuit:

Typical bridge circuit

The problem with this circuit is that you can’t model the resistors as a combination of series and parallel elements. Go ahead, try, I’ll wait!

… nope R1 and R2 are not a parallel pair

… nope R1 and R4 are not a series pair

… etc

Ok, now that you’re on board with that, the question is how to analyze such a circuit using the basic principles that went into developing the series and parallel tricks, namely that current flowing into a node flows back out again (conservation of charge or “no piling up!”), batteries raise the voltage from one side to the other by the EMF of the battery, and resistors reduce the voltage from one side to the other in a way that’s proportional to the current flowing through them (and the proportionality constant is conveniently named “resistance”).

Other answers to that question include:

  • Kirchhoff’s laws (do a bunch of loops and a bunch of nodes and hope you have the right mix that enables a successful linear algebra solution)
  • Mesh approaches that are really the same thing, with just a little different focus
  • Go in the lab and measure everything

My answer, as noted in that last post (it was 5 years ago!), is to make a guess for one of the currents and then follow through the ramifications of that guess until you reach a discrepancy. For the circuit above, for example, I would (note that when I say “voltage” I actually mean the voltage difference between that point and the bottom of the battery):

  • Start by making a guess for the current through R1
  • That enables me to calculate the voltage at the left node
  • That enables me to calculate the current through R4
  • Those two currents enable me to calculate the current through R3.
  • That enables me to calculate the voltage at the right node
  • That enables me to calculate the current through R2 (because I know the voltage drop across it)
  • HERE COMES THE COOL PART
  • That gives me two ways to calculate the current through R5:
    • One way is to consider the voltage drop across it (which we know) and then determine the current
    • The other is to use the current flowing into the right node and make sure nothing piles up

Unless you make a lucky guess, those two calculations will not be the same. I’m calling their difference a “discrepancy”.

So what now? Well, as I stated in the last post, do all that again with a different guess and find out how the discrepancy changes. Since it’s a linear circuit, you then “just” need to extrapolate from those two data points to find out what guess would yield a zero discrepancy.

When I wrote about this 5 years ago, I gave a nod to the fact that it’s a lot of work to do all that. But now that I’ve actually tried it a few times, it really isn’t! The first pass is when you establish the relationships, and the second is easy if you use a tool like a spreadsheet. It also turns out that if your first guess is zero and your second guess is one the extrapolation is really easy as well.

What I mean by that last point is that if d0 and d1 represent the two discrepancies for a guess of zero and one respectively, the correct current is simply d0/(d0-d1).
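
That formula is just what you get from assuming the discrepancy is a linear function of the guess g and asking where it crosses zero:

d(g)=d_0+\left(d_1-d_0\right)g=0\quad\Rightarrow\quad g=\frac{d_0}{d_0-d_1}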

Here’s an example. Let’s say that R1=1 ohm, R2 = 2 ohm etc and that V=10. Here’s the first pass assuming the current through R1 (labeled I1) = 0:

  • I1=0
  • Vleft=10
  • I4=10/4=2.5 down
  • I3= 2.5 left
  • Vright= 10+2.5*3 = 17.5
  • I2=(17.5-10)/2=3.75 up
  • I5a=17.5/5=3.5 down
  • I5b=6.25 up

So d0=6.25 – (-3.5)=9.75 (also note that Vright gives you a clue this is a bad guess since you wouldn’t expect any part of the circuit to have a voltage higher than the battery)

Here’s the second pass with I1=1:

  • I1=1
  • Vleft=10-1*1=9
  • I4=9/4=2.25 down
  • I3=2.25-1=1.25 left
  • Vright=9+1.25*3=12.75
  • I2=(12.75-10)/2=1.375 up
  • I5a=12.75/5=2.55 down
  • I5b=1.25+1.375=2.625 up

So d1=2.625-(-2.55)=5.175. Getting better.

That means that the correct current through R1 is 9.75/(9.75-5.175)=2.13 amps.

Yes, that seems to have gotten ugly, I admit. But repeating identical calculations is what spreadsheets are built for. Here’s one I built for this problem (note that I decided down or right would be considered positive):

Cells B2:D9 have the formulas indicated in column A. The yellow cell has the formula B9/(B9-C9)

Note the zero discrepancy in cell D9! Hmm, I wonder if certain people in my life will read that last sentence and let me have it.
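
If you’d rather script it than spreadsheet it, here’s a quick javascript sketch of the same two-pass calculation for the bridge circuit above (same resistor values and battery voltage; with the sign convention I chose here the discrepancies come out as -9.75 and -5.175, but the extrapolated answer is the same):

// "shooting circuits" for the bridge circuit: R1..R5 = 1..5 ohms, V = 10 volts
var R1 = 1, R2 = 2, R3 = 3, R4 = 4, R5 = 5, V = 10;

function discrepancy(I1) {
  var Vleft = V - I1 * R1;        // Ohm's law across R1
  var I4 = Vleft / R4;            // current down through R4
  var I3 = I4 - I1;               // node law at the left node (positive = flowing right-to-left)
  var Vright = Vleft + I3 * R3;   // voltage rise across R3
  var I2 = (V - Vright) / R2;     // Ohm's law across R2 (negative means it flows up)
  var I5fromOhm = Vright / R5;    // one way to get the current through R5
  var I5fromNode = I2 - I3;       // the other way: node law at the right node
  return I5fromNode - I5fromOhm;  // zero when the guess is right
}

var d0 = discrepancy(0);                   // first pass, guess of zero
var d1 = discrepancy(1);                   // second pass, guess of one
console.log(d0 / (d0 - d1));               // 2.131... amps, the correct current through R1
console.log(discrepancy(d0 / (d0 - d1)));  // essentially zero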

So now I’m starting to think this method has some merit. We’re always talking about the value of spreadsheets in physics teaching (usually lab, but still) and now with this approach you’ve really only got to see what students do for the formulas in column B to see if they get the physics!

Not only would you look at their formulas, but the order they go through the circuit is important, and it makes me feel that this approach is closer to the “problem solving” end than the “exercise” end of the spectrum that Ken Heller is always pestering me about. What I mean is that in the usual Kirchhoff procedure students are given an excellent algorithm that has simple choices involved: What loops and nodes should I do? What direction should I go around the loops? Contrast that with carefully seeing what new piece of information you can discern from the previous step, as is needed in this method. I think it involves more decision making. It also has some great teachable moments, like above when I pointed out a voltage that was higher than physically possible.

Why I call it shooting circuits

I was sharing this approach with a colleague yesterday and she said it reminded her of the shooting method for solving second order differential equations. Here’s an example of how I use that to solve for the quantum states of a hydrogen atom. In that method you start with a guess of the wavefunction at one side of a quantum well and then look to see how it screws up on the other side. Then you make an adjustment to your guess and try to extrapolate the results so that it doesn’t screw up on the other side.

So this is a lot like that. What the heck, we’ll call it shooting circuits!

Series/parallel comparison

Consider this incredibly common circuit:

A typical circuit used to apply series and parallel tricks

First let’s consider the work necessary to calculate the current through all the resistors:

  • Combine R2 and R3 into Req1
  • Combine R1 and Req1 into Req2
  • Determine the current through Req2
  • Recognize that the current through R1 and Req1 is that same current
  • Determine the voltage at the node by finding the voltage drop across R1
  • Determine the current through R2 and R3 similarly using the now known voltage drops across them.

It’s interesting that many students think they’re done at step 3 (or possibly 2). They groan when you tell them that they still have to reconstruct the circuit to find all the currents.

Now let’s do it the new way, again using R1=1, R2=2, R3=3 and V = 10:

Similar to the previous spreadsheet but for the series/parallel circuit

So it’s 4 different statements of physics (A2:A5) and we’re done! All four of those statements demonstrate the student’s mastery of either Ohm’s law for a resistor or the node law. But remember that the order is interesting too! Can you do it in a different order? Does it work if you choose to make your guess for one of the other currents? Give it a try!

Pitfalls

I’ve been playing with this quite a bit and haven’t really found many pitfalls. One minor one involves the most basic parallel circuit (one battery, 2 parallel resistors). If you guess the current through one of the resistors you immediately get a discrepancy regarding the voltage drop across that resistor. That’s cool, as then you can apply the method, but you learn nothing about the other resistor! So then you’d have to repeat for that one, I guess. I think that means that a complex circuit that basically has two parallel parts might suffer from that problem.

Your thoughts?

Here are some starters for you:

  • I like this method, but would it work for . . . ?
  • I think this method sucks and here’s why . . .
  • What’s wrong with ending a sentence with a number and then an exclamation point?
  • What circuit drawing software do you use? They really look great!
  • Mathematicians would call this method . . .
  • Can you tell me more about how you can assume the discrepancy is a linear function of the original guess?
  • Of course you can describe a bridge circuit as series and parallel! Here’s how . . .
  • Seriously, you want students to do their homework with a spreadsheet!? You’re an idiot
  • I’m not sure I understand your pitfall situation. Can you describe it better?
  • Here are 7 more pitfalls I thought of within 10 seconds of reading this:
  • I clicked through to the old post (which you oddly called your last post – what the heck?) and gave up on you when I saw you hate Kirchhoff’s loop law!
  • Would this work for the “you have a cube made of resistors . . .” problem? I hate that problem.

App for facilitating calling on people

“Two posts in one day?” you ask? Yep, I’ve kind of forgotten how useful it is to organize my thoughts here and to get such useful feedback from you awesome folks.

I’ve been working on a new web app and I’m looking for ideas for how to improve it. It’s called “My Turn Now” and it helps people “raise their hands” in a discussion in a way that allows the facilitator(s) to equitably lead the discussion. The name comes from the phrase my middle kid used to say (imagine a really cute 5-year-old voice when saying it) when they wanted a chance to try something.

The problem it addresses

I was actually inspired to write it when I took over facilitating a standing committee of faculty. It only had 8-10 people on it but it was clear that a few were frustrated at how they were occasionally being ignored or talked over. I wanted to be able to better keep track of who wanted to contribute and to do it as equitably as possible.

It was inspired by the “raise hand” feature of so many online web conferences, most notably Elluminate Live back in its heyday. If a participant hit the button, the facilitator (and the rest of the “room”!) was shown the chronological list of raised hands.

How it works

The facilitator begins a meeting and sends around a link to all participants. They’re shown a window with two buttons side-by-side. One is for “new topic” items and one is for “follow up” questions. Underneath each is a live chronological queue of each type of question, showing the name of each person who has raised their hand and how long ago they did it. Here’s a dummy example (note that this one spanned multiple days).

Example of what a user (non-facilitator) sees

This is a screenshot of user sdfdsfsd. That’s why only that “raised hand” has buttons next to it. Each user can unraise their hand or transfer their question over to the other queue.

The facilitator has a similar view but with buttons next to each entry that allow them to “call on” the person. Really that just removes it from everyone’s screen.

As a facilitator you can watch both queues and decide how long to let the current topic go while also watching to see how many people want to contribute.

At the end of the class/meeting/whatever, the facilitator can get a report about the discussion. Here’s an example from the first meeting I used it in a couple years ago:

Chart available to facilitator(s) after a discussion

The small text in the middle explains how to read the colorful chart. A quick impression is that this meeting spent most of its time following up a single idea because everything went blue for most of the meeting.

The chart at the top can be useful in seeing what kind of contribution each person made. It can also help you get a sense of the experience each contributor had.

Programming logistics (skip if you don’t care)

The database schema for this app is pretty straightforward. I store meeting details in one table and hand raises in another, tracking whether each raise is a new topic or a follow up and whether it’s been called on. The “created_at” and “updated_at” timestamps are updated automatically, so the date chart above is pretty easy.

The chart uses the fantastic Google Charts API. I love using that. You just have to get your data in the right format and it just works.

The hard part was finding a way to push the data to all the participants in real time. I have played around a little with Meteor which is really good at that, but I could never get my local server working right. Luckily I dug a little in the Laravel/PHP world and stumbled on Pusher. It does all the dirty work of the crazy realtime crap, leaving me with just managing the data. Note that the free version of Pusher has a cap of 100 simultaneous connections so if I really want to extend the use of this I’ll have to start paying some money. I’d only do that if it’s worth it, of course.
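
To give a flavor of what’s left for me to manage, here’s a rough sketch of the client side of that (the channel and event names here are made up for the example; the real app’s names differ):

// runs in each participant's browser once the page loads
var pusher = new Pusher('your-app-key', { cluster: 'us2' });
var channel = pusher.subscribe('meeting-42');        // one channel per meeting (made-up name)
channel.bind('hand-raised', function(data) {
  // data carries the name, which queue, and a timestamp;
  // the page just re-renders the two queues with the new entry
  addToQueue(data.queue, data.name, data.raisedAt);  // hypothetical page function
});
channel.bind('called-on', function(data) {
  removeFromQueue(data.queue, data.handRaiseId);     // hypothetical page function
});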

What excites me about it

I know I’m not great at calling on people equitably. I also know that when I’m best at that, I’m not great at actually following the discussion. I think this could be a great tool for folks to diagnose issues with how they (or possibly their student discussion leaders?) facilitate conversations.

Feedback I’ve received has been interesting. I’ll get to the negative stuff below, but one major positive is that people love getting to know the names of people. I did it in a group of 20 or so faculty and I got exactly that feedback. It was interesting because I just assumed they all knew each other.

I think the chart/roundup could be really useful in diagnosing lots of things:

  • How much did everyone contribute?
  • Who has to wait the longest on average?
  • Are there patterns to who I call on?
  • Do I spend too long on single topics?

I also think that having everyone see just who and how many are interested in participating can help people self-regulate their own contributions.

If someone is way down the “new topic” queue but realizes their point meshes with the current conversation topic, they can hit “transfer” and likely move way up because that queue might be shorter. Similarly if the current topic goes away from your follow up, you can shift over to the new topic queue.

Problems

Other, shall we say less-positive, feedback is mostly about how unnatural it feels. People really like to 1) just start talking and/or 2) physically raise their hands, often while using body language to indicate the relevance of their particular contribution.

There were a lot of technical problems with version 1.0 (small buttons, hard to see, duplicate names, etc) but I’ve mostly cleared those up with version 2.0. I’m not really as worried about those I guess.

What’s next?

So now I need help.

  • Is this something I should encourage others to use?
  • What are the best test cases for it?
  • What are the major assumptions I’ve built in that I might be blind to?
  • What student populations might be helped? hurt?
  • What should be added? Subtracted?

Here are some starters for you:

  • I think this is cool! Can I use it? I’m excited to use it in . . .
  • I can’t believe you ripped off my idea. Ever heard of Google? Use it, jerk.
  • I like the chart, especially the part that . . .
  • I hate the chart. Instead you should . . .
  • I checked out Meteor and Pusher. They suck. Instead you should . . .
  • Why don’t you just write an iOS app?
  • Why don’t you just write an Android app?
  • This assumes students have smart phones. You need to stop assuming people have those.
  • Wait, you program in PHP. Last post I’m ever going to read of yours, goodbye.
  • Why don’t you write this in Mathematica?

Talking to parents of admitted students

One of the roles I have in the dean’s office is to talk to parents at admissions events. This week I talked with three different groups of parents of admitted students the day before their students registered for the fall. I wanted to take some time to get down some of the things we talked about.

Helicopter versus snowplow parents

Right at the beginning I talk about my view of helpful parents for college students. As the director of the First Year Seminar I think a lot about this, and one of the great things about working in the dean’s office is getting to know the awesome work that my colleagues in Student Affairs (like the Dean of Students) do. They’re the ones who have really taught me to value the supportive role that parents can play.

Here are my definitions:

  • Helicopter parents
    • Emotionally supportive
    • Help students understand the nature of the choices students have
    • Ultimately help students make decisions
  • Snowplow parents
    • Clear the path of anything in the way
    • Determine the direction of the path
    • Make decisions for the students

I recognize there’s a lot of nuance and gaps in those definitions, but they get me pretty far when talking with (mostly nodding) parents. Some people like “lawn mower parents” in place of “snowplow parents” but, coming from Minnesota, I really think about those times when you’ve gotten a foot of snow and only have time for a quick path. You define not only where you are going to walk, but where your mail carrier is going to walk, where your kids are going to walk, and even where the pets are going to walk. My colleague rightly points out, however, that snow plows often follow defined paths whereas lawn mowers can create very strange but well-defined paths. Regardless, the big deal is who makes the decisions.

As I talk about various signs of success that parents can watch for, I like to contrast how a helicopter vs snowplow parent might respond. If the student asks for help in deciding what to register for, the snowplow parent might say “you did well in biology 3 years ago, you should register for that” while a helicopter parent might ask “why did you do so well in biology 3 years ago?”

What problems do you enjoy solving?

Students at this point in their life are inundated with questions like:

  • What are you passionate about?
  • What do you care about?
  • What are you good at?

Those questions and others like them start to morph into “what are you going to major in?” While I find that question to be a part of interesting and useful conversations, I’ve started to use a different one: What problems do you enjoy solving?

Another way of asking that is to encourage students to reflect on times when they’ve looked up and been astounded to see it’s after midnight. What were they working on? Why were they so focused? Did they enjoy it?

What’s particularly interesting is how that question contrasts with the 3 above. A student might be passionate or care about something but not enjoy the work it takes to follow those passions. The simple example I use is “world peace.” People can be passionate about that, but many don’t enjoy the work it takes to achieve it. And the “are you good at it” one is particularly significant: If you know of problems that you enjoy solving, higher ed is a fantastic place to get better at doing it. Getting better then leads you to even more interesting problems! It’s not like we get you to the point where you’re awesome at something and everything is easy from that point on. How boring!

Parents can be awesome at helping students answer the question, which can then help them make all kinds of academic decisions. It can also help with the mid-October phone call that goes like this:

  • “How are things going?”
  • “Ugh, I’ve got 20 more calculus problems to do tonight.”
  • “Shoot that stinks”
    • This part is important. I learned a great parenting lesson from my sister-in-law: deal with your kid’s emotions first, then the logistics of the problem. It only has to take 10 seconds.
  • “Is this getting you any better solving problems that you enjoy solving?”

Ok, I know, it wouldn’t happen quite like that. But something close to it could happen.

Multiple doors

I also talk with parents about the delicate balance students have to achieve in keeping some doors open, shutting some, and diving through others. “What kind of problems do you enjoy solving” can be helpful with that, but I know the paralysis of wanting to keep everything open. It’s important to recognize that you have to dive through at least one and find whole new sets of doors. But if things go south and you have to back up again, are those other doors rusted shut?

My best piece of advice is to change the paradigm. Change them to windows, prop some open, whatever. Think about, for example, non-academic ways to explore those other pastures. At my institution you can take 4 classes, play in the Jazz Ensemble, play a sport, and volunteer at the elementary school across the street all at the same time. What can those non-academic experiences do to help you understand your door environment?

SEEC

I’ve written a little about the SEEC paradigm before, but I’ve found that it really helps when talking to both students and parents. Quickly, students should:

  • SEE that all ideas are connected
  • EXPLORE those connections
  • EVALUATE those connections
  • CONTRIBUTE new connections and ideas

Encouraging parents to talk to their students about this paradigm is useful, I think. Every course they take should add to the student’s “lenses” to look at the world. Every new lens helps you SEE whole new connections for an idea. Seeing them is always the first step to EXPLORING, EVALUATING, and, most importantly, CONTRIBUTING. Is that calculus homework going to help you SEEC knowledge?

It’s fun to talk to parents about what to watch for by Thanksgiving in the fall. Have they CONTRIBUTED a new idea? They should have. Either in a discussion or a paper or even a homework assignment. Certainly they should be CONTRIBUTING “big” ideas by the time they’re ready to graduate, but even in that first semester they can do it. But they have to S-E-E first, and that’s what our curriculum is all about (FYSEM, General Education, and Majors all do that).

Signs of success

Here are a few of the “signs of success” I encourage parents to watch for:

  • Being able to articulate what they’re up to using the SEEC paradigm
  • They should have had at least one personal conversation with every instructor they have by Thanksgiving. Typically at my institution the biggest class they’ll have is around 40 and I know my colleagues can handle this.
  • Are they owning their education? Do they turn the lights on in the classroom or wait for the instructor to do it? Are they EXPLORING cool connections that a new lens they’ve developed lets them see?
  • They should average a new faculty or staff member name on their “forever list” every year.
    • “forever list” is my shorthand for that list of people you keep contact with. It’s your holiday card list, or the list of people you’d consider inviting to your wedding. I encourage parents to ask students if they’ve added one to the list around April of their first year. They really should be making that strong of a connection with a faculty or staff person every year. Admittedly Facebook has changed this equation a little, but mostly people get what I mean when I say this.

In addition to those signs, it’s helpful to talk to parents about the W curve. It’s a plot of how settled/happy/adjusted students are at college in their first term and it looks like a ‘W’.

Your ideas?

Thoughts to add or subtract? Here are some starters for you:

  • Thanks for this, it really helps me. What especially resonated was . . .
  • What a waste of time. I could have written this myself, but I would have changed . . .
  • Helicopter parents are the bane of my existence. Why are you praising them. Please take this post down.
  • Here’s a few more descriptions that can fill out your parent spectrum . . .
  • What did that last commenter mean by “parent spectrum”?
  • I think “follow your passion” is a much better way to talk to students and here’s why . . .
  • I think “follow your passion” has harmed some students. I’m not sure I like your approach any better but that last commenter was a little over the top so I just stopped by to say thanks for giving me something to think about.
  • I liked what you said about balancing open doors. Here’s what’s helped me with that . . .
  • Students should know the one door they want to go through before enrolling. It makes things much easier.
  • I think it should be SEECC with the second C being “communicate”
  • SEECC is really hard to pronounce. What was that last commenter thinking?
  • I like your “signs of success.” Here’s a few that I use as well.
  • Your “signs of success” are way off the mark. Instead you should use . . .


Scripting in the Dean’s office

Today I spent a good part of my day solving a couple of problems with a couple of colleagues. It was a pretty typical day in the Dean’s office (I’ve been the Associate Dean for undergraduate programs in the College of Liberal Arts for a couple years now) in that there were logistical problems to be solved that I was able to help with. What was interesting was that for both problems I leveraged my experience in scripting algorithms to accomplish tough tasks. I thought it might be fun to document a little of what I did to help with arguments about whether learning scripting is a useful thing for folks to do. Certainly a lot of the programming that I do is for particular physics projects, but it’s been interesting how often my skills have come in handy in the Dean’s office.

Assigning New Student Mentors to FYSEMs

One of the hats I wear is that I’m the director of the First Year SEMinar program (FYSEM). One of the cool things we do with that program is to build a student success triangle for new students consisting of a FYSEM instructor, a Campus Colleague (a staff member at the institution), and a New Student Mentor (NSM). We find that triangle works really well, and right now we’re right in the midst of assigning NSMs to the FYSEMs scheduled for this fall.

About half the FYSEM instructors identify NSMs that they want to work with. The remaining NSMs give their top three preferences and I write an algorithm to pair them up, trying to make them all collectively as happy as I can based on those preferences.

Basically I use the approach I lay out in these two posts. I randomly assign the NSMs to the available FYSEMs and look to see how happy everyone is. I generate an entire generation of such assignments and determine which ones are the happiest. From those I choose a few and mutate them by making a few switches (randomly). This produces the next generation and I repeat it.
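
In code form, the loop looks something like the sketch below (this isn’t the code I actually run, mine lives in Mathematica, but it’s the same idea, with the happiness scoring left as the knob I end up tweaking):

// a sketch of the evolutionary pairing loop (not the code I actually run)
// happiness(assignment, nsms) scores an assignment, e.g. 3 points for a first
// choice, 2 for a second, 1 for a third -- the part that gets tweaked each year
function evolveAssignments(nsms, fysems, happiness, generations, popSize) {
  var population = [];
  for (var i = 0; i < popSize; i++) {
    population.push(shuffle(fysems.slice())); // assignment: index = NSM, value = FYSEM
  }
  for (var g = 0; g < generations; g++) {
    population.sort(function(a, b) { return happiness(b, nsms) - happiness(a, nsms); });
    var survivors = population.slice(0, Math.floor(popSize / 5)); // keep the happiest fifth
    population = [];
    survivors.forEach(function(parent) {
      population.push(parent);
      for (var k = 0; k < 4; k++) {
        population.push(mutate(parent.slice())); // mutants: one random swap each
      }
    });
  }
  population.sort(function(a, b) { return happiness(b, nsms) - happiness(a, nsms); });
  return population[0];
}

function mutate(assignment) {
  // swap two randomly chosen slots
  var i = Math.floor(Math.random() * assignment.length);
  var j = Math.floor(Math.random() * assignment.length);
  var tmp = assignment[i]; assignment[i] = assignment[j]; assignment[j] = tmp;
  return assignment;
}

function shuffle(arr) {
  // Fisher-Yates shuffle for the initial random assignments
  for (var i = arr.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
  }
  return arr;
}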

What’s interesting is that I’ve done this for three cycles now and each time I have to make small tweaks to the cost/fitness calculation. This year we had one extra NSM and I had to determine which FYSEM could get two to maximize the happiness of everyone. I didn’t want to re-build the whole system (which currently assumes the number of NSMs and FYSEMs is the same) but realized instead that I could just duplicate one of the FYSEMs and then run the algorithm. Of course that forces that FYSEM to be the one that gets the spare, but the whole thing runs fast enough that I just repeated that with every possible FYSEM to be the extra one and at the end looked for the happiest situation. It worked great!

Fixing hyperlinks in a Google Doc

We’ve got an important visit coming to campus this weekend by a team of observers. They’ve been given a bunch of linked documents to get them prepared for their visit, but we hit a technical snag. It seems the documents we sent occasionally have broken links. They’re not really broken, they just seem to lose the folder structure that’s built in. Regardless, we wanted to make sure that when they were here they for sure had the access they need. I was talking with a colleague in the department and we wondered about using a structured Google Drive folder system as a backup. I thought it might work, but my colleague pointed out that all of the links came in with the wrong structure when we converted it all to Google Docs.

I said I could probably help, but I wanted to make sure that there was a clear path to doing it. He said that all the links end with the proper file name, and that those files were all in a different Drive folder. I said I could probably write a script to get it done, but I wasn’t sure how long it would take. I predicted two hours of learning to fix the first link and then two minutes to fix the other 246 links. He pointed out that he figured it would be 2-4 hours of his labor to do it, so it didn’t seem to be the obvious solution. However, I had the time and he wasn’t sure he could do it today, so off I went.

Long story short, I think it took me only a total of an hour to get the script working, and then it really was only two minutes to fix them all. Pretty cool!

First I just googled how to find links in a Google Doc and found this super helpful Stack Overflow post. It was frustrating to see how hard finding the links was, but I really loved two things about it: 1) It just hunkers down and deals with the fact that every character that’s part of a hyperlink has a connected url. That’s really a pain, but the code clearly just brute forces its way through until it gets to a character that doesn’t have a connected url. It only collects the url once, then spends the rest of its time hunting for the end. 2) It uses a very cool recursive approach, scraping any links it finds and, if it stumbles on a child element that contains text, it just sends that child through the very same function.

At first I just wanted to make sure that it could find the links I was looking for (ones that looked like “http://Evidence/…”). That worked pretty well, so I tried to figure out how to do the replacement. First I had to grab the text after the slash and decodeURI it (that’s a javascript function that mostly just turned %20 into a space). Then I had to find the Google Drive url for the file with that name. I’ve learned from the past not to do that sort of hunting around Google Drive over and over again because it’s really kind of slow. So instead I pulled in all the evidence files and built a javascript object with keys given by the file name and values given by the url. Then I could just get the new correct url by doing efiles[filename]. Very cool.
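
Building that lookup object is only a few lines (the folder id here is a placeholder for wherever the evidence files live):

// build {filename: url} for every file in the evidence folder
function buildFileMap(folderId) {
  var efiles = {};
  var files = DriveApp.getFolderById(folderId).getFiles();
  while (files.hasNext()) {
    var f = files.next();
    efiles[f.getName()] = f.getUrl();
  }
  return efiles;
}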

So I made a loop that went through each found link, found the correct url, and then updated the text in the original Google Doc. What was super cool about the Stack Overflow code was that the elements I was dealing with (searching, finding children, doing replacements) were live in the sense that if you made a change, it actually changed them in the original doc. Very cool.

When I first ran it I was super happy and I called another colleague to double check that all the new links were right. However, during that phone call I noted that a bunch hadn’t been fixed. All the ones that were part of bullet points were untouched! So I spent a half hour trying to understand what was different about them. Unfortunately I could have saved all that time if I’d just read the comments under the Stack Overflow code. It turns out that code assumes that all hyperlinks have unlinked characters at the end. The bullet point ones didn’t have that! A simple adjustment to one of the if/then arguments fixed it and I was done!
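
For anyone curious, the replacement loop boils down to something like this stripped-down sketch (it only walks the body’s paragraphs and skips the recursion that the Stack Overflow code handles; the run-past-the-end check at i = n is what takes care of the bullet point edge case):

function fixLinks(efiles) {
  var body = DocumentApp.getActiveDocument().getBody();
  body.getParagraphs().forEach(function(p) {
    var text = p.editAsText();
    var n = text.getText().length;
    var start = -1, url = null;
    for (var i = 0; i <= n; i++) {
      var u = (i < n) ? text.getLinkUrl(i) : null; // null once we're past the last character
      if (url && u !== url) {
        // the linked run [start, i-1] just ended; fix it if it's one of the broken ones
        if (url.indexOf("http://Evidence/") === 0) {
          var filename = decodeURI(url.split("/").pop());
          if (efiles[filename]) text.setLinkUrl(start, i - 1, efiles[filename]);
        }
        url = null;
      }
      if (u && !url) { start = i; url = u; }
    }
  });
}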

Upshot

So, a fun day. I got to solve two problems that needed solving and I got to do some fun coding, using two very different programming languages (Mathematica and javascript). I would argue that my particular skill set is an excellent one to add to a Dean’s office, as these sorts of problems show up a lot. Maybe I’ll try to document more of them here just to have a record of them. Off the top of my head, here are some other projects I’ve used these skills on:

  • Student evaluation comparisons
  • Finding student paths through our curriculum
  • A neural network trying to understand what makes our students successful
  • Sending individualized Dean’s List notifications
  • Sending emails to students on our Early Alerts list and connecting them to proper resources
  • Collecting applications for general education requirements and assigning them to random members of the Undergraduate Curriculum Committee
  • Helping folks transition off of committees by changing the owner on thousands of documents
  • Making graphs of the enrollment trends for courses

Your thoughts?

Here are some starters for you:

  • I really like how you paired up the NSMs. What I liked the most was . . .
  • I don’t understand why you don’t just randomly assign the NSMs. Who cares about their preferences?
  • Can you tell me more about the triangle, especially the Campus Colleagues?
  • In your 7 year old post you talk about Nobel Prize work on the pairing problem and yet it seems you still haven’t read and applied their work. Jerk.
  • Why don’t you just print out all the evidence docs and put them in an awesome binder for the visitors?
  • Are you saying there should be some technical requirements to be an Associate Dean?
  • Why don’t you just write Ass Dean?
  • I have a pairing problem. Can you write an algorithm for me? It would have to be in python.
  • Google wants everyone to just use search to find everything. Why didn’t you just strip the hyperlinks and tell people to just search for any evidence they need?
  • Let me get this straight: Half this post is just riding on the coat tails of someone who wrote a Stack Overflow answer. How can you call yourself a programmer?

Doppler Drum Corps

One of my favorite oral exam questions to give students in introductory physics classes is to ask them whether marching bands should worry about tuning because of the Doppler Effect (lots of details below but the short version: if there’s relative motion between the sound source and listener, the two disagree about what pitch they hear). It often leads to a great conversation about who’s moving and how and who might hear any tuning problems. Sometimes I get a confident “no” from a student and I know immediately that they’ve got some marching band experience. Mostly they talk about how they’ve never noticed it so it must not be a problem. It’s fun to do some quick calculations to see how fast you have to walk for this to be a problem.

Enter my kid’s fun adventure these past few months. They’ve committed themselves to taking the next step from their high school marching band experience (when they won the state championship last summer!) and have set a goal of getting into one of the elite Drum Corps International programs. Long story short: They made it! They tried out for a few corps to get a sense of their culture and are super excited to play with Phantom Regiment this summer.

So, the question we’ve been talking about lately is whether marching corps at the elite level should worry about the Doppler effect. They told me a really interesting detail about the training they go through: in order to optimize the tuning for various chords, some corps ask their members to perfect the process of being able to raise or lower their pitch by eight cents. What are cents? If you consider two consecutive keys on a piano, break up the pitch difference between them into 100 pieces. Each of those is a cent. Therefore eight cents is effectively eight percent of the way to the next note on the piano. When they told me about that I started to wonder if the Doppler shifts you’d get in normal Corps routines might be on par with those eight cents.
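
For the record, the conversion between a frequency ratio and cents is just logarithmic, so eight cents works out to a frequency ratio of about 1.0046 (roughly half a percent):

\text{cents}=1200\log_2\left(\frac{f_2}{f_1}\right)\qquad\Rightarrow\qquad 8\text{ cents}\leftrightarrow\frac{f_2}{f_1}=2^{8/1200}\approx 1.0046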

Structure of this post (for those that want to skip the derivations 🙂

  • Doppler effect discussion including a derivation for the Doppler shift at arbitrary orientations of the source and listener (though always with a stationary listener)
  • Pitch/frequency discussion including the Just Noticeable Difference
  • Issues around simulating the Doppler shift for brass instruments
  • Simulations of brass instruments doing simple maneuvers on the field and how that affects their perceived sound
  • Discussion of the ramifications (if any) for Drum Corps

Doppler Effect

For everything I’ll be talking about in this section, the listener will be stationary and only the source will be moving. There are two reasons for this. First, it’s what happens in Drum Corps. The only listeners who are moving are the players themselves and they’re not the target audience. Second, this derivation is much easier without the listener moving. See if you can spot why.

Below is a sketch of the generic situation. The source is L units “up” and x units to the side of the listener. The source is moving at speed v straight down. Note that this represents all possible orientations (see below for a proof of that).

Original orientation of the source and listener

In order to figure out the frequency that the listener hears, we’ll focus on the period of time between when the first and second wavefronts hit. The first is launched when the two are in the orientation shown above. The second is launched after the source moves down for one period as measured by the source as shown below.

orientation of the source and listener when the second wavefront is launched

The time it takes the first wavefront to get to the listener is given by the distance it has to travel divided by the speed it moves at, which is the speed of sound that we’ll label “c”.

t_1=\frac{\sqrt{L^2+x^2}}{c}

The time it takes for the second wavefront to get to the listener is the sum of the time that the source waits to launch it (also known as the period as determined by the source) and the time it takes to travel from that location to the listener:

t_2=T_1+\frac{\sqrt{\left(L-vT_1\right)^2+x^2}}{c}

where T_1 is the period according to the source. The difference in those times is the period that the listener would measure:

T_2=t_2-t_1=T_1+\frac{1}{c}\left(\sqrt{\left(L-vT_1\right)^2+x^2}-\sqrt{L^2+x^2}\right)

Typically we calculate the Doppler shift as the ratio of listener frequency to source frequency:

\frac{f_\text{listener}}{f_\text{source}}=\frac{f_2}{f_1}=\frac{T_1}{T_2}=\frac{T_1}{T_1+\frac{1}{c}\left(\sqrt{\left(L-vT_1\right)^2+x^2}-\sqrt{L^2+x^2}\right)}

This is ugly! We were hoping for some expression that would be independent of either frequency, since certainly that’s how the Doppler effect is typically derived. The way we get around that is to take the limit when T_1\rightarrow 0. Why do that? If we expect a common ratio to work it should work for all frequencies, and when the period goes to zero then the source has barely moved. Since most marchers move a lot slower than the speed of sound this seems to be a reasonable thing. Taking that limit forces you to go to the Hospital. Sorry, I meant to say L’Hopital. The upshot is:

\frac{f_\text{listener}}{f_\text{source}}=\frac{1}{1-\frac{v}{c}\frac{L}{\sqrt{L^2+x^2}}}

By the way, you get the same result if you take only the first term of a Taylor expansion of the full expression above. Also note that if x=0 you get the familiar:

\left.\frac{f_\text{listener}}{f_\text{source}}\right|_{x=0}=\frac{1}{1-\frac{v}{c}}=\frac{c}{c-v}
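If you’d rather not do the L’Hopital work by hand, here’s a small sympy sketch (my own variable names) that takes the same limit and checks the x = 0 special case:

import sympy as sp

T1, L, x, v, c = sp.symbols('T1 L x v c', positive=True)

# listener period: the source period plus the change in travel time for the second wavefront
T2 = T1 + (sp.sqrt((L - v*T1)**2 + x**2) - sp.sqrt(L**2 + x**2)) / c

ratio = sp.limit(T1 / T2, T1, 0)      # f_listener / f_source in the T1 -> 0 limit
print(sp.simplify(ratio))             # should match c*sqrt(L**2 + x**2)/(c*sqrt(L**2 + x**2) - L*v)
print(sp.simplify(ratio.subs(x, 0)))  # should reduce to the familiar c/(c - v)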

While this seems like quite a constrained situation, it turns out that any (stationary listener) scenario can be described with an appropriate L and x. The reason is that there are three physical points of interest: 1) the original location of the source, 2) the location of the listener, and 3) the location of the source when it launches the second wavefront. Those three points will always be on a plane together and so you can reorient them so that (3) is directly below (1) and then you can measure L and x from that. If you let \hat{v} represent the unit vector from 1 to 3 (that is, the direction the source is moving) and if you label the three points \vec{r}_i, you get the following for L and x.

L=\left(\vec{r}_2-\vec{r}_1\right)\cdot \hat{v}

x=\left|\left(\vec{r}_2-\vec{r}_1\right)-L\hat{v}\right|
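Here’s that reduction as a little numerical function (a Python sketch with names I made up, not the Mathematica I actually used): hand it the source position, the listener position, and the source velocity, and it extracts L and x and returns the frequency ratio.

import numpy as np

def doppler_ratio(source_pos, listener_pos, source_vel, c=343.0):
    """Frequency ratio f_listener/f_source for a stationary listener."""
    v = np.linalg.norm(source_vel)
    v_hat = source_vel / v                      # direction of the source's motion (point 1 toward point 3)
    separation = listener_pos - source_pos      # vector from the source to the listener
    L = np.dot(separation, v_hat)               # longitudinal distance (positive if approaching)
    x = np.linalg.norm(separation - L * v_hat)  # transverse miss distance
    return 1.0 / (1.0 - (v / c) * L / np.sqrt(L**2 + x**2))

# sanity check: head-on approach at 1% of the speed of sound -> about a 1% upward shift
print(doppler_ratio(np.array([0.0, 10.0, 0.0]),
                    np.array([0.0, 0.0, 0.0]),
                    np.array([0.0, -3.43, 0.0])))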

To get a sense of how the Doppler shift changes as x moves away from zero (x = 0 being the case usually derived in introductory physics), here’s a plot of the perceived frequency when the source frequency is 440 Hz (common tuning note), with L = 10 meters, x ranging from zero to 10 meters, and the source moving at 1% of the speed of sound:

Doppler shift for 1% speed of sound with a transverse position given by x and a longitudinal distance of 10 meters

Another way of looking at it is to plot what you hear as a sound source approaches and then passes you, missing you by, say, 1 meter. This time I’ve set the speed to 55 mph.
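If you want to reproduce that fly-by curve yourself, here’s roughly how I’d set it up in Python (a sketch, not my actual Mathematica, and it ignores the tiny delay between when each wavefront is emitted and when it arrives, which barely matters at these speeds): march the source along a line that misses the listener by 1 meter and evaluate the shifted frequency at each moment.

import numpy as np

c = 343.0          # speed of sound (m/s)
v = 55 * 0.44704   # 55 mph in m/s
f0 = 440.0         # source frequency (Hz)
miss = 1.0         # closest-approach distance (m)

t = np.linspace(-3, 3, 601)    # time, with closest approach at t = 0
L = -v * t                     # longitudinal distance to the listener (positive while approaching)
f_heard = f0 / (1 - (v / c) * L / np.sqrt(L**2 + miss**2))

for ti, fi in zip(t[::100], f_heard[::100]):
    print(f"t = {ti:+.1f} s  ->  {fi:.1f} Hz")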

Hearing pitch changes

I know, I know, you don’t care about all that derivation. You just want to know if people marching around causes enough of a problem for elite drum corps. Ok, fine, but now we have to talk about how much pitch shifting you, or more likely the DCI judges, can handle before things are either noticeable or bad (or perhaps those two are the same). Mathematically, as soon as a pure tone gets mixed with another that’s even one cent off you’ll hear beats, which is often the hallmark of things being out of tune, but even then the two tones have to be played together long enough for you to notice (one cent off at 440 Hz is about 440.25 Hz, so the beat period would be 4 seconds). Secondly, there aren’t any pure tones in Drum Corps (no flutes allowed), so there’s always something that isn’t perfectly in tune.

That takes me to another favorite thing to teach: You can’t possibly get all 12 notes in an octave (7 white keys and 5 black keys on the piano) to be in tune all at once. Speaking as an amateur piano tuner, that really sucks! What’s going on here? Western music has coalesced around 12-tone scales for centuries and now you’re learning that it’s an impossible task. Yep, them’s the breaks. The way I usually teach it is that if you declare a frequency ratio of 3 to 2 as pleasant sounding (it’s what we call a perfect 5th in music, so C and G together or, for us brass-leaning folks, B flat and F), then you should be able to go all the way around the circle of fifths by raising the frequency by a factor of 1.5 (3 to 2, remember) and eventually come back to your original note, though of course several octaves higher. Ok, let’s do it. We’ll start at A 440 and keep multiplying by 1.5 twelve times to go all the way around the circle of fifths and back to an A. The order is A, E, B, G flat, D flat, A flat, E flat, B flat, F, C, G, D, A. What do you get? 440\times 1.5^{12}=57088.38. Ok, big deal. But how many octaves up is that? 57088.38=440\times 2^n\rightarrow n=\log(57088.38/440)/\log(2)=7.02. Uh oh, that’s not an integer! See the problem? If it were a perfect 7 octaves we’d be in business, but it’s not.
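Here’s that arithmetic as a few lines of Python in case you want to poke at it (the mismatch it spits out at the end, roughly 23.5 cents, is what’s usually called the Pythagorean comma):

import math

start = 440.0                        # A 440
around_the_circle = start * 1.5**12  # up twelve perfect fifths: A, E, B, ... and back to an A
octaves = math.log2(around_the_circle / start)

print(around_the_circle)                  # ~57088.4 Hz
print(octaves)                            # ~7.02 octaves, not an integer, and that's the whole problem
print(1200 * math.log2(1.5**12 / 2**7))   # the mismatch in cents (~23.5)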

So how does the music industry deal with that mistake? Well, there are lots of ways. Each is called a “temperament” and there have been lots of shifts in which one is dominant over the centuries. Some, like the “Pythagorean Temperament,” sound great for 5 or 6 of the notes but can really sound like crap for the rarely used accidentals in the key of the song. If you’re only going to use those 5 or 6, that’s the temperament for you. However, given how most instruments these days can play in any key (one key’s accidental is another key’s super important 5th) and given how composers often make use of key changes, the “Equal Temperament” is basically the dominant player these days. Have an electronic tuner in your pocket right now? It uses the equal temperament. “Equal” sounds great, but what it really means is that only the As are perfect and the problem described above is spread around equally to all the other notes. That’s right, if you use a digital tuner, only your As are perfect! Don’t believe me? Well, the E on a digital tuner is set to 659.255 Hz. That’s not the “perfect” 440×1.5=660 you were expecting, is it? The other temperaments spread that pain in different ways, some concentrating it on a single interval that people have called the “wolf fifth” because it sounds so horrible if you use it.
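To put a number on how much pain the equal-tempered fifth carries, here’s a quick Python check of the E above A 440 (it comes out about 2 cents flat of the “perfect” 3:2):

import math

A = 440.0
perfect_fifth = A * 1.5           # the 3:2 "just" fifth: 660 Hz
equal_fifth = A * 2 ** (7 / 12)   # seven equal-tempered half steps up: ~659.255 Hz

print(equal_fifth)
print(1200 * math.log2(equal_fifth / perfect_fifth))  # ~ -1.96 cents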

Ok, long tangent done, back to what “sounds bad” when it comes to pitch problems. There are a few ways to describe this. One is the beats described above. If you’re tuning with a buddy who’s playing the same instrument, you can hear that you’re close to being in tune when your combined sound throbs (loud, soft, loud, etc.). The rate at which the beats happen tells you the difference in your frequencies, so if you can make that rate zero, you’re in tune. That works best when you’re just a few Hertz off from each other; otherwise the beating is so fast you can’t really hear it.
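If you’ve never heard beats, they’re easy to fake: add two sine waves a couple of Hertz apart. A quick numpy sketch (write the array out with something like scipy.io.wavfile.write if you want to actually listen):

import numpy as np

rate = 44100                   # samples per second
t = np.arange(0, 4, 1 / rate)  # four seconds of audio
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 442 * t)

# the loud/soft envelope pulses at the difference frequency
print("beat frequency:", 442 - 440, "Hz")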

Another way is to listen to one instrument and then another (so they’re not playing together) and determine who is sharper (or flatter) than the other. Humans can do this, but only down to a frequency separation of about 8 cents (remember those from above?). That’s what’s known as a Just Noticeable Difference. In other words, if two instruments are only off by 4 cents, say, you can’t tell them apart if they play one at a time, but you can hear the beats if they play together.

Simulating Doppler Shifted Brass

I wanted to be able to not just calculate how much the Doppler shift affects Drum Corps but also try to simulate the sounds a little. My first try was to see how to make a midi sound file be off by a little. Unfortunately Mathematica’s “SoundNote” function that can access midi samples only lets you play pitches on the equal temperament (but see below to see how I got around that). So my second try was to simulate a single instrument (a tuba, since that’s what my kid plays) with just a small collection of pure tones. I used the amplitudes and frequencies from this page to produce this sound of a single tuba:

Simulated tuba with just a small number of pure tones

The nice thing about just a small number of pure tones is that it’s easy, then, to apply the relevant Doppler shift. It definitely sounds like a low brass instrument, but I was hoping for better.
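In Python the same idea looks something like this (the partial amplitudes below are placeholders I made up, not the ones from the page I actually used; the point is just that with a handful of pure tones the Doppler shift is a single factor multiplying every frequency):

import numpy as np

rate = 44100
t = np.arange(0, 2, 1 / rate)

fundamental = 58.27                      # a low B flat, roughly where a contra sits
partials = [1, 2, 3, 4, 5, 6]            # harmonic numbers
amps = [1.0, 0.6, 0.4, 0.25, 0.15, 0.1]  # made-up relative amplitudes, just for illustration

def fake_tuba(doppler_ratio=1.0):
    """Sum of a few pure tones; the Doppler shift scales every partial by the same factor."""
    return sum(a * np.sin(2 * np.pi * n * fundamental * doppler_ratio * t)
               for n, a in zip(partials, amps))

still = fake_tuba()             # standing still
approaching = fake_tuba(1.006)  # roughly the shift for a player jazz running straight at you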

Luckily there’s a somewhat new function in Mathematica that gives me access to the raw time data for the midi samples. Using that, I can simulate the Doppler effect by resampling that time series and then playing it back at the original sample rate. I do that by interpolating the original data so that I can make an estimate of the pressure wave’s value in between the data points that were actually collected. Here’s what an unshifted low B flat from a tuba sounds like from the midi collection:

I guess it sounds a little better. Ok, so now I have all the tools to make some simulations.
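The resampling trick is easy to describe in any language: interpolate the recorded waveform, read it back at a stretched (or compressed) set of times, and play the result at the original sample rate. Here’s a Python sketch of that idea (np.interp standing in for the interpolation I did in Mathematica; in the real simulations the ratio changes as the player moves, but the constant-ratio version shows the mechanism):

import numpy as np

def doppler_resample(samples, ratio):
    """Pitch-shift a recorded waveform by `ratio` (>1 means sharper) via resampling.

    Stepping through the interpolated waveform `ratio` times faster and playing the
    result at the original sample rate raises every frequency by that same factor.
    """
    original_times = np.arange(len(samples))
    new_times = np.arange(0, len(samples), ratio)  # step through the clip faster (or slower)
    return np.interp(new_times, original_times, samples)

# e.g. shift a (here random, stand-in) clip up by 10.3 cents, the on-axis shift worked out below
shifted = doppler_resample(np.random.randn(44100), 2 ** (10.3 / 1200))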

Drum Corps simulations

Here’s the situation I’m going to simulate:

Phantom Regiment from the year before they won the championship.

Instead of having them play a series of notes, I’ll just simulate them playing a simple B-flat major chord with the contras on the low B-flat, the baritones on the D, the mellophones (think marching French horns) on the F, and the trumpets on the high B-flat. I’m only using 12 players (3 each of the various horns, all shuffled together and spanning 50 yards). Here’s what the simulation sounds like when they’re standing still 10 meters away:

hornline playing B-flat chord 10 meters away standing still

Here’s them 10 meters away but jazz running towards us (they cover 5 yards every 6 steps at 160 beats per minute so 6 to 5 160).

Hornline 10 meters away coming towards us at 6 to 5 160

Here’s the same but 1 meter away (there’s a big sound difference here because the ones in the middle dominate the sound).

Same hornline only 1 meter away

And here’s the prior three all strung together one after another:

Previous 3 all strung together

Finally, in case none of those sound bad to you, here’s the 10 meters away version with them running at three times the speed.

10 meters away but going 3x as fast just to really show the distortion

To get a sense of how many cents sharp the players are in these simulations, here is a plot of the cents for everyone, first at 10 meters:

Cents for the hornline players when they’re 10 meters away

and then 1 m away:

cents for the hornline when they’re 1 meter away

What’s interesting about these two is that if there had been a player in the middle, their cents would have been the same for both, namely 10.3. But the biggest change as the horn line gets closer is that the players towards the ends of the line have very small shifts.
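If you want to check those numbers, here’s a small Python sketch of the geometry I’m describing (12 players spread evenly across 50 yards, all jazz running straight at a listener directly in front of the center of the line); a player right on the listener’s axis at “6 to 5 at 160” lands right around the 10.3 cents quoted above:

import numpy as np

c = 343.0                          # speed of sound (m/s)
v = (5 * 0.9144) / 6 * (160 / 60)  # "6 to 5 at 160": 5 yards per 6 steps at 160 steps/min -> ~2.03 m/s

players_x = np.linspace(-25, 25, 12) * 0.9144  # transverse positions across 50 yards, in meters

def cents_sharp(listener_distance):
    """Cents sharp each player sounds to a listener straight ahead of the line's center."""
    L = listener_distance          # longitudinal distance, the same for everyone
    x = players_x                  # transverse offsets differ player to player
    ratio = 1 / (1 - (v / c) * L / np.sqrt(L**2 + x**2))
    return 1200 * np.log2(ratio)

print(np.round(cents_sharp(10.0), 1))   # gentle fall-off toward the ends of the line
print(np.round(cents_sharp(1.0), 1))    # only the players near the middle shift much
print(1200 * np.log2(1 / (1 - v / c)))  # an on-axis player: ~10.3 cents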

Drum Corps Ramifications

So, can you hear the difference? Certainly the corps think that somewhere around 8 cents is worth worrying about, as my kid has been asked to try to reliably make that sort of change. But marching around brings that level of pitch shift into play via the Doppler effect, so if you don’t correct for it, all the other careful tuning you’re doing at that level gets washed out.

Of course, a lot of the big hits you think of in drum corps happen when they’re standing still. Certainly the YouTube traffic seems to head toward clips like this one of them just warming up standing still:

(I love that video. I get goose bumps every time I listen to it).

So, if on the order of 8 cents is meaningful to you, it might be worth it to look at your drill (that’s their marching diagram/orders) and determine if there are times when making a light adjustment might make sense. I’m happy to help (but likely only for Phantom 🙂

What do you think?

  • I really liked this, especially the part where you . . . My question is . . .
  • I think this is dumb, especially the part where you . . . My snarky comment is . . .
  • I know for a fact that some judges move around while judging and so you’re a liar when you say that the target audience doesn’t move.
  • Why do you say any three points can share a plane? I’m sure there’s some crazy non-Euclidean geometry where that’s not true but I’m too furious with you right now to prove it.
  • Flutes don’t produce a single pure tone, you anti-woodwind-ite.
  • My Just Noticeable Difference for quality blog posts is telling me that this sucks.
  • I thought you said your kid got into Phantom Regiment. Why are you showing a Carolina Crown video?
  • Why didn’t you talk about the cents difference it takes to go from equal tuning to perfect 5:4 and 3:2 tuning which is surely what these corps do?
  • Once again you’ve forsaken Python to do all this in Mathematica. Loser.
Posted in fun, general physics, mathematica, parenting, physics | 4 Comments