Teaching driving is depressing

My son is learning to drive. I’m past the “oh crap” stage and the “but just . . .!” stage and squarely into the “it’s going . . . ok” stage. He takes his test next month and I’m pretty confident he’ll do fine. Here’s the thing, though: I suck at teaching someone to be a good driver. No, this isn’t a post about how you can’t teach your own kids anything, though that’s often true with mine🙂 It’s about realizing just how I got good at driving and how hard it is to teach that in just a few months.

My partner and I talk a lot about how our son is learning. We think he does a great job with some things and just an ok job at others. We’ve been with him in stressful situations and we’ve all made it through (even the car!). But as we reflect on what we would have done in those situations we start to realize just how much better we are as drivers than he is, or that he will be even after another few months of intensive training. We’ve had (cough cough) 30 years of practice, and now we’d say we’re pretty good at it.

What I’m depressed about is the realization that years of experience (or 10,000 hours, if you prefer) can’t be taught. I’m pretty sure that my son will become a great driver, but I don’t think there’s anything I could do to help that along very fast. I’m depressed because my profession is teaching physics, and all I ever get is four years with a student. For most of my students all I ever get is one semester. Trying to teach “physics maturity” (to borrow and slightly change a phrase from my mathematics buddies) in a semester is really hard. Maybe impossible. If I knew my students were going to go off and continue to think about physics and practice physics and model things like crazy throughout their life, I suppose I could take solace that they’d eventually become the experts I want them to be. But the students that do that are in a minority so small that it’s probably not worth it to count them.

I’m realizing that all I can do is set the table for them. I can try to make a course experience that gives them some tools and gives them glimpses of others. Just as I can’t make my son a great driver in just a few months, I can’t make an expert in physics in one course.

So I’m depressed, but super excited to be heading off to the AAPT conference today so I can get the usual pick-me-up that I get from all my friends there. Who knows, maybe when I get back I’ll have a post with a title like “Teaching physics is the greatest thing you can do” or something like that.

Your thoughts? Here are some starters for you:

  • Thanks for this. I’ve been thinking along similar lines . . .
  • This is dumb. Physics and driving teaching are nothing alike. What you should compare is . . .
  • You say “depressed” but you don’t really mean that. It’s a disservice to those who really do suffer from depression (sorry about that).
  • I taught my child to drive and in the process learned a ton from them. Things have really changed!
  • All that matters is that they know where the brake pedal is. Everything else is fluff.
  • Wait, you’re going to AAPT? Want to catch a meal?
Posted in teaching, Uncategorized | 5 Comments

Help us crowd-source our drums!

My students have been working hard this summer on a project I’ve talked about before. Here’s the gist:

  • Normal drums aren’t melodic. They have resonant frequencies but they aren’t in a pattern that we think sounds good. That’s why they’re used for percussion.
  • If you explore different shapes you can move the resonant frequencies quite a bit. We’re interested in finding cool shapes that might sound melodic.
  • If we can find a shape using simulation, we can then print it using a 3D printer

That sounds cool and all, but the details are proving to be tough. I’d like to brag a little about what we’ve been up to in this post (mostly so there’s a good record of it somewhere), but if you’re wondering about the title of this post, just go here where there’s a little explanation of what we need from you. Read on for more details.

Calculating resonant frequencies

This is actually the easy part. If you know the shape of the drum head you’re interested in (and can describe it mathematically — see below for that hassle) you just need a single command in Mathematica:

{frequencies, functions} = NDEigensystem[{-Laplacian[f[x, y], {x, y}],
    DirichletCondition[f[x, y] == 0, True]}, f[x, y], {x, y} ∈ region, 10]

where “region” is your mathematical description of the shape of the drum head. This command uses a Finite Element approach and returns the 10 lowest eigenvalues (along with the corresponding eigenfunctions). Note that you have to take the square root of the eigenvalues you get from this command to get the audio frequencies.
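For a rectangular head the Dirichlet eigenvalues of the Laplacian are known in closed form, which makes a nice sanity check on NDEigensystem and on the square-root step. Here's a quick Python sketch of that check (not part of our Mathematica workflow; the dimensions are arbitrary):

```python
import math

def rectangle_spectrum(a, b, n_modes=10, max_index=20):
    """Lowest Dirichlet eigenvalues of -Laplacian on an a-by-b rectangle.

    The eigenvalues are pi^2 (m^2/a^2 + n^2/b^2) for integers m, n >= 1,
    and the audible frequency of each mode scales as sqrt(eigenvalue).
    """
    eigs = sorted(
        math.pi**2 * (m**2 / a**2 + n**2 / b**2)
        for m in range(1, max_index + 1)
        for n in range(1, max_index + 1)
    )[:n_modes]
    return eigs, [math.sqrt(e) for e in eigs]

eigs, freqs = rectangle_spectrum(1.0, 2.0)
```

Note that the frequency ratios that come out (for this rectangle, the second mode sits at sqrt(2/1.25) ≈ 1.26 times the fundamental) are nowhere near the whole-number ratios of a melodic instrument, which is the whole point of hunting for better shapes.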

Here’s a sample of listening to various frequencies on a slowly changing shape:

Describing shapes

Simple shapes are easy: a circle? Disk[], a rectangle? ImplicitRegion[-1<=x<=1 && -2<=y<=2, {x,y}]. But what about crazy shapes? And what about shapes that Mathematica can programmatically shift around while it hunts for cool shapes that produce cool spectra?

What we’ve decided to do is to use control points around the edge that Mathematica can make slight adjustments to. When it does, it redraws a smooth, closed curve that includes all the points and it then uses a cool command that turns that border into a region:

region = BoundaryMeshRegion[controlpoints, Line[{1, 2, 3, 4, 5, 6, 1}]]

The problem is that you have to make sure that the control points are in the right order around the border (clockwise, say). Luckily, the traveling salesperson problem comes to the rescue here. If you want to find the shortest path visiting all the points in a plane (and returning to the first one), that path will not cross itself and hence will be a proper region border. So:

fst = FindShortestTour[points];

comes to the rescue. So Mathematica does this:

  1. takes some random points in the plane
  2. finds the shortest tour around them
  3. uses BoundaryMeshRegion to turn that border into a region
  4. calculates the frequencies for that shape
  5. decides what small adjustments to make (see below)
  6. makes those adjustments to the locations of the control points (first described in (1) above)
  7. repeats steps 2–6 until a cool shape is found
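For a handful of control points, FindShortestTour can be checked against (or replaced by) an exact brute-force search. Here's a Python sketch of the idea; the function names are mine:

```python
import itertools
import math

def shortest_tour(points):
    """Brute-force traveling-salesperson tour over a few points.

    Returns the cyclic ordering (as indices into `points`) with the
    minimal closed-path length. The optimal tour of points in the
    plane never crosses itself, so it traces a valid region boundary.
    """
    def length(order):
        return sum(
            math.dist(points[order[k]], points[order[(k + 1) % len(order)]])
            for k in range(len(order))
        )

    # Fix point 0 as the start and only permute the rest.
    best = min(
        ((0,) + perm for perm in itertools.permutations(range(1, len(points)))),
        key=length,
    )
    return list(best), length(best)

# A square whose points are listed in a self-crossing order:
pts = [(0, 0), (1, 1), (1, 0), (0, 1)]
order, tour_len = shortest_tour(pts)
```

For the square above, the tour comes back as the corners in perimeter order, so the resulting border doesn't intersect itself.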

Decide on adjustments

Ok, so let’s say you have six control points. Each one is an x and y value so you have a 12-dimensional optimization problem. What could we use? We’ve decided to use Mathematica’s implementation of an evolutionary algorithm (or genetic algorithm). Really it’s the same thing I was using when trying to see if Mathematica could learn to race around corners. Evolutionary approaches work well where there’s a humongous parameter space and you don’t really know any other way to explore it other than brute force.

The big problem (yes, I’m getting to the title of this post, hold your horses) is that a set of frequencies from a drum head (the result of step 4 above) needs to be converted to a single number that can be used to rank various drum heads in the evolutionary algorithm.

Single number

Ok, so we realized that we needed to be able to look at a spectrum from a drum head and rate it on the scale of “is it melodic?” We thought of some interesting approaches. Mostly they centered around measuring how close the frequency spectrum is to an evenly spaced one (which is what a stringed instrument gives you). We ran into lots of potential problems, though, not least of which was that orchestra chimes have a “missing fundamental” and still sound good.

We also realized that maybe we could handle mostly evenly spaced frequencies if we could determine where to thump the drum head to kill the offending non-evenly-spaced ones.

Thump predictions

Ok, so now we had to go back to Mathematica to determine where on a particular drum head you could thump it to control the relative amplitudes of the various frequencies (think about how a stringed instrument sounds very different depending on where you hit it).

Here’s an example of how the frequencies from the shape of Minnesota change their relative amplitudes if you thump in the center of every county in Minnesota (note that FindShortestTour was used to do that):

Luckily the NDEigensystem gives us the resonant shapes for every resonant frequency so finding the relative amplitude for a given thump location (and shape) really just amounts to doing this integral:

\int_\text{region}\text{thump}(x,y)\,f_i(x,y)\,dxdy

where f_i(x,y) is just the ith resonant shape and thump(x,y) is the function that describes the thump shape (and location).
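The one-dimensional analogue (a string with modes sin(nπx) on [0, 1]) makes the thump integral easy to play with. This Python sketch uses a crude Riemann sum and a Gaussian thump whose width I picked arbitrarily:

```python
import math

def mode_amplitude(n, x0, width=0.02, samples=4000):
    """Overlap of the string mode sin(n*pi*x) on [0, 1] with a narrow
    Gaussian 'thump' centered at x0 -- a 1D analogue of the drum
    integral, done as a midpoint Riemann sum."""
    dx = 1.0 / samples
    total = 0.0
    for k in range(samples):
        x = (k + 0.5) * dx
        thump = math.exp(-((x - x0) / width) ** 2)
        total += thump * math.sin(n * math.pi * x) * dx
    return total

# Thumping the middle of the string excites odd modes only:
amps = [mode_amplitude(n, 0.5) for n in range(1, 5)]
```

Thumping the middle kills every even mode, which is exactly the "shut off the bad frequencies" trick we're hoping for on the drum heads.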

It’s taken us a while to find a good way to do this integral fast, but we’re getting there (right now we’re at one second per frequency per shape).

So now we can look for a good candidate of frequencies and then hope there’s a thump location that’ll shut off the bad ones (fingers crossed!).

Back to single number (Neural networks and you!)

So then we hit on the way we could pull all of this together (we hope). We’ve decided to let the crowd (you!) help us rate a collection of frequencies and relative amplitudes on a scale of 0 – 5 where 0 is like white noise and 5 is a pure tone. We figured that since we’re making drums for people we ought to let people determine the single number that our evolutionary algorithm needs.

One of the researchers in the math department this summer is working on an artificial neural network to recognize handwriting and my students realized that approach could work here. All we need is to train the network on what are good, bad, and medium sounding collections of frequencies and relative amplitudes.

Luckily Mathematica has recently built in some really powerful functions that implement the major algorithms in neural network theory. The one we’re planning on using is “Predict” which just needs a whole bunch of these:

{{216, 456, 786, 890, 1012}, {0.5, 0.3, 0.6, 0.7, 1}}->2

where the first list of numbers is the random frequencies and the second is the relative amplitudes. It trains on whatever you give it and can then be used on future, unseen examples.
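We're using Predict for the real thing, but the shape of the data works with any learner. As a hedged illustration, here's a toy Python stand-in that scores an unseen spectrum by copying the score of the nearest training example (the concatenate-everything feature choice is my simplification, not what Predict does internally):

```python
import math

def predict_score(training, freqs, amps):
    """Toy stand-in for a trained regressor: score an unseen
    (frequencies, amplitudes) pair by copying the score of the
    nearest training example (1-nearest-neighbor)."""
    features = freqs + amps
    def distance(example):
        (f, a), _score = example
        return math.dist(f + a, features)
    return min(training, key=distance)[1]

# Training pairs in the same shape as the Mathematica rules above:
training = [
    (([216, 456, 786, 890, 1012], [0.5, 0.3, 0.6, 0.7, 1.0]), 2),
    (([200, 400, 600, 800, 1000], [1.0, 0.8, 0.6, 0.4, 0.2]), 5),
    (([213, 397, 761, 881, 997], [0.9, 0.9, 0.9, 0.9, 0.9]), 0),
]
score = predict_score(training, [201, 399, 601, 799, 1001],
                      [1.0, 0.8, 0.6, 0.4, 0.2])
```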

So, we need your help! Please go to our new site and score a few random sounds on our 5 point scale (decimals are welcome). It just takes 1 second per sound and we’d love to just get a ton to train the neural network. Then our workflow will look like this:

  1. set a generation of random control points
  2. find the region for each of them (using FindShortestTour)
  3. find the frequencies for all of them
  4. check them against the neural network to determine goodness
  5. make babies with the better ones (evolution)
  6. monkey with the thumping (yeah, I know, this part isn’t as clear)

HTML 5 sounds

We started developing the training set using Mathematica to generate sounds. This is pretty easy (just use the Play command) but it was tedious and we weren’t generating enough. This notion of crowdsourcing came from my wonderful students, so I decided to give it a try over this holiday weekend.

I knew making a database-driven website wouldn’t be a problem (I rail against Blackboard so much because I finally just wrote my own LMS), but I didn’t know how to generate the sounds. So I decided to dig into the HTML5 audio standards. It turns out that just a few lines of JavaScript will generate a sound with a controllable frequency and amplitude:

oscillator$key = context.createOscillator();
gainNode$key = context.createGain();
oscillator$key.frequency.value = $value;
currentTime = context.currentTime;
oscillator$key.connect(gainNode$key);      // connect this sound source to its gain node
gainNode$key.connect(context.destination); // connect the gain node to the output
gainNode$key.gain.value = $amps[$key];
oscillator$key.start(currentTime);
oscillator$key.stop(currentTime + 1);

where $key is set up as the loop variable (goes from 1 to 5). Feel free to take a look at the html source of our page to see how it all goes together.

So thanks for any help you can give. We really hope we get enough data so that the training is robust.

Thoughts? Here are some starters for you:

  • I love this! Here’s something you could do in addition . . .
  • This is dumb. All of this has been done before and you’re just repeating what someone else did (insert reference here).
  • Why do you do all of this in Mathematica?
  • Are you going to sell these drums?
  • How are you mounting them to keep the tension constant in the crazy shapes?
  • Why don’t you just build tympani?
  • What are you talking about with the “missing fundamental” for orchestra chimes?
  • The web page doesn’t work for me, what’s the deal?
  • Can I have access to the training set?
  • Wait, you wrote your own LMS? That sounds cool, tell us about it!
  • I’m still confused how you’re going to do the thumping in the evolutionary program.
Posted in Uncategorized, mathematica, physics, research | Leave a comment

Relativistic Lagrangians

I’m a part of a cool group of folks interested in infusing computation into the undergraduate physics curriculum. One of the projects is called “relativistic dynamics” and it really got me thinking. I thought I’d get my thoughts down here.

Lagrangian

I’ve used a Lagrangian approach a ton in my work with students and my posts here. It’s a great way to model the dynamics of a system because you just have to parametrize the kinetic and potential energy of the system and you’re off. No vectors, no free body diagrams, just fun🙂

Here’s the idea in a nutshell:

Hold a ball in your hand. In 2 seconds it needs to be back in your hand. What should you do with the ball during those two seconds to minimize the time integral of the kinetic energy minus the potential energy during the journey?

It’s a fun exercise to do with students. You’re asking them to minimize this integral over two seconds:

\int_0^2 \text{KE}(\vec{r}, t)-\text{PE}(\vec{r},t)\,dt

When I do this their first guess is to leave the ball in your hand. They like to define the gravitational potential energy there to be zero, and they know the kinetic energy is zero if it doesn’t move, so they’ve found an easy way to get a total of zero for the integral. So I challenge them to find a path whose integral would be negative! It’s a pretty fun exercise, especially if you actually calculate the integrals for their crazy ideas.

The point is that the winner is to throw the ball up so that its trajectory, responding simply to gravity, takes 2 seconds (i.e., throw it up 1.225 meters). The kinetic energy is positive during the whole journey (except for an instant at the top, of course), but the potential energy is positive during the whole journey too, and its integral is bigger, so the total comes out negative.
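If you want to check the ball game numerically, a plain Riemann sum does it. This Python sketch (unit mass, g = 9.8 m/s², Earth values) compares "leave it in your hand" with the 2-second free-fall throw:

```python
def action(position, velocity, t_total=2.0, g=9.8, steps=20000):
    """Midpoint Riemann sum of the KE - PE integral for unit mass."""
    dt = t_total / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt
        total += (0.5 * velocity(t) ** 2 - g * position(t)) * dt
    return total

g = 9.8
v0 = g * 1.0  # launch speed so the ball is back in hand at t = 2 s

hold = action(lambda t: 0.0, lambda t: 0.0)        # ball stays put
throw = action(lambda t: v0 * t - 0.5 * g * t**2,  # free-fall path
               lambda t: v0 - g * t)
```

The thrown path comes out around −32 versus exactly zero for holding still, so the throw wins the minimization.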

Calculus of variations teaches us that if you want to minimize an integral like this:

\int_\text{start}^\text{finish}f(x, \dot{x}, t)\,dt

(where \dot{x} is shorthand for the x-velocity) you really just need to integrate this differential equation over the same time interval:

\frac{\partial f}{\partial x}-\frac{d}{dt}\frac{\partial f}{\partial \dot{x}}=0

What’s cool is that if the function is KE-PE the equation above becomes Newton’s second law! That’s why this works. You use scalar energy expressions and you get the force equation for every component of motion! Now there are some other cool things like not needing to worry about constraint forces but I won’t worry about that in this post.

Relativity

Ok, so what happens when you consider relativistic speeds (i.e., close to the speed of light)? Well, the first thing I did (which, spoiler, didn’t work) was to rack my brain for an expression for the kinetic energy and plug away. When teaching relativity you get to a point where you’re making the argument with your students that KE isn’t just 1/2 m v^2 anymore but is really mc^2(\gamma-1), where gamma is given by:

\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}

If you take the limit of that expression for small v’s you get the usual expected result, and that’s certainly what we do right away with our students to make them feel better.

Ok, so I plugged it in and got a relativistic version of Newton’s 2nd law:

\frac{3m\gamma^5 \dot{x}^2}{c^2}\ddot{x}+m\gamma^3\ddot{x}=-\frac{\partial U}{\partial x}

Note how the second term on the left side looks a little like “ma” while the right hand side is just the force from a conservative potential energy (U). The extra term on the left hand side is the weird stuff.

Without really thinking about whether that was the right equation, I modeled a constant-force system and got this for the velocity:

[Figure: wrongrel, speed vs. time under a constant force]

(I set the speed of light to 1). You can see that the speed is forced to obey the cosmic speed limit.

But here’s the problem. The equation above is wrong. That is not the correct relativistic Newton’s 2nd law equation.

So what happened? I plugged in the correct relativistic kinetic energy and the Lagrangian trick (minimizing KE − PE) gave a trajectory that doesn’t match what actually happens! So something’s wrong. Here are a few possibilities (one is right; see if you can guess before reading the next paragraph):

  • I’m using the wrong expression for kinetic energy
  • The Lagrangian trick has some non-relativistic bias in it
  • I’m minimizing the wrong function

It turns out it’s the last one. It took me a while of digging around, but this wikipedia article set me straight. The gist of what’s talked about there is this:

  • Andy’s hard work above just doesn’t work
  • But we know the right relativistic expression for momentum, \gamma m v, which, interestingly enough, is a crazy thing that’s conserved in all frames of reference during collisions (so we tell our students that since it’s conserved we should call it momentum).
  • Let’s differentiate that momentum to get what the force should be and then search for a functional (that’s what f above is) that works out in the calculus of variations

Yeah, weird, I know. It’s like “hey, I know what the answer in the back of the book is, so I’m going to futz with my early equations until they give me the right answer.” So what is the right functional to use? This:

-\frac{m c^2}{\gamma}

Yep, it’s negative. Yep, it’s not an expression you’ve ever seen before if you’ve studied special relativity. But, guess what, it works! When you plug it in and do the calculus of variations trick you get the right dynamics. Surprise, surprise, given that it was built to do just that.

Here’s the same graph as above but now comparing that prediction with the right dynamics (in red):

[Figure: correctrel, speed vs. time with the correct dynamics in red]

It also asymptotes to the cosmic speed limit, just at a different rate.
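Both curves are easy to regenerate numerically. The correct dynamics, d(γmv)/dt = F, even has a closed form starting from rest, v(t) = (Ft/m)/√(1 + (Ft/mc)²), while the "wrong" curve can be Euler-integrated from the equation above. A Python sketch with F = m = c = 1:

```python
import math

def v_correct(t, F=1.0, m=1.0, c=1.0):
    """Closed-form speed for d(gamma m v)/dt = F, starting from rest."""
    p = F * t / (m * c)
    return c * p / math.sqrt(1.0 + p * p)

def v_wrong(t_end, F=1.0, m=1.0, c=1.0, steps=200000):
    """Euler-integrate the (incorrect) equation that comes from putting
    mc^2(gamma - 1) into the Lagrangian:
    m gamma^3 (1 + 3 gamma^2 v^2 / c^2) dv/dt = F."""
    dt = t_end / steps
    v = 0.0
    for _ in range(steps):
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        dvdt = F / (m * gamma**3 * (1.0 + 3.0 * gamma**2 * (v / c) ** 2))
        v += dvdt * dt
    return v

# Both respect the speed limit but approach it at different rates:
vc, vw = v_correct(10.0), v_wrong(10.0)
```

Both stay below the speed limit, but at t = 10 the wrong equation is still loafing along near 0.89c while the correct one is past 0.99c.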

So what’s being minimized?

That’s the question I was really wondering about. Luckily google came to the rescue with this great wikibook article that it found for me. It points out that the kinetic energy portion of the functional you use to make the relativistic dynamics work is really just proportional to the invariant space-time interval:

ds=\sqrt{c^2dt^2-dx^2}

This is an expression for the “distance” between two distinct events in space-time that is the same for all inertial observers. It’s really cool given all the weird time dilation and length contraction that can go on in the various inertial frames.

So basically the trajectories that actual things follow are the ones that extremize the total of those space-time “jumps” (the minus sign in the functional means that minimizing the action is the same as maximizing the accumulated interval). That’s super cool.

Your thoughts? Here are a few starters for you:

  • I like how you talk about teaching the Lagrangian. What I would add is . . .
  • I hate how you talk about teaching the Lagrangian. What I would rip, burn, and bury is . . .
  • Why would you even think that the Lagrangian formalism, which clearly treats space and time differently, could so easily be co-opted into a relativistic treatment?
  • Why is one of your equations a gif instead of WordPress’ built-in \LaTeX?
  • I can tell you used Mathematica’s TeXForm command. You are really lazy.
  • You did a simple constant force. What would something connected to a spring do?
    • [Figure: shmrel, non-relativistic (red) and relativistic (blue) mass on a spring]

  • What do you mean when you say that “it’s conserved so let’s just call it momentum”?
  • Why didn’t you put that last question mark inside the quotation marks?
  • What planet were you on when you figured out the 1.225 meter throw?

Posted in physics, teaching, Uncategorized | Leave a comment

F is for midterm

We’re a little past midterm and I wanted to give an update on my optics course, where I’m trying an SBG portfolio approach.

A quick refresher:

  • Every day is a different standard
    • “I can explain what plane waves are”
  • Each day I assign 3 rich problems (some from the book, some I make up)
  • Each day has a quiz on a random problem from the last 2 days
  • For the oral exams students bring in their portfolio of problems, I randomly select one and ask follow up questions on it.

Midterm grades weren’t great. The most common grade was an F. I feel like crap about that. I just wanted to write about what’s been going on to help me reflect.

First the good news: I like the structure. The three problems every day help me really flesh out what I think is important and provide focus for what we do in class. I like a lot of the book problems but it’s fun to make up my own at times too (I really did use the one about 3D movie glasses that I talked about in the other post). Students come to the oral exams with their portfolios and some have some really great work in them.

So why so many F’s? Those of you who’ve dabbled with standards-based grading know where they come from: “I can always reassess later.” While I thought knowing that a quiz was upcoming would motivate the students to take an honest stab at the problems between each class, quite often it seems that few have spent much time on them before the quiz. They know they can bomb the quiz and still reassess later. It makes for some pretty depressing quiz scores. Combine that with little pressure to reassess early and you get a bunch of F’s for midterm.

The first set of oral exams (each student does three in a week) was very depressing as well. The most common grade was a zero, which they got if they didn’t have anything in their portfolio for the random problem selected. I made it clear they’d get an immediate zero but that we’d spend the time making sure they knew how to get started on the problem.

I just finished the second week of oral exams (separated from the first by four weeks) and saw far fewer zeros. I would ask what the chances of a zero were and very few said “zero chance, I’ve got something for every one.” With one student I joked that he was treating the oral exams like a casino. One student only had one he hadn’t done. That’s the number that came up😦

I talked with many of the students who got F’s and asked if they had a plan. Most were confident they’d pass the course, but they realized they needed to start turning in reassessments much more often. While that’s great news, I also hope they start looking at the problems earlier so that their quiz scores are good enough to keep them from having to reassess every standard. I asked a lot of them if they were mad at me because of the F’s and no one admitted to that. Most said it was an honest assessment of their turned-in work, though from several I got the sense that they felt it was a far cry from their internal understanding of the material.

I know from my colleagues’ experience that most of these students will work hard if you give them a hard deadline. My only deadline is the two-week rule that says you have to get in at least a piece of crap for every standard within two weeks of it being activated (talked about in class) or else it’s a zero forever. Most standards have a quiz associated that takes care of that, but the randomness means there’s the occasional standard that doesn’t get quizzed. That’s still a pretty weak deadline compared to my colleagues’ teaching approaches. My dreamer response is that this is a lesson they should learn, but I don’t feel I’m being very successful attaining that goal.

Labs are another place where I’ve realized I have to provide a different style of support. Most labs involve up to an hour of planning, roughly an hour of data collection, and an hour devoted to analysis. What often happens in practice is an hour of planning, an hour of data collection, and then everyone leaves. They know that they’ll have two weeks to get something in, so why would they work on the analysis then? I think a few of the students have come to realize that I can be very useful to them during the analysis stage, but if they don’t stick around they’ll have to track me down later. One big mistake I made was to trust them to do the heavy lifting involved in getting up the Mathematica syntax learning curve for the types of analysis I want (Monte Carlo-based error propagation, and curve fitting that’s responsive to variable error bars and produces error estimates on all the fit parameters). Last week when I turned in the midterm grades I sat down and made much better support documents in Mathematica that will help them focus on the physics that needs to be studied in the lab. That’s already paid off quite nicely for a couple of students.

Well, that’s where I sit. I’m a little nervous that I’ve lost the students, though I was heartened by some good conversations with each of them this week. I think the final grades will be much better than the midterms but I’m nervous that their memory of the class will be dominated by the last few weeks of the semester when a bunch of them will be making screencasts 24 hours a day. We’ll see.

Your thoughts? Here are some starters for you:

  • I’m in this class and I gave up weeks ago. What would have really helped was . . .
  • I’m in this class and I see a clear path to success. Here’s how I’m going to do it . . .
  • Why do you put an apostrophe in “F’s”? It’s not possessive is it?
  • Why don’t you put more teeth into your quizzes? Here’s how I would do it . . .
  • Can’t you see that SBG just isn’t the way to go with this class? I can’t believe it’s taking you so long to figure that out.
  • If the students end up hating the class but learn the lesson about keeping up on their work that’s a win for me.
  • If you think that students hating a class could possibly be spun as a positive you’re a worse teacher than I thought you were.
  • Why do you do Monte Carlo-based error propagation? It’s clearly getting them into a casino mentality that now you’re wasting our time complaining about.
Posted in syllabus creation, teaching | 4 Comments

Optimal race path

I ride my bike to work, so I’m often thinking about the best path to take around corners. I know bike racers and car racers (and bobsledders) are often told to head into a corner wide, then cut the apex, and then exit wide again. Basically the gist is that you want to make your actual path have the largest turn radius possible so that you don’t slip out. The question I was thinking about recently was whether there’s some compromise, since the largest-radius path (which allows the largest speed without slipping out) is typically also the longest path, which eats into some of that speed advantage. I also realized that in car racing, and to a limited degree bike racing, the speed is not held constant throughout the path, so I wondered how you could find the optimal path and the optimal speed adjustments throughout. That’s what this post is about.

First a quick story about go-karts. I was “racing” in one (against my friends) and I was trying to follow the wide/narrow/wide path through all the corners. But I was losing! I finally realized that the wheels had terrific grip and that I could floor the pedal and hug all the curves and never spin out. My friends knew this and by the time I figured it out it was too late.

So what’s the physics involved here? The key is to figure out why wheels start to slip in the sideways direction. They have a particular amount of grip and that force provides the instantaneous centripetal acceleration for the wheel. If you know what the grip force is, along with the instantaneous radius of curvature, you can find the fastest possible speed at that section of the road:

F_\text{grip}=\frac{m v^2}{R}

or

v_\text{max}=\sqrt{\frac{F_\text{grip} R}{m}}

So, if you know the path of the road, you should be able to figure out the maximum possible speed at every location. So how do you do that? Well, first let’s make sure we understand how we’re mathematically describing the path.

What I decided to do was just pick some random points in the plane. Then I interpolate a path that smoothly connects them all. Here’s the Mathematica syntax that does that:

pts = RandomReal[{-1, 1}, {5, 2}];
intx = Interpolation[pts[[All, 1]], Method -> "Spline"];
inty = Interpolation[pts[[All, 2]], Method -> "Spline"];

So now we have two functions, intx and inty, that characterize what the path does. You can plot the path now using:

ParametricPlot[{intx[i], inty[i]}, {i, 1, 5}]

which gives this:

[Figure: racingmainpath, main path considered in this post]

I knew there was likely some cool differential geometry formula for finding the curvature at any point and I found it at this wikipedia page:

R=\frac{\left(x'[i]^2+y'[i]^2\right)^{3/2}}{\left|x'[i]\, y''[i] - y'[i]\, x''[i]\right|}

which I can calculate now that I have the interpolation functions from above. Cool, so now I can find the radius of curvature at every point:

[Figure: racetrackcircs, the instantaneous radius of curvature at every point along the curve]
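That differential-geometry formula is easy to sanity-check with finite differences: a circle of radius 2 should report a radius of curvature of 2 at every parameter value. A quick Python sketch:

```python
import math

def radius_of_curvature(x, y, i, h=1e-4):
    """R = (x'^2 + y'^2)^(3/2) / |x' y'' - y' x''|, with the derivatives
    of the parametric path (x(i), y(i)) taken by central differences."""
    xp = (x(i + h) - x(i - h)) / (2 * h)
    yp = (y(i + h) - y(i - h)) / (2 * h)
    xpp = (x(i + h) - 2 * x(i) + x(i - h)) / h**2
    ypp = (y(i + h) - 2 * y(i) + y(i - h)) / h**2
    return (xp**2 + yp**2) ** 1.5 / abs(xp * ypp - yp * xpp)

# A circle of radius 2: the curvature radius is 2 everywhere.
R = radius_of_curvature(lambda t: 2 * math.cos(t),
                        lambda t: 2 * math.sin(t), 0.7)
```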

So now I can use the equation above for the velocity at every point and figure out a trajectory, and more importantly, a time to traverse the path, which I’d love to minimize eventually.

To be clear, I pick an arbitrary grip force, calculate the radius of curvature and hence the max speed everywhere, and figure out how long it would take to make the journey. I realized that I’d risk the occasional infinite speed on straight portions of the track, so I built in a cap on the speed, which I also picked arbitrarily.

So how do I figure out the time once I know the speeds? Pretty easily, actually: for every segment of the path the small time increment is the distance, \sqrt{dx^2+dy^2}, divided by the speed:

t=\int \frac{\sqrt{x'[i]^2 +y'[i]^2}}{v(i)}\,di

where again i is the parametrization that I used (it just basically counts the original random points) and the speed (v(i)) is calculated as above.
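Here's that time integral as a Python sketch, tried on a circular track where the answer is just circumference over max speed (the grip force and mass are arbitrary numbers here):

```python
import math

def traversal_time(x, y, v_of, i0, i1, steps=20000):
    """Riemann-sum the integral of ds / v along a parametric path
    (x(i), y(i)), with the arc-length rate |r'(i)| taken by central
    differences."""
    di = (i1 - i0) / steps
    h = 1e-5
    total = 0.0
    for k in range(steps):
        i = i0 + (k + 0.5) * di
        xp = (x(i + h) - x(i - h)) / (2 * h)
        yp = (y(i + h) - y(i - h)) / (2 * h)
        total += math.hypot(xp, yp) / v_of(i) * di
    return total

# Circle of radius R: v_max = sqrt(F R / m) everywhere, so the lap
# time should be 2 pi R / v_max.
F_grip, m, R = 2.0, 1.0, 3.0
v_max = math.sqrt(F_grip * R / m)
t = traversal_time(lambda i: R * math.cos(i), lambda i: R * math.sin(i),
                   lambda i: v_max, 0.0, 2 * math.pi)
```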

Ok, cool, so if you give me a path, I’ll tell you the fastest you could traverse it. But that doesn’t yet let me figure out better paths around corners. To do that I need to generate some other paths to test to see if they’re faster. Remember they might not be as tight of turns (and so likely faster at the curves) but they’re then going to be likely longer. The hope is that we can find an optimum.

How do I generate other test paths? Well, for each of the original random points, I perturb the path in a direction perpendicular to the original path (which I’ll start calling the middle of the road). If there’s 5 points, then at each the path will move a little left or right of the center, and I’ll use the spline interpolation again to get a smooth path that connects all those perturbations.

So now it’s a 5 dimensional optimization problem. In other words, what is the best combination of those 5 perturbations that yields a path that allows the car to make the whole journey faster. Luckily Mathematica‘s NMinimize function is totally built for a task like this. Here’s what it found:

[Figure: racetrack2]

The blue stripe is the road. The blue curve is the middle of the road. The red point travels along the blue curve as fast as it can without slipping. The green curve is the result of the optimization process. The green point moves along the green curve as fast as it can without slipping.

Note how in the last curve the red point has to significantly slow down, allowing the green point to win. Cool, huh?

Here’s another example, where I didn’t have the patience to let NMinimize finish (I gave up after 30 minutes). It took so long because I used 10 original points, making it a 10-dimensional optimization problem. Luckily, just by trying some random perturbations I found a significantly better path. Note how it accepts a really tight turn towards the end but still ends up winning:

racetrack

10 dimensional optimization example

As a last note, I should mention that making the animations took me a while to figure out. I knew the speed at every point (note: the speed, not the velocity!) but I needed the position (in 2D) at every point in time. I finally figured out how to do that (obviously). Here’s the command:

NDSolve[{D[intx[i[t]], t]^2 + D[inty[i[t]], t]^2 == bestvnew[i[t]]^2,
i[0] == num}, {i}, {t, 0, tmax}]

where tmax is how long the path takes. Basically I’m solving for how fast I should move from the first point to the last (i as a function of time). Then I can just plot the dots at the right location, {intx[i[t]], inty[i[t]]}. That worked like a charm.
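
The Python analog of that NDSolve trick, if you want one, is to rewrite (dx/dt)^2 + (dy/dt)^2 = v^2 as di/dt = v(i)/\sqrt{x'(i)^2 + y'(i)^2} and hand it to an ODE solver. Here it is on a toy circular path at constant speed (my stand-in, not the actual track):

```python
import numpy as np
from scipy.integrate import solve_ivp

R, v = 5.0, 2.0                  # toy path: circle of radius R, constant speed v

def didt(t, i):
    # for x = R cos(i), y = R sin(i) we have |r'(i)| = R, so di/dt = v / |r'(i)|
    xp, yp = -R * np.sin(i), R * np.cos(i)
    return v / np.hypot(xp, yp)

T = 2 * np.pi * R / v            # one lap takes circumference / speed
sol = solve_ivp(didt, [0, T], [0.0], rtol=1e-10, atol=1e-12)
# sol.y[0, -1] comes out at 2*pi: the parameter has gone once around
```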

Alrighty, that’s been my fun for the last few days. Thoughts? Here are some starters for you:

  • Wow, this is really cool. What I really like is the . . .
  • Wow, this totally blows. What really makes me mad is . . .
  • Can I get a copy of the Mathematica document?
  • Why do you set the initial condition on i to be at the last point instead of the first? (editor’s note: that took me a long time to get to work; luckily the paths calculated are time reversible)
  • What do you mean they’re time reversible?
  • I race for a living and these are way off. Instead what I do is . . .
  • I want to race for a living now that you’ve given me the tools to win. Where do I send my royalty checks?
  • It seems to me that the cap on the speed gives you discontinuities in your acceleration. Is that allowed?
  • I don’t get your NDSolve command at all. What is that differential equation?
Posted in mathematica, physics, Uncategorized | 5 Comments

Can a pendulum save you?

I’m so thankful to my friend Chija for pointing out this video to me:

Here’s her tweet

When I saw it I started to wonder if angular momentum was enough to explain it. So I set about trying to model it. Here’s my first try:

yoyodrop

Green ball is 20x the mass of the red. No contact or air friction.

It does a pretty good job showing how the fast rotation of the red ball produces enough tension in the line to slow and then later raise the green ball. Here’s a plot of the tension in the line as a function of time:

yoyodroptension

Tension in the line as a function of time. The green line is the strength of gravity. The reason everything is negative is a consequence of how I modeled the constraint (a Lagrange multiplier).

So how did I model it? I decided to use a Lagrange multiplier approach where the length of the rope needs to be held constant. Here’s a screenshot of the code:

yoyo_no_wrap_code.png

“ms” is a list of the masses. “cons” is the constraint.

You define the constraint and the kinetic and potential energies, and then write the Euler-Lagrange equation, with the multiplier term, for x and y of both particles:

\frac{\partial L}{\partial x}-\frac{d}{dt}\frac{\partial L}{\partial x'}+\lambda(t)\frac{\partial \text{cons}}{\partial x}=0

(note that in the screen shot above there’s actually some air resistance added as an extra term on the left hand side of the “el” command).
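
If you’d rather sidestep the multiplier entirely, you can eliminate the constraint by hand: with the total rope length fixed, the hanging mass sits a distance L - r below the bar while the other ball swings at radius r and angle \theta, and the Euler-Lagrange equations collapse to two ODEs. Here’s a rough Python sketch of that reduced version (it assumes the rope stays taut the whole time, and all the numbers are stand-ins, not my actual values):

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.8
m1, m2 = 20.0, 1.0      # hanging (green) ball is 20x the swinging (red) ball

def rhs(t, s):
    r, rdot, th, thdot = s          # th measured from straight down
    rddot = (m2 * r * thdot**2 + m2 * g * np.cos(th) - m1 * g) / (m1 + m2)
    thddot = -(g * np.sin(th) + 2 * rdot * thdot) / r
    return [rdot, rddot, thdot, thddot]

# red ball starts 1 m from the bar, hanging straight down, whirling fast
sol = solve_ivp(rhs, [0, 2.0], [1.0, 0.0, 0.0, 25.0], rtol=1e-10, atol=1e-12)

def tension(s):
    # the hanging mass obeys m1 * (d^2/dt^2)(-(L - r)) = T - m1 g, so T = m1 (g + r'')
    r, rdot, th, thdot = s
    rddot = (m2 * r * thdot**2 + m2 * g * np.cos(th) - m1 * g) / (m1 + m2)
    return m1 * (g + rddot)
```

The fast whirl makes the initial tension far exceed the hanging ball’s weight, which is the effect in the animation above (this sketch uses the usual positive-tension sign convention, unlike the multiplier plot).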

Very cool. But what about the notion that the rope wraps around the bar, effectively shortening the string? I thought about it for a while and realized I could approach the problem a little differently using radial coordinates. First, here’s a code example of a particle tied to a string whose other end is tied to the post:

just_wrapping.png

“rad” is the radius of the bar. Note how the initial “velocities” of the variables need to be related through the constraint.

I’ve changed the constraint so that some of the rope is wrapped around the bar according to the angle of the particle. Here’s what that yields:

justwrap.gif
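
One way to sanity-check just the wrapping constraint is to drop gravity: an ideal wrapped rope does no work, so the speed v = \ell\dot{\theta} stays constant while the free length shrinks as \ell(\theta) = L_0 - a\theta (with a the bar radius). That gives d\theta/dt = v/(L_0 - a\theta), which even has a closed form to check a numerical solution against. A toy Python version (the numbers are stand-ins):

```python
import numpy as np
from scipy.integrate import solve_ivp

L0, a, v = 2.0, 0.1, 3.0       # initial free length, bar radius, constant speed

def dthdt(t, th):
    return v / (L0 - a * th)   # rope speed is constant, free length shrinks

t_end = 1.0
sol = solve_ivp(dthdt, [0, t_end], [0.0], rtol=1e-10, atol=1e-12)

# closed form: integrate (L0 - a*th) dth = v dt and solve the quadratic for th
th_exact = (L0 - np.sqrt(L0**2 - 2 * a * v * t_end)) / a
```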

Ok, so then I wanted to feature wrapping in the code with both masses. Here’s that code:

drop_with_wrap.png

Note the negative sign before “l[2][t]” and the “\theta[2][t]” in the constraint.

And here’s the result, purposely starting the more massive object a little off from vertical:

yoyodropwrap

Fun times! Your thoughts? Here are some starters for you:

  • Why do you insist on using Mathematica for this? It would be much easier in python, here’s how . . .
  • Some of the animations don’t look quite right to me. Are you sure that . . .?
  • This is cool, do you plan to do this for your students soon?
  • What about contact friction between the rope and the bar? I would think that would be a major part.
  • In the video he just comes to a rest instead of bouncing up. Clearly you’ve done this all wrong.
Posted in mathematica, physics, twitter, Uncategorized | 3 Comments

Portfolio SBG

My last post talked about a way to have daily quizzes in my Standards-Based Grading (SBG) optics course. It (and the comments) got me thinking about how to do it even better and I think I’m closing in on a better plan.

The main idea is to have daily quizzes that are problems randomly selected from the previous day’s work. It reduces the amount of homework I have to grade, and tackles the cheating problem since it’s now a no-notes quiz. I liked it a lot in my fall class and I definitely want to keep those strengths. My suggestion was six problems per day that would act as the only contexts for any future assessments (quizzes, screencasts, oral exams, and office visits). One commenter noted that might be too much to ask the students to absorb from Tuesday to Thursday. Also, I wasn’t too happy about the double quiz I suggested on Tuesdays (one for the previous Thursday material and one to act as a re-assessment of week-old information). So, here’s my new thinking:

  1. Assign 3 problems per night
    1. Have them be substantial, covering various aspects of what we talk about in class.
  2. Each day do a quiz on a randomly selected problem from the previous 6 problems (three each from the last two days of new material).
  3. Have the students maintain a portfolio of all the problems so that they can act as context for all future assessments

Things I like about this:

  • Finding 3 solid problems sounds much more fruitful (and easy for me) than finding six every day.
  • I really like the portfolio idea. Want to come improve your standard score? Bring in your portfolio and I’ll randomly ask about one of those three problems. For each of the standards the students will (hopefully) be encouraged to really comprehend the issues around the three problems, especially given that they and I will be encouraged to “turn them inside out” for every assessment.
  • Before every quiz they should be touching up six problems in their portfolio. Admittedly if the quiz is on one they’re not ready for, they get a crappy grade but they can redo it via screencast, office visit, . . .
  • Something we’ll go over today might show up next time or the time after that, allowing for some cycling (we will likely discuss the context of the quiz beforehand and often the details of the quiz afterwards, especially if it seems people are unsure how to approach the problem).
  • Three problems times ~25 standards is a workable number of problems that the students need to master (especially considering that they are in groups of three with common ideas). Certainly it’s easier than six times 25.

Things I’m not sure about:

  • The students “only” have to know how to do three problems per day. Master those, and they’re guaranteed an A. I get student evals sometimes that say I need to do some sort of high stakes exam to make sure they really know it. I’ve tended not to heed such advice, but this has me thinking about that again.
  • There’s a chance that a standard might never be quizzed. Each standard’s three problems are eligible on two quiz days, and each of those quizzes skips them with probability 3/6, so the chance is (1/2)² = 25%. That means they’ll need to submit something on their own. I guess I could use my old “one week rule” (here’s a post back when I called it the two week rule) or something. I could also weight the random selections differently to reduce that 25% to, I don’t know, 10% or something.
    • Hopefully the notion of keeping up a solid portfolio will lower the barrier to having them submit something.
    • If I had the quiz draw from the last 9 problems, there’s an even greater chance that a standard never gets quizzed ((6/9)³ = 8/27 ≈ 29.6%).
  • The days could devolve into “how do we do these three problems” instead of active learning around the content.
  • Students might want to do their own problems for the oral exams (that’s how I’ve tended to do it) instead of just coming with their portfolio ready.
    • A compromise could be that I’ll tell them which standard they’re going to be reassessed on and they can polish up those three problems, of which I’ll randomly select one to grill them on.
    • Another approach could be “bring your whole portfolio to the oral exam and I’ll randomly select anything in there.” I think that would really drive home the notion of keeping up a good portfolio but they might rebel.
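
Those never-quizzed percentages can be sanity-checked with a quick Monte Carlo (sketched in Python here; the semester size is my guess). Each standard’s three problems are eligible on some number of quiz days, and each of those quizzes misses that standard with probability 3/6 = 1/2 (or 6/9 = 2/3 for a last-9 scheme):

```python
import numpy as np

rng = np.random.default_rng(42)
n_standards, n_semesters = 25, 1000     # stand-in semester size

def frac_never_quizzed(chances, p_miss):
    """Fraction of standards whose three problems never get picked.

    Each standard is eligible on `chances` quiz days, and each of those
    quizzes misses it with probability p_miss."""
    missed = rng.random((n_semesters, n_standards, chances)) < p_miss
    return missed.all(axis=2).mean()

# last-6 scheme: 2 chances at 1/2 each -> expect (1/2)^2 = 25%
# last-9 scheme: 3 chances at 2/3 each -> expect (2/3)^3 ≈ 29.6%
```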

So that’s where I’m at (for today 🙂). Your thoughts? Here are some starters for you:

  1. I think 3 is too many/few and that instead you should subtract/add x and here’s why…
  2. I’ve taught with a portfolio approach before and here’s where I think your system is going to fail . . . (this is a cue for my friend Bret to weigh in)
  3. You definitely should also have assessments that do completely different problems and here’s why . . .
  4. How would you teach the students to “turn a problem inside out?”
  5. Here’s how I’d solve the 25%-that-won’t-get-quizzed problem . . .
  6. I think for the oral exams you should limit what they’ll need to bone up on and here’s why . . .
  7. I think for the oral exams you should make everything on the table and here’s why…
  8. Why not have every quiz be a random selection from anything in the portfolio?
    1. Below is a histogram of running 1000 semesters and finding how many problems would never get quizzed using this approach. The average is just a little over the 25% that I get with my approach above.

numbernotquizzed.png
Posted in syllabus creation, teaching, Uncategorized | 8 Comments