F is for midterm

We’re a little past midterm and I wanted to give an update on my optics course, where I’m trying an SBG portfolio approach.

A quick refresher:

  • Every day is a different standard
    • “I can explain what plane waves are”
  • Each day I assign 3 rich problems (some from the book, some I make up)
  • Each day has a quiz on a random problem from the last 2 days
  • For the oral exams students bring in their portfolio of problems, I randomly select one and ask follow up questions on it.

Midterm grades weren’t great. The most common grade was an F. I feel like crap about that. I just wanted to write about what’s been going on to help me reflect.

First the good news: I like the structure. The three problems every day help me really flesh out what I think is important and provide focus for what we do in class. I like a lot of the book problems but it’s fun to make up my own at times too (I really did use the one about 3D movie glasses that I talked about in the other post). Students come to the oral exams with their portfolios and some have some really great work done on them.

So why so many F’s? Those of you who’ve dabbled with standards-based grading know where they come from: “I can always reassess later.” While I thought knowing that a quiz was upcoming would motivate the students to take an honest stab at the problems between each class, quite often it seems that few have spent much time on them before the quiz. They know they can bomb the quiz and still reassess later. It makes for some pretty depressing quiz scores. Combine that with little pressure to reassess early and you get a bunch of F’s for midterm.

The first set of oral exams (each student does three in a week) was very depressing as well. The most common grade was a zero, which they got if they didn’t have anything in their portfolio for the random problem selected. I made it clear they’d get an immediate zero but that we’d spend the time making sure they knew how to get started on the problem.

I just finished the second week of oral exams (separated from the first by four weeks) and saw far fewer zeros. I would ask what the chances of a zero were, and very few said “zero chance, I’ve got something for every one.” With one student I joked that he was treating the oral exams like a casino. One student had only one problem he hadn’t done. That’s the number that came up. 😦

I talked with many of the students who got F’s and asked if they had a plan. Most had a lot of confidence that they’d pass the course, but they realized they needed to start turning in reassessments much more often. While that’s great news, I also hope they start looking at the problems earlier so that their quiz scores are good enough to keep them from having to reassess every standard. I asked a lot of them if they were mad at me because of the F’s and no one admitted to that. Most said it was an honest assessment of their turned-in work, though from several I got the sense that they felt it was a far cry from their internal understanding of the material.

I know from my colleagues’ experience that most of these students will work hard if you give them a hard deadline. My only deadline is the two-week rule that says you have to get in at least a piece of crap for every standard within two weeks of it being activated (talked about in class) or else it’s a zero forever. Most standards have a quiz associated that takes care of that, but the randomness means there’s the occasional standard that doesn’t get quizzed. That’s still a pretty weak deadline compared to my colleagues’ teaching approaches. My dreamer response is that this is a lesson they should learn, but I don’t feel I’m being very successful attaining that goal.

Labs are another place where I’ve realized I have to provide a different style of support. Most labs involve up to an hour of planning, roughly an hour of data collection, and an hour devoted to analysis. What often happens in practice is an hour of planning, an hour of data collection, and everyone leaves. They know that they’ll have 2 weeks to get something in, so why work on the analysis right then? I think a few of the students have come to realize that I can be very useful to them during the analysis stage, but if they don’t stick around they’ll have to track me down later. One big mistake I made was to trust them to do the heavy lifting involved in getting up the Mathematica syntax learning curve to do the types of analysis I want (Monte Carlo-based error propagation, and curve fitting that’s responsive to variable error bars and produces error estimates on all the fit parameters). Last week when I turned in the midterm grades I sat down and made much better support documents in Mathematica that will help them focus on the physics that needs to be studied in the lab. That’s already paid off quite nicely for a couple of students.

Well, that’s where I sit. I’m a little nervous that I’ve lost the students, though I was heartened by some good conversations with each of them this week. I think the final grades will be much better than the midterms but I’m nervous that their memory of the class will be dominated by the last few weeks of the semester when a bunch of them will be making screencasts 24 hours a day. We’ll see.

Your thoughts? Here are some starters for you:

  • I’m in this class and I gave up weeks ago. What would have really helped was . . .
  • I’m in this class and I see a clear path to success. Here’s how I’m going to do it . . .
  • Why do you put an apostrophe in “F’s”? It’s not possessive is it?
  • Why don’t you put more teeth into your quizzes? Here’s how I would do it . . .
  • Can’t you see that SBG just isn’t the way to go with this class? I can’t believe it’s taking you so long to figure that out.
  • If the students end up hating the class but learn the lesson about keeping up on their work that’s a win for me.
  • If you think that students hating a class could possibly be spun as a positive you’re a worse teacher than I thought you were.
  • Why do you do Monte Carlo-based error propagation? It’s clearly getting them into a casino mentality that now you’re wasting our time complaining about.
Posted in syllabus creation, teaching | 4 Comments

Optimal race path

I ride my bike to work so I’m often thinking about the best path to take around corners. I know bike racers and car racers (and bobsledders) are often told to head into a corner wide, then cut the apex, and then exit wide again. The gist is that you want your actual path to have the largest turn radius possible so that you don’t slip out. The question I was thinking about recently was whether there’s a compromise, since typically the largest-radius path (which allows the largest speed without slipping out) is also the longest path (which mitigates a little the fact that it’s the faster path). I also realized that in car racing, and to a limited degree bike racing, the speed is not held constant throughout the path, so I wondered how you could find the optimal path and the optimal speed adjustments throughout. That’s what this post is about.

First a quick story about go-karts. I was “racing” in one (against my friends) and I was trying to follow the wide/narrow/wide path through all the corners. But I was losing! I finally realized that the wheels had terrific grip and that I could floor the pedal and hug all the curves and never spin out. My friends knew this and by the time I figured it out it was too late.

So what’s the physics involved here? The key is to figure out why wheels start to slip in the sideways direction. They have a particular amount of grip and that force provides the instantaneous centripetal acceleration for the wheel. If you know what the grip force is, along with the instantaneous radius of curvature, you can find the fastest possible speed at that section of the road:

F_\text{grip}=\frac{m v^2}{R}


v_\text{max}=\sqrt{\frac{F_\text{grip} R}{m}}
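Putting those two lines together in code is straightforward; here’s a quick Python sketch (the grip force, radius, and mass numbers are made up for illustration):

```python
import math

def v_max(grip_force, radius, mass):
    """Fastest speed through a section of road with the given instantaneous
    radius of curvature, from F_grip = m v^2 / R solved for v."""
    return math.sqrt(grip_force * radius / mass)

# e.g. 5000 N of grip, a 20 m radius corner, a 1000 kg kart:
print(v_max(5000, 20, 1000))  # 10.0 m/s
```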

So, if you know the path of the road, you should be able to figure out the maximum possible speed at every location. So how do you do that? Well, first let’s make sure we understand how we’re mathematically describing the path.

What I decided to do was just pick some random points in the plane. Then I interpolate a path that smoothly connects them all. Here’s the Mathematica syntax that does that:

pts = RandomReal[{-1, 1}, {5, 2}];
intx = Interpolation[pts[[All, 1]], Method -> "Spline"];
inty = Interpolation[pts[[All, 2]], Method -> "Spline"];

So now we have two functions, intx and inty, that characterize what the path does. You can plot the path now using:

ParametricPlot[{intx[i], inty[i]}, {i, 1, 5}]

which gives this:


Main path considered in this post

I knew there was likely some cool differential geometry formula for finding the curvature at any point and I found it at this wikipedia page:

R=\frac{\left(x'[i]^2+y'[i]^2\right)^{3/2}}{\left|x'[i] y''[i] - y'[i] x''[i]\right|}

which I can calculate now that I have the interpolation functions from above. Cool, so now I can find the radius of curvature at every point:


This shows the instantaneous radius of curvature at every point along the curve.
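For anyone following along without Mathematica, here’s the same curvature calculation as a Python sketch, with scipy splines standing in for Interpolation (the waypoints are made up):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def radius_of_curvature(fx, fy, s):
    """R = (x'^2 + y'^2)^(3/2) / |x' y'' - y' x''| for a path (fx(s), fy(s))."""
    xp, yp = fx(s, 1), fy(s, 1)      # first derivatives
    xpp, ypp = fx(s, 2), fy(s, 2)    # second derivatives
    return (xp**2 + yp**2) ** 1.5 / np.abs(xp * ypp - yp * xpp)

# hypothetical waypoints standing in for the random points
i = np.arange(1, 6)  # the parameter just counts the points
pts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, -0.5], [4.0, 0.0]])
intx = CubicSpline(i, pts[:, 0])
inty = CubicSpline(i, pts[:, 1])
print(radius_of_curvature(intx, inty, 2.5))
```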

So now I can use the equation above for the velocity at every point and figure out a trajectory, and more importantly, a time to traverse the path, which I’d love to minimize eventually.

To be clear, I pick an arbitrary grip force and then calculate the radius of curvature and hence the max speed everywhere and I figure out how long it would take to make the journey. I realized that I’d risk the occasional infinite speed for straight portions of the track so I decided to build in a cap on the speed, that I arbitrarily picked.

So how do I figure out the time once I know the speeds? Pretty easily, actually: for every segment of the path the small time is the distance, \sqrt{dx^2+dy^2}, divided by the speed:

t=\int \frac{\sqrt{x'[i]^2 +y'[i]^2}}{v(i)}\,di

where again i is the parametrization that I used (it just basically counts the original random points) and the speed (v(i)) is calculated as above.
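In code, that integral looks something like this (a Python sketch with a made-up path, grip force, and speed cap; the real calculation lives in the Mathematica notebook):

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import CubicSpline

def speed(fx, fy, s, grip, mass, v_cap):
    """Max cornering speed sqrt(F R / m), capped for straight sections."""
    xp, yp = fx(s, 1), fy(s, 1)
    denom = abs(xp * fy(s, 2) - yp * fx(s, 2))
    if denom < 1e-12:              # straight: infinite radius, so use the cap
        return v_cap
    R = (xp**2 + yp**2) ** 1.5 / denom
    return min(v_cap, np.sqrt(grip * R / mass))

def lap_time(fx, fy, s0, s1, grip, mass, v_cap):
    """t = integral of the path-length element over the local speed."""
    integrand = lambda s: np.hypot(fx(s, 1), fy(s, 1)) / speed(fx, fy, s, grip, mass, v_cap)
    return quad(integrand, s0, s1, limit=200)[0]

# hypothetical wiggly test path
i = np.arange(5.0)
fx = CubicSpline(i, [0.0, 1.0, 2.0, 3.0, 4.0])
fy = CubicSpline(i, [0.0, 0.5, 0.0, -0.5, 0.0])
print(lap_time(fx, fy, 0.0, 4.0, grip=1.0, mass=1.0, v_cap=2.0))
```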

Ok, cool, so if you give me a path, I’ll tell you the fastest you could traverse it. But that doesn’t yet let me figure out better paths around corners. To do that I need to generate some other paths to test to see if they’re faster. Remember they might not be as tight of turns (and so likely faster at the curves) but they’re then going to be likely longer. The hope is that we can find an optimum.

How do I generate other test paths? Well, for each of the original random points, I perturb the path in a direction perpendicular to the original path (which I’ll start calling the middle of the road). If there are 5 points, then at each the path will move a little left or right of the center, and I’ll use the spline interpolation again to get a smooth path that connects all those perturbations.
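That perturbation step might look like this in Python (a sketch with made-up offsets; the function name and the finite-difference tangents are my own stand-ins):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def perturbed_path(pts, offsets):
    """Shift each waypoint along the local perpendicular to the centerline,
    then re-spline through the shifted points."""
    tangents = np.gradient(pts, axis=0)                     # finite-difference tangents
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.column_stack([-tangents[:, 1], tangents[:, 0]])
    new_pts = pts + offsets[:, None] * normals
    i = np.arange(1, len(pts) + 1)
    return CubicSpline(i, new_pts[:, 0]), CubicSpline(i, new_pts[:, 1]), new_pts

pts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, -0.5], [4.0, 0.0]])
fx, fy, moved = perturbed_path(pts, np.array([0.1, -0.05, 0.0, 0.05, -0.1]))
```

From there an optimizer just varies the 5 offsets to minimize the traversal time, which is exactly the job NMinimize does in the notebook (scipy.optimize.minimize would be the analogous tool here).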

So now it’s a 5 dimensional optimization problem. In other words, what is the best combination of those 5 perturbations that yields a path that allows the car to make the whole journey faster. Luckily Mathematica’s NMinimize function is totally built for a task like this. Here’s what it found:


The blue stripe is the road. The blue curve is the middle of the road. The red point travels along the blue curve as fast as it can without slipping. The green curve is the result of the optimization process. The green point moves along the green curve as fast as it can without slipping.

Note how in the last curve the red point has to significantly slow down, allowing the green point to win. Cool, huh?

Here’s another example, where I didn’t have the patience to let NMinimize finish (I let it run for 30 minutes before I gave up). It took so long because I used 10 original points, and so it was a 10 dimensional optimization problem. Luckily, just by running some random perturbations I found a significantly better path. Note how it accepts a really tight turn towards the end but still ends up winning:


10 dimensional optimization example

As a last note, I should mention that making the animations took me a while to figure out. I knew the speed at every point (note, not the velocity!) but I needed to know the position (in 2D) at every point in time. I finally figured out how to do that (obviously). Here’s the command:

NDSolve[{D[intx[i[t]], t]^2 + D[inty[i[t]], t]^2 == bestvnew[i[t]]^2,
i[0] == num}, {i}, {t, 0, tmax}]

where tmax was how long the path takes. Basically I’m solving for how fast I should go from point 1 to the last point (i as a function of time). Then I can just plot the dots at the right location at {intx[i[t]], inty[i[t]]}. That worked like a charm.
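The same trick works with any ODE solver. Here’s a Python sketch using solve_ivp on a trivially simple test track, with a constant speed standing in for the optimized speed profile (all names and numbers here are mine, not the notebook’s):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# hypothetical straight test track: x(i) = i, y(i) = 0
i_knots = np.arange(5.0)
intx = CubicSpline(i_knots, i_knots)
inty = CubicSpline(i_knots, np.zeros(5))
speed = lambda i: 2.0   # stand-in for the optimized speed along the path

def di_dt(t, i):
    # from (dx/dt)^2 + (dy/dt)^2 = v^2 with x = x(i(t)), y = y(i(t))
    return speed(i[0]) / np.hypot(intx(i[0], 1), inty(i[0], 1))

sol = solve_ivp(di_dt, (0, 2), [0.0], dense_output=True)
print(sol.sol(2.0)[0])  # parameter value reached after 2 time units (4 here)
```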

Alrighty, that’s been my fun for the last few days. Thoughts? Here are some starters for you:

  • Wow, this is really cool. What I really like is the . . .
  • Wow, this totally blows. What really makes me mad is . . .
  • Can I get a copy of the Mathematica document?
  • Why do you set the initial condition on i to be at the last point instead of the first? (editors note: that took me a long time to get to work, luckily the paths calculated are time reversible)
  • What do you mean they’re time reversible?
  • I race for a living and these are way off. Instead what I do is . . .
  • I want to race for a living now that you’ve given me the tools to win. Where do I send my royalty checks?
  • It seems to me that the cap on the speed gives you discontinuities in your acceleration. Is that allowed?
  • I don’t get your NDSolve command at all. What is that differential equation?
Posted in mathematica, physics, Uncategorized | 4 Comments

Can a pendulum save you?

I’m so thankful to my friend Chija for pointing out this video to me:

Here’s her tweet

When I saw it I started to wonder if angular momentum was enough to explain it. So I set about trying to model it. Here’s my first try:


Green ball is 20x the mass of the red. No contact or air friction.

It does a pretty good job showing how the fast rotation of the red ball produces enough tension in the line to slow and then later raise the green ball. Here’s a plot of the tension in the line as a function of time:


Tension in the line as a function of time. The green line is the strength of gravity. The reason everything is negative is a consequence of how I modeled the constraint (a Lagrange multiplier).

So how did I model it? I decided to use a Lagrange multiplier approach where the length of the rope needs to be held constant. Here’s a screenshot of the code:


“ms” is a list of the masses. “cons” is the constraint.

You define the constraint, the kinetic and potential energies, and then just do a lagrangian differential equation for x and y of both particles:

\frac{\partial L}{\partial x}-\frac{d}{dt}\frac{\partial L}{\partial x'}+\lambda(t)\frac{\partial \text{cons}}{\partial x}=0

(note that in the screen shot above there’s actually some air resistance added as an extra term on the left hand side of the “el” command).
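For anyone who wants to see the machinery without squinting at the screenshot, here’s a one-mass sketch of that same Lagrange-multiplier setup in sympy (my own variable names, not the notebook’s; the two-mass version just adds coordinates to the same recipe):

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
x, y, lam = (sp.Function(n)(t) for n in ('x', 'y', 'lam'))

# Lagrangian for one mass on a rope of fixed length l
L = sp.Rational(1, 2) * m * (x.diff(t)**2 + y.diff(t)**2) - m * g * y
cons = x**2 + y**2 - l**2

def el(L, q):
    """dL/dq - d/dt(dL/dq') + lam * dcons/dq, per the equation above."""
    return (sp.diff(L, q) - sp.diff(sp.diff(L, q.diff(t)), t)
            + lam * sp.diff(cons, q))

eq_x = sp.simplify(el(L, x))   # -m x'' + 2 lam x
eq_y = sp.simplify(el(L, y))   # -m g - m y'' + 2 lam y
print(eq_x)
print(eq_y)
```

Setting those two expressions (plus the constraint itself) to zero gives three equations for x, y, and the multiplier, which is exactly what gets handed to the numerical solver.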

Very cool. But what about the notion that the rope wraps around the bar, effectively shortening the string? I thought about it for a while and realized I could approach the problem a little differently if I used radial coordinates. First here’s a code example of a particle tied to a string whose other end is tied to the post:


“rad” is the radius of the bar. Note how the initial “velocities” of the variables need to be related through the constraint.

I’ve changed the constraint so that some of the rope is wrapped around the bar according to the angle of the particle. Here’s what that yields:


Ok, so then I wanted to feature wrapping in the code with both masses. Here’s that code:


Note the negative sign before “l[2][t]” and the “\theta[2][t]” in the constraint.

And here’s the result, purposely starting the more massive object a little off from vertical:


Fun times! Your thoughts? Here are some starters for you:

  • Why do you insist on using Mathematica for this? It would be much easier in python, here’s how . . .
  • Some of the animations don’t look quite right to me. Are you sure that . . .?
  • This is cool, do you plan to do this for your students soon?
  • What about contact friction between the rope and the bar? I would think that would be a major part.
  • In the video he just comes to a rest instead of bouncing up. Clearly you’ve done this all wrong.
Posted in mathematica, physics, twitter, Uncategorized | 2 Comments

Portfolio SBG

My last post talked about a way to have daily quizzes in my Standards-Based Grading (SBG) optics course. It (and the comments) got me thinking about how to do it even better and I think I’m closing in on a better plan.

The main idea is to have daily quizzes that are problems randomly selected from the previous day’s work. It reduces the amount of homework I have to grade, and tackles the cheating problem since it’s now a no-notes quiz. I liked it a lot in my fall class and I definitely want to keep those strengths. My suggestion was six problems per day that would act as the only contexts for any future assessments (quizzes, screencasts, oral exams, and office visits). One commenter noted that might be too much to ask the students to absorb from Tuesday to Thursday. Also, I wasn’t too happy about the double quiz I suggested on Tuesdays (one for the previous Thursday material and one to act as a re-assessment of week-old information). So, here’s my new thinking:

  1. Assign 3 problems per night
    1. Have them be substantial, covering various aspects of what we talk about in class.
  2. Each day do a quiz on a randomly selected problem from the previous 6 problems (three each from the last two days of new material).
  3. Have the students maintain a portfolio of all the problems so that they can act as context for all future assessments

Things I like about this:

  • Finding 3 solid problems sounds much more fruitful (and easy for me) than finding six every day.
  • I really like the portfolio idea. Want to come improve your standard score? Bring in your portfolio and I’ll randomly ask about one of those three problems. For each of the standards the students will (hopefully) be encouraged to really comprehend the issues around the three problems, especially given that they and I will be encouraged to “turn them inside out” for every assessment.
  • Before every quiz they should be touching up six problems in their portfolio. Admittedly if the quiz is on one they’re not ready for, they get a crappy grade but they can redo it via screencast, office visit, . . .
  • Something we’ll go over today might show up next time or the time after that, allowing for some cycling (we will likely discuss the context of the quiz beforehand and often the details of the quiz afterwards, especially if it seems people are unsure how to approach the problem).
  • Three problems times ~25 standards is a workable number of problems that the students need to master (especially considering that they are in groups of three with common ideas). Certainly it’s easier than six times 25.

Things I’m not sure about:

  • The students “only” have to know how to do three problems per day. Master those, and they’re guaranteed an A. I get student evals sometimes that say I need to do some sort of high stakes exam to make sure they really know it. I’ve tended not to heed such advice, but this has me thinking about that again.
  • There’s a chance that a standard might not ever be quizzed (25% chance, I guess). That means that they’ll need to submit something on their own. I guess I could use my old “one week rule” (here’s a post back when I called it the two week rule) or something. I could also weight the random selections differently to reduce that 25% to, I don’t know, 10% or something.
    • Hopefully the notion of keeping up a solid portfolio will lower the barrier to having them submit something.
    • If I had the quiz be on the last 9 problems, there’s an even greater chance that a standard doesn’t ever get quizzed (29.6%)
  • The days could devolve into “how do we do these three problems” instead of active learning around the content.
  • Students might want to do their own problems for the oral exams (that’s how I’ve tended to do it) instead of just coming with their portfolio ready.
    • A compromise could be that I’ll tell them which standard they’re going to be reassessed on and they can polish up those three problems, of which I’ll randomly select one to grill them on.
    • Another approach could be “bring your whole portfolio to the oral exam and I’ll randomly select anything in there.” I think that would really drive home the notion of keeping up a good portfolio but they might rebel.
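For what it’s worth, here’s where the 25% and 29.6% come from: each quiz draws 1 problem from the pool, and a standard’s 3 problems are eligible for 2 quizzes with a 6-problem pool, or 3 quizzes with a 9-problem pool:

```python
# chance a given standard's problems are never the one selected
p_skip_6 = (1 - 3/6) ** 2   # 6-problem pool: eligible for 2 quizzes
p_skip_9 = (1 - 3/9) ** 3   # 9-problem pool: eligible for 3 quizzes
print(p_skip_6)  # 0.25
print(p_skip_9)  # 8/27, about 0.296
```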

So that’s where I’m at (for today :) Your thoughts? Here are some starters for you:

  1. I think 3 is too many/few and that instead you should subtract/add x and here’s why…
  2. I’ve taught with a portfolio approach before and here’s where I think your system is going to fail . . . (this is a cue for my friend Bret to weigh in)
  3. You definitely should also have assessments that do completely different problems and here’s why . . .
  4. How would you teach the students to “turn a problem inside out?”
  5. Here’s how I’d solve the 25%-that-won’t-get-quizzed problem . . .
  6. I think for the oral exams you should limit what they’ll need to bone up on and here’s why . . .
  7. I think for the oral exams you should make everything on the table and here’s why…
  8. Why not have every quiz be a random selection from anything in the portfolio?
    1. Below is a histogram of running 1000 semesters and finding how many problems would never get quizzed using this approach. The average is just a little over the 25% that I get with my approach above.
Posted in syllabus creation, teaching, Uncategorized | 8 Comments

Daily quiz help

I’m preparing my syllabus for my upcoming Physical Optics course and I’d love some feedback on a policy I’m polishing regarding daily quizzes. Here’s a post from last summer laying out what I did in a recent class (general physics 2). For this upcoming class I don’t have 3 days per week (which is what allowed Mondays to be a reassessment day), so I was thinking of just doing a longer quiz on Tuesdays.

Here’s what I was thinking:

  • every day assign 6 problems
  • randomly select one for the quiz on the next day
  • on Tuesdays additionally select a problem from two weeks prior

In addition I’m thinking that the assigned problems could be the context for both oral exams and office visits. In other words, it’s the only problems they’ll work on. Note, of course, that on all quizzes and exams the problems will be “turned inside out” so that they really represent a type of problem, instead of a specific problem.

Ok, first I realize that I have to be super careful selecting the six problems each day. There really can’t be any fillers in there or super hard ones with fancy tricks that’ll only work in weird situations. I’m up for that challenge.

Here’s one question I have: In the past class I assigned all new problems for the review day so they really had 6 problems for every standard (4 on the day we “covered” the material and 2 for the review homework). Should I assign 6 every night for this Tuesday-Thursday class? Or should I go with 4 since it doesn’t seem too hard to tackle them from Tuesday to Thursday (admittedly Thursday to Tuesday is easier)?

Second question: If a problem is randomly selected, can it be selected again? If so, maybe I should never provide solution sets. I guess I’m leaning toward that already so that they’ll know to just really have a good handle on all the problems (since they could show up anywhere: quiz, oral exam, office visit, etc).

I guess I’m right now circling around 6 problems per class and repeats are fine with no solution sets. What are the downsides I’m not seeing?

Some starter comments for you:

  • I’m going to be in this class and I’m really excited about this. Here’s why . . .
  • I’m going to be in this class, where can I get a drop card?
  • I think x problems per class is the perfect number, here’s why
  • Why do you put “covered” in quotes?
  • If you’re just giving them the problems they have to do, they’re not going to learn since there’s never a surprising question on an exam. You need to assess their understanding, not their ability to refine a fixed set of problems.
  • Can you give some examples of “turning a problem inside out?”
Posted in syllabus creation, Uncategorized | 4 Comments

Unstable rotation (spinning handle in space)

First, watch this:

Cool, huh? My students found this last year when we were studying rigid body rotation. One of the things we did a lot was try to spin a tennis racquet about an axis in the plane of the head and perpendicular to the handle without it rotating about the handle. It turns out it’s pretty hard and the reason is the same as the explanation for the video above.

My friend Will posted that vid recently again and I sent him an animation I made showing a similar result.


He asked for a blog post, so here you go. To make it a fun challenge, I wanted to see if I could do it “off the top of my head,” in other words I wanted to see if I could put together the calculation without checking my notes from last spring when I was teaching this stuff (and hence it was all at my fingertips).

I knew I couldn’t do all the inertia tensor stuff off the top of my head, so I thought I’d see if I could do it with a small number of masses so that the inertia tensor benefit wasn’t huge.

First, I laid out a few point masses to model the handle in the video. I put one at the screw, two at the handle ends, and one at the crossing point. I knew I needed to calculate the location of those points for any arbitrary Euler rotation, so I had to think about Euler rotations first. Basically these are the rotations you can do to an object to put it in any orientation (without changing the center of mass which I put at the origin). It reminded me of the discussions my students and I had about how to do that (before we’d read about Euler rotations) and I decided that sounded easiest:

  1. Rotate about the z-axis by \psi.
  2. Rotate about the y-axis by \theta.
  3. Rotate about the z-axis by \phi.

What that does is the usual theta and phi orientation for a direction from the origin and an additional phi rotation of the body around that direction. It’s not how Euler rotations are usually presented:

  1. Rotate about the z-axis by \phi.
  2. Rotate about the new y-axis by \theta.
  3. Rotate about the really new z-axis by \psi.

It just turns out that’s harder to do numerically since you have to find the new and really new axes. In Mathematica you can do my recipe by:

RotationMatrix[\psi, {0,0,1}].RotationMatrix[\theta, {0,1,0}].RotationMatrix[\phi, {0,0,1}].(points you care about)

The period is how Mathematica does matrix multiplication (including dot products).
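If you want to play along without Mathematica, here’s the same recipe as a Python/numpy sketch (the angles are just sample values):

```python
import numpy as np

def Rz(a):
    """Rotation about the z-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(a):
    """Rotation about the y-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def orient(psi, theta, phi, points):
    """Apply Rz(psi) . Ry(theta) . Rz(phi) to points (length-3 or 3xN),
    mirroring the Mathematica matrix product above."""
    return Rz(psi) @ Ry(theta) @ Rz(phi) @ points

# the final z-component of a body-frame z-axis depends only on theta
v = orient(0.4, 0.7, 1.1, np.array([0.0, 0.0, 1.0]))
print(v[2])  # cos(0.7)
```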

Ok, so now I need to find the locations of my 4 points and then take time derivatives recognizing that my time-dependent variables are theta, phi, and psi. The time derivatives produce the velocities that I can use to calculate the kinetic energy as a function of the variables and their time derivatives. Then I’m in business because I can just do the euler-lagrange approach at that point. Here’s a screenshot of the code:

The locs are the locations of the dots as described before with the handle screw part being 1 unit long and the handle width being that crazy square root of 3/2 + 0.01 which will make sense below. The m function is the rotation matrix described above. The newlocs function determines where all the points are at some arbitrary theta, phi, and psi and the ke is the kinetic energy (note the D used for derivative). The el function is the Euler-Lagrange operator and the sol command puts it all together, including some initial conditions set to rotate the handle as similar to the video as I could do (note that if you don’t set the psi variable to a little off zero you don’t see the instability). Here’s the result:


And here’s an animation looking at the path the screw takes (it’s animated just so the camera can sweep around)


I remembered from the inertia tensor analysis that the stable axes of rotation (among the 3 eigenaxes) are the ones with the highest and lowest eigenvalues. So I calculated those and found that when the length is sqrt(3/2) there is not one in the middle. Here’s a comparison with the length both 0.1 above and below that magic length:


Cool, huh? I hope Will’s happy.
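If you want to poke at that highest/lowest-eigenvalue claim without the full handle model, torque-free rotation obeys Euler’s rigid-body equations, and a quick Python sketch (with made-up principal moments 1, 2, 3) shows the middle axis blowing up while the biggest stays put:

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([1.0, 2.0, 3.0])   # made-up principal moments, smallest to biggest

def euler(t, w):
    """Torque-free Euler equations: I1 w1' = (I2 - I3) w2 w3, and cyclic."""
    w1, w2, w3 = w
    return [(I[1] - I[2]) * w2 * w3 / I[0],
            (I[2] - I[0]) * w3 * w1 / I[1],
            (I[0] - I[1]) * w1 * w2 / I[2]]

def max_wobble(w0, t_end=60.0):
    """Spin mostly about one axis and see how big w1 ever gets."""
    sol = solve_ivp(euler, (0, t_end), w0, rtol=1e-8, atol=1e-10, max_step=0.05)
    return float(np.max(np.abs(sol.y[0])))

print(max_wobble([0.01, 1.0, 0.01]))   # middle axis: the wobble grows to order 1
print(max_wobble([0.01, 0.01, 1.0]))   # biggest axis: the wobble stays tiny
```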

Some starter comments for you:

  1. I was in that class and this really helped me understand . . .
  2. I was in that class and this was a complete waste of time because . . .
  3. I love this! What should I do with my antiquated vpython scripts that couldn’t possibly do this?
  4. I hate this! When I flip my tennis racquet it never rotates.
  5. What other initial conditions show (or don’t show) that instability?
  6. How did you calculate the eigenvalues off the top of your head? What, you just happened to know what the eigenaxes were or something?
Posted in mathematica, physics, Uncategorized | 4 Comments

Best bingo board

My son is in the third grade and his math homework is to play games. The other night we played one that really got me thinking. Each player makes a 4×4 board and puts in any even number between 8 and 48 in every box (note there are more than 16 to choose from and that you can have repeats if you like). I just used the first 16 numbers (8-38) randomly on my grid. Then you roll 4 6-sided dice, add up the total, and then double it (so it’s testing low integer adding and doubling for the homework). You play until someone gets four in a row. As we played we both noticed that 28 kept coming up. I had it once on my board and he didn’t have it at all, so it really just kept extending the game. I told him that 28 would be expected to be the most common (avg roll is 3.5 and 3.5 x 4 x 2 = 28) so we got talking about whether next time we should try a board with all 28s. This post is all about what I learned when trying just that.

I decided to code up the game in Mathematica (this is the century of the decade of the year of the week of the hour of code after all). The low hanging fruit was to match an all-28 board against a board with random numbers on it without any repeats. It’s low-hanging because not having repeats means I don’t have to teach Mathematica how to make a choice when a repeated number is rolled (see below for my try at that). To simulate a roll I just produce 4 random integers between 1 and 6, add them, and double them. Here’s a plot of the probability of each roll:
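You can also get that distribution exactly rather than by simulation, since there are only 6^4 = 1296 equally likely rolls to enumerate (a Python sketch; the blog’s actual code is in Mathematica):

```python
from collections import Counter
from itertools import product

# score = double the sum of four 6-sided dice, so it runs 8..48 in steps of 2
counts = Counter(2 * sum(dice) for dice in product(range(1, 7), repeat=4))
total = 6 ** 4
probs = {score: counts[score] / total for score in sorted(counts)}

print(max(probs, key=probs.get))  # 28, matching the 3.5 x 4 x 2 estimate
print(probs[28])                  # about 0.113
```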

To check if a bingo (four in a row) happens, I just check the board after each roll for any possible bingos.
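That check is simple enough to show in full; here’s a Python version of the idea (the actual game code is in Mathematica):

```python
import numpy as np

def has_bingo(marked):
    """marked: 4x4 array-like of booleans (True = covered square).
    A bingo is any full row, column, or diagonal."""
    m = np.asarray(marked, dtype=bool)
    return bool(m.all(axis=1).any()                  # any full row
                or m.all(axis=0).any()               # any full column
                or np.diag(m).all()                  # main diagonal
                or np.fliplr(m).diagonal().all())    # anti-diagonal

print(has_bingo(np.eye(4)))          # True (main diagonal covered)
print(has_bingo(np.zeros((4, 4))))   # False
```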

Instead of playing matches, I just calculated how many rolls it would take to get a bingo for each type of board. Here’s a histogram of 1000 runs for each type (each bin is the count of the runs that took that many rolls to get a bingo for both types of boards).


Yellow is for the board with all 28s, blue is a random, non-repeat board. Gray is where they overlap.

I was a little surprised by this result. The random boards beat the all 28s board by a fair margin (on average). Did it surprise you?


So then I started wondering about better boards. I realized that if I wanted to do boards with some repeats on them, I’d have to teach Mathematica an effective strategy for making decisions. For example, say you rolled a 22 and you had 3 22s on your board. How do you decide where to put your bingo marker?

[pause while the reader considers]

What I decided to go with was to go for the spots that help out as many potential bingos as possible. That means corners and the inner square are worth more than non-corner edges. What I mean is that a corner spot could be a part of 3 potential bingos (left-right, down-up, and diagonal). The same is true for the inner square. But the non-corner edge spots only have left-right and up-down. So, if given a choice, it’ll go with one of the better ones. If all choices are in the same sort of spot (either all good or all slightly-less-good) then just do it randomly. However, if any of the choices gives you a bingo, I go with that one.
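You can count those potential bingos per spot directly; a little Python sketch confirms the corner/inner-square claim:

```python
def lines_through(r, c):
    """How many potential bingos (row, column, diagonals) use cell (r, c)
    on a 4x4 board."""
    n = 2                    # its row and its column
    if r == c:
        n += 1               # main diagonal
    if r + c == 3:
        n += 1               # anti-diagonal
    return n

grid = [[lines_through(r, c) for c in range(4)] for r in range(4)]
for row in grid:
    print(row)
# corners and the inner 2x2 square score 3; non-corner edges score 2
```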

First I tried boards with randomly selected possibilities on each space. This allowed for repeats, since each space re-ran the random selection. Then I made boards where the randomness just mentioned was weighted by the probability expectation seen above. Here’s a comparison of all 4 types of boards:

It’s really interesting to see that the all 28s board is the worst, on average, even though we expected it to be better based on our (very limited) experience. It’s also interesting to see that the average number of rolls for a bingo is half as much for the weighted random (with repeats) board.

So what’s the best board? I don’t know, but what I did was generate 100 weighted-random boards and play 100 games with each. I then looked for the one with the lowest average. Here’s the winning board:

26 40 24 22
18 36 34 18
38 30 36 26
32 20 26 34

And here’s a histogram of running that board 1000 times:

Note that once it got a bingo in four consecutive rolls! Also note that the board doesn’t have any 28s in it!

Ok, that’s my fun for the week/day/hour of code. I hope you enjoyed it. Thoughts? Here are some starters for you:

  • I’m in your son’s class, thanks! But I tried your best board and my friend beat me once. Therefore this is all wrong.
  • I’m your son’s teacher and I really wish you hadn’t posted this. Now every single time my students play they tie since they always use the same board.
  • I’m a lawyer at a Bingo ™ board manufacturer. I need your mailing address to send a cease and desist letter.
  • Here’s a better idea for an algorithm to deal with the choices that need to be made when you have a repeat board, because the one you used is dumb.
  • Thanks for this! Now I can quit school and stick it to the casinos!
  • Why did you only run 100 boards at the end? What, you didn’t want to stay up even later on a Friday night to let it run longer? Wimp.
  • I don’t believe this. The all 28s board should have trumped everything. You must have a mistake in your code.
Posted in fun, math, mathematica, parenting, Uncategorized | 9 Comments