For a number of years I’ve been working on finding ways to turn what looks like an unfair die into a fair one (see these posts). Recently I’ve made a lot of progress. This post shows how I’ve turned a 36-sided unfair die into a fair 3-, 4-, 5-, 6-, 7-, 8-, 9-, or 11-sided die.
The two big accomplishments since the last post were to 1) roll an unfair 36-sided die 100,000 times, and 2) figure out how to optimize contiguous groupings of sides to make the fair dice I alluded to above.
The original die
I’m not quite sure why I decided to settle on a 36-sided die, but I did, I guess. I knew that each simulated roll would take a few seconds and that I needed good statistics. I have access to a VDI machine at work that can run all night long, so I finally got around to leveraging that and getting some decent statistics on the rolls for the die (100,000 rolls that took 5 days of continuous calculation!):

A number of interesting things show up in that graph. First, you can tell that the die is unfair because the red dots aren’t flat. The minimum probability is ~0.021 and the max is 0.035, or about 67% bigger. The three other curves are all pretty similar, and certainly all 4 curves are correlated. But in my conversations with folks over the last few years I’ve run into people (and websites) who claim that side area, side solid angle, or side volume (really the volume of the chunk from the center projected out to a side, which is also proportional to the mass of that chunk) should accurately predict the probability. It’s interesting that none of them do!
Evolving dice
Ok, next came the challenge of finding contiguous groupings of sides that would yield (nearly) identical probabilities. I thought I had figured out how to find random groupings as shown in this image for a random die I used in my last post:

The trick was the Mathematica command “FindGraphPartition”, where the graph in question has the faces as nodes and connections between faces that touch. That command finds regions that are connected, trying to keep strongly connected faces together. It does that by looking at the edge weight between them (a higher number means they’re “more connected”). So I just fed that function the graph of my polyhedron with random (positive) numbers for the edge weights (for a 36-sided die there are 54 edges).
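Here’s a rough sketch of what that step looks like (faceAdjacency stands in for however you’ve stored the 54 face-to-face connections, and k is the number of groups you want – just the idea, not my exact code):

    (* build the face-adjacency graph with random positive edge weights *)
    (* faceAdjacency is a list of connections like {1 <-> 2, 1 <-> 5, ...} -- 54 of them here *)
    faceGraph = Graph[faceAdjacency,
       EdgeWeight -> RandomReal[{0, 1}, Length[faceAdjacency]]];
    groups = FindGraphPartition[faceGraph, k]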
So I could run that random weighting over and over again to try to find regions that just happened to be fair. This is hit-or-miss, of course, so I thought I’d try to make a genetic algorithm work.
A genetic algorithm, or really any of the evolutionary programming types, works really well when you have a huge parameter space (54 different parameters, in this case, each of which can be any positive real number) and lots of potential local minima. What I wanted to do was use the 54 parameters as continuous variables and let the genetic algorithm test random “parents,” rank them, throw away the bottom half of the population, and then repopulate that bottom half with “children” made from the remaining parents. I pair two parents up and take the first, say, 20 parameters from one and the last 34 from the other (and vice versa) to make two kids. Then I “mutate” some of the kids’ “genes” by adding a random number to one or more of the parameters. Then I run them through the fitness test and the next generation repeats.
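If it helps to see it, here’s roughly what one breeding step looks like (the crossover point and mutation size here are just illustrative choices, not necessarily what I used):

    (* sketch: breed two parents (each a list of 54 positive edge weights) into two kids *)
    breed[p1_, p2_] := Module[{cut, kid1, kid2, mutate},
      cut = RandomInteger[{1, Length[p1] - 1}];        (* one-point crossover *)
      kid1 = Join[Take[p1, cut], Drop[p2, cut]];
      kid2 = Join[Take[p2, cut], Drop[p1, cut]];
      (* mutation: add a small random number to one gene, keeping it positive *)
      mutate[k_] := MapAt[Max[# + RandomReal[{-0.1, 0.1}], 0.001] &,
         k, RandomInteger[{1, Length[k]}]];
      {mutate[kid1], mutate[kid2]}
      ]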
In this case the fitness test I used was the max probability (of one of the contiguous groupings) minus the min probability. I got the group probabilities by adding up the side probabilities involved in each contiguous region, and those came from the 100,000 rolls I did earlier. If the max minus min goes to zero, then all the probabilities are equal and I’ve got a perfectly fair die. That never seems to happen, but after 1000 generations I tended to get decent results.
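In code, the fitness test looks something like this (sideProbs is the list of 36 measured side probabilities from the 100,000 rolls, faceGraph is the adjacency graph from above, and the ConnectedGraphQ bit is the fix mentioned in the comment starters below for partitions that come back non-contiguous – again, a sketch rather than my exact code):

    (* sketch: fitness of one set of edge weights for a k-sided grouping *)
    fitness[weights_, k_] := Module[{g, groups, groupProbs},
      g = Graph[EdgeList[faceGraph], EdgeWeight -> weights];
      groups = FindGraphPartition[g, k];
      (* non-contiguous groupings get a huge cost *)
      If[! AllTrue[groups, ConnectedGraphQ[Subgraph[g, #]] &], Return[10.^6]];
      groupProbs = Total[sideProbs[[#]]] & /@ groups;
      Max[groupProbs] - Min[groupProbs]   (* zero would mean perfectly fair *)
      ]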
Before talking about what I mean by “decent,” here’s a pic of my fairest 5-sided die, followed by a panorama of all my dice so far (again, they’re all made from the same 36-sided die, just with different groupings of sides painted):


bottom row: 7-, 8-, 9-, and 11-sided fair dice
For all of them, the die is shown on the bottom and the groupings of sides are broken out.
What is “decent” or close-enough to fair?
Since none of the genetic algorithm runs ever ended with a fitness of zero, none of the dice in the image above are strictly fair. But what’s fair enough? My kids and I decided that no one would really notice if, after a few hundred rolls, the sides seemed roughly fair. That can be quantified a lot better, of course, but that’s the gist of what I did here.
Let’s say you rolled a fair 7-sided die a bunch of times. What would the histogram of the rolls look like? If you rolled it an infinite number of times, every side would come up 1/7th of the time. But you’re not going to roll it that often. If you only roll it 100 times, you might expect each side to be rolled 14 times, with two of them rolled 15 times. But you don’t usually find that. Instead you get more variability than you expect (or at least than some of us would expect). Counting statistics (or Poisson distributions, if you like) suggest that the typical variation for each side after 100 rolls would be the square root of 100/7, or roughly 3.8. In other words, most of the time the sides would be off from the expected 14 by 3 or 4 (either high or low – obviously the sum over the sides would still be 100).
Ok, so what if you are suspicious that it’s not fair? Well, you can roll it a bunch of times and check the result against what the fair statistics would suggest. If you do it 100 times and you get a bunch of results within 3 or 4 of 14, you’d have to admit that it still seems fair. Of course, if you’re patient you could roll it 1000 times. Then you’d expect each side to come up 142 or 143 times, with a typical spread of 12. Don’t freak out that 12 is bigger than 3 or 4. What really matters is the relative spread: 12 divided by 142 is smaller than 3 or 4 divided by 14.
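If you want to check those numbers yourself, the expected count per side is the number of rolls divided by 7 and the typical spread is the square root of that:

    (* expected count per side and typical (Poisson) spread for n rolls of a fair 7-sided die *)
    spread[n_] := {n/7., Sqrt[n/7.]}
    spread[100]    (* ~{14.3, 3.8}, relative spread ~27% *)
    spread[1000]   (* ~{142.9, 12.0}, relative spread ~8% *)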
So what I did was look at how many times you’d have to roll my not-quite-fair dice before the results start to look suspicious. The very worst of my dice would need over 500 rolls for that to happen. I guess I think that’s “good enough.”
Of course there are much more formal ways to do this. Mathematica provides DistributionFitTest for just this purpose. You use it by providing a set of rolls of a die and asking for the chance that the rolls came from a perfect die. It returns a p-value that can be interpreted as exactly that chance. Of course every 1000 rolls is different, so if you rerun the command with a different set of rolls you get a different p-value. That’s why what I’ve got below are histograms for each die, where I generated 1000 sets of 1000 rolls each. The x-axis is the p-value it found and the y-axis is the probability.
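The basic call looks something like this (groupProbs here would be the seven grouped probabilities for, say, the 7-sided die – again just a sketch of the idea):

    (* sketch: p-value that a set of simulated rolls came from a perfect 7-sided die *)
    rolls = RandomChoice[groupProbs -> Range[7], 1000];
    DistributionFitTest[rolls, DiscreteUniformDistribution[{1, 7}]]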

Note that doing the same thing with perfect rolls yields a graph similar to the 7-sided curve. Really the only one that’s not great on this scale is the 11-sided die. Note also that if I do this for an un-optimized contiguous side set, I nearly always get a p-value less than 5%. This shows that I really needed that genetic algorithm.
Next steps
I’d love to 3D print my 36-sided die a few times and paint the sides. I bet I’d find some D&D folks interested in buying them.
I’d also like to figure out why I can’t do 10-sided. Mathematica crashes every time I run the genetic algorithm. I really can’t figure out what’s going on.
I’d love to figure out if you could predict the side probabilities from some physical measure of the die. Obviously side area, side volume, and side solid angle don’t do it, but maybe some other measure does. One thing I’ll try is looking at the effective depth of the potential energy dip for each side. What I mean is: to topple from one side to a neighbor, you have to make the die go up on an edge. That has a gravitational potential energy cost. Each side is a triangle, so you get three such measures for each side. Wouldn’t it be cool if the actual rolling probabilities tracked with that measure! Then I wouldn’t have to spend a week calculating those probabilities and I could really do some fun stuff.
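To be concrete, here’s how I picture that calculation working (com is the center of mass, the resting face is given by its three vertices, and {v1, v2} is the edge you’re tipping over – all just a sketch of the idea, not something I’ve run yet):

    (* sketch: how far the center of mass rises when toppling over one edge of a resting face *)
    energyBarrier[com_, {p1_, p2_, p3_}, {v1_, v2_}] := Module[{n, hFace, hEdge},
      n = Cross[p2 - p1, p3 - p1];                           (* face normal *)
      hFace = Abs[(com - p1) . n]/Norm[n];                   (* height resting on the face *)
      hEdge = Norm[Cross[v2 - v1, com - v1]]/Norm[v2 - v1];  (* height balanced on the edge *)
      hEdge - hFace                                          (* times m g for an actual energy *)
      ]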
Your thoughts?
Here are some starters for you:
- I think this is really cool. What I especially liked was . . .
- This is really dumb. Here’s why . . .
- Why do you call it a “fitness function” when you’re clearly minimizing? Try calling it a cost function, jerk.
- I ran FindGraphPartition and occasionally got partitions that aren’t contiguous! What sort of magic did you do to fix that? (answer: updated the fitness function to make sure those got huge costs – used ConnectedGraphQ on a graph subset)
- If you can’t make perfect dice, I’d never pay for them. You should say that more clearly at the top of this dumb post
- Why do you bother with contiguous groupings? Couldn’t you just use random groupings and print them with the appropriate number in each triangle? (answer: I do get better p-value results this way, but my kids think contiguous is cooler)
- I think you’ve succumbed to p-value abuse. You clearly run it over and over again until you get what you want. Hence the histograms.
- I think your idea about the potential energy measure of a side has merit. Here’s what I’d do . . .
- I think I know a different physical measure that will predict the probabilities. Here’s what it is . . .
- There is no physical measure that will work. You’ve got to really roll them. I think you should pick a random shape, 3D print it, roll it 100,000 times yourself, look at the probabilities, then make a tiny change and repeat.
- I don’t think a genetic algorithm was the best choice here. Instead I would . . .
- All the Mathematica advertising gets old. I’m pretty sure I could do all this with my TI-84.
This is cool. You do more cool, research-y things as an administrator than many professors!
Thanks Bret! You’ve actually inspired me to do more blogging with the work you’ve been doing lately, so thanks!
Glad you’re back!
thanks!