Boltzmann to Blackbody to Electoral College

Ok, I know that’s a weird title, but bear with me, this has some fun stuff in it, including some things I still need help with.

The basic idea is that Planck’s solution to Blackbody radiation is an interesting way to view the quantum problem with the electoral college. We’ll have some fun tangents along the way.

Ultraviolet Catastrophe

Blackbody radiation is all about describing the (mostly infrared) radiation coming from a hot thing. I’ve written a little about it before but really there are only a few things you need to know:

  • Hot things give off radiation
  • A hot thing with a cavity inside with a small hole to the outside is the easiest to model
  • There were two 19th century physics ideas that most people brought to the analysis.
    • 1. We know how to count how many standing waves can exist inside a cavity (really the number between some very tiny range of frequencies)
    • 2. We assume that all modes of energy (including standing waves) play well together so they all end up with the same average energy, namely something proportional to temperature (the proportionality constant is called the Boltzmann constant and we traditionally use ‘k’ for it).
  • Putting both of those ideas together led to something very strange. Together they predict an infinite amount of energy once you add up the contributions out through the ultraviolet part of the spectrum and beyond.
  • Since that’s not found experimentally, Planck reasoned that one or both of the 19th century ideas had to be wrong. He decided to go after the second one with a very simple (but very strange at the time) idea. Namely that standing waves couldn’t have just any amount of energy. Each one could only have an integer multiple of some base energy that happened to be proportional to its frequency: E=nhf where n could only be an integer, f is the frequency, and h was eventually named Planck’s constant.
    • While this sounds like a fun mathematical approach, it’s interesting to note just how weird it is. It means that when you’re playing jump rope, you can only set the height of the main peak (where the jumper is) to a discrete set of possible amplitudes. Weird.

So how did Planck’s approach avoid catastrophe? Well, the answer is what eventually gets us to the electoral college, so thanks for bearing with me. The higher frequency standing waves couldn’t have an average of kT energy in them because that’s not even enough for the integer n to be 1. Basically if you gave them the lowest (non-zero) energy they’re allowed to have, they’d screw up the average. So they get frozen out and don’t get to play. If they can’t play, they don’t cause the catastrophe.

What’s the connection to the electoral college? As things stand now, each electoral vote does not represent the same number of people. The reason is that our election system can’t tolerate the “freezing out” approach and instead rounds things up to the nearest integer. Basically all the states are treated as the same type of standing wave, but the base energy (or base population in this analogy) is set to the whole US population divided by 538 (the total number of senators and representatives, plus 3 electors for DC). These days that’s roughly 600,000 people. The problem is that some states don’t have that many people (really 3x that many, since every state gets 2 senators and at least one representative). So they get rounded up, and those states end up with a larger impact per voter. Actually, the 2 votes each state gets for free from its senators already skews things, but one proposed fix is to make the base count much lower, so that California gets a ton more votes while the tiny (in population) states keep their 3.
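Here’s a toy sketch of that rounding-up effect, with completely made-up state populations (not census numbers) just to show the mechanics:

import numpy as np

# made-up state populations (NOT census data), just to show the mechanics
pops = np.array([39e6, 8e6, 1.5e6, 0.6e6, 0.3e6])
base = 330e6/538  # roughly 600,000 people per electoral vote
ev = np.maximum(3, np.round(pops/base))  # round, but every state keeps at least 3
print(pops/ev)  # people represented per electoral vote

With numbers like these, the smallest state ends up with roughly 100,000 people per electoral vote while the biggest is stuck near 600,000, which is the whole complaint.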

The rest of this post details some of the strange things I ran into when trying to simulate some of this. If all you care about is the electoral college stuff, there’s not a lot more below. However, if you’re into teaching things like quantum physics and Blackbody radiation, read on because I need some help!

Boltzmann Distribution

The second 19th-century bullet above was a really cool thing when people put it together. The derivation involves a pretty nasty integral (really the ratio of two ugly integrals) but ends up with the amazing result that all energy modes share the same average energy: kT. Amazing. As I was thinking about this post and about doing some simulations, I figured I wouldn’t need to explain the nasty integrals since I could likely just show some fun simulations where the average energy works out.
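For the record, in its simplest form (a single continuous energy variable) that ratio of integrals works out to:

\langle E\rangle=\frac{\int_0^\infty E\,e^{-E/kT}\,dE}{\int_0^\infty e^{-E/kT}\,dE}=\frac{(kT)^2}{kT}=kT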

That’s when I hit a snag!

I figured I could do some early statistical tests of my simulations by checking that they followed the Boltzmann distribution. What’s that, you ask? Consider a system of lots of particles, each of which can be in a random energy state, except that the sum of their energies needs to be a constant: the total energy in the system is fixed. If you reach in and grab a particle, Boltzmann tells you the probability of finding that particle with energy E: it’ll be proportional to e^{-E/kT}. The proportionality constant is found by ensuring the total probability over all energies is 100%, hence the second nasty integral I mentioned above (it’s there for normalization).

So a great test of a simulation of particles with random energy (where you fix the total energy of all the particles to be a constant) is to make sure that the lowest energy states are the most probable and that the probability distribution is (roughly) exponentially decaying.
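A minimal sketch of that check (here sampler stands for any of the microstate generators defined below, and the state is small enough to histogram directly):

import numpy as np

def boltzmann_check(sampler, buckets=6, balls=8, trials=100000):
  # energy of every particle across many independently generated microstates
  energies = np.concatenate([sampler(buckets, balls) for _ in range(trials)])
  counts = np.bincount(energies, minlength=balls+1)
  return counts/counts.sum()  # P(E); should decay (roughly) exponentially in E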

Well, when I tried to put such a system together, I found that most approaches didn’t follow the Boltzmann distribution!

Random microstates

Ok, so it’s your job to put together a collection of particles with random amounts of energy so that their total energy adds up to a fixed constant. How do you do it? Note that while you could also tackle this letting the particles have any energy value, we’re jumping right into the quantum approach where the total energy is a (large) integer and each particle has to have an integer amount of energy. Note also that I’ll likely switch back and forth between particles and energy on the one hand and buckets and balls on the other.

Here are the 4 ways I’ve tried to solve this problem:

Method 1: Stars and bars: Imagine laying out the total energy as an array of cells. Now choose N-1 cell borders randomly (the end points are fair game). Since there’s also always one fixed border on each end, that really makes N+1 boundaries. Between any two successive boundaries is the energy of a particle, so N+1 boundaries give you N particles, each with an integer number of cells (units of energy) in it.

Method 1. Note 5-1=4 added boundaries leading to 5 particles with different energies, all adding to the total (represented by the length of the original line)

Method 2: For each unit of energy, randomly select a particle to go to (or ball to a bucket). Then just look at each particle (in each bucket) to see how many are in there.

Method 2: Each unit of energy (delimited by marks on the horizontal line) is randomly put into a particle.

Method 3: Grab a particle and randomly give it energy ranging from zero to the max energy. Then move to the next particle and give it a random amount from zero to whatever’s left after the first particle. Repeat until you’re out of energy. If the number of particles considered by that point is less than the total number, just set the remainder to zero. If it’s greater than the total number, start over.

Method 3: first come, first served until you run out.

Method 4: For each particle generate a random integer between zero and the total energy possible. Then add up all the energies. If the sum equals the total energy allowed, keep the set. If not, try again (this one is really slow).

Method 4: Sorry, if one of you has 47 most of the rest need to have zero.

Which one do you like? I’ve been having some fun conversations with folks on twitter about this, along with looking up suggestions on various pages online. Google searching seems to run into method 1 a lot, while most of my physics buds like method 2 the best (thanks to my friend Gillian for first suggesting this way – I felt dumb that I’d spent so much time on method 1 before moving on to that one).

For me, I think I like method 2 the best. It seems to be the most random, and it runs nice and fast, though you do have to do some tallying. Method 1 has a great visual, and is called stars and bars because people have been typing things like |**||***|*|***| for a long time when teaching about probabilities. Method 3 felt like a way to avoid the immense waste of time that Method 4 represents.

So, which follows Boltzmann? That was my big question. Honestly my guess was “all of them!” but, well, I was wrong:

Shows the results of making 100,000 versions of each method for 6 particles and 8 units of energy. Only Method 4 shows the Boltzmann distribution

Here’s the code for each method:

import numpy as np
from numpy import random

def method1(buckets, balls):
  # stars and bars: sort the random boundaries, add the fixed ends, then difference
  edges = np.concatenate(([0], np.sort(random.randint(0, balls+1, buckets-1)), [balls]))
  return np.diff(edges)

def method2(buckets, balls):
  # assign each ball a random bucket, then tally up each bucket
  ballassign = random.randint(1, buckets+1, balls)
  return np.array([np.count_nonzero(ballassign == i) for i in range(1, buckets+1)])

def method3(buckets, balls):
  # randomly put some balls in the first bucket, move on until you run out
  curballs = balls
  cur = np.array([random.randint(0, curballs+1)])
  curballs = balls - np.sum(cur)
  while curballs > 0:
    cur = np.append(cur, random.randint(0, curballs+1))
    curballs = balls - np.sum(cur)
  if cur.size < buckets:
    # ran out of balls early: the remaining buckets all get zero
    return np.pad(cur, (0, buckets-cur.size), mode='constant', constant_values=0)
  if cur.size > buckets:
    # used too many buckets: start over
    return method3(buckets, balls)
  return cur

def method4(buckets, balls):
  # rejection sampling: keep trying random fills until the sum comes out right
  t = random.randint(0, balls+1, buckets)
  while np.sum(t) != balls:
    t = random.randint(0, balls+1, buckets)
  return t

What the actual heck?! Why don’t they follow Boltzmann? Only method 4, the slowest (by far!), does it. Most of the rest way undercount zeros (meaning that if you randomly grab a particle after running this 100,000 times, you find zero less often than you should). My twitter and fb buds have offered lots of explanations. Most have to do with counting microstates but not multiplicities (for you real statistics nerds). Here’s an example: consider method 1 with 5 units of energy and only 3 particles. There are lots of possibilities, but let’s only consider these four (where the numbers are where the two (N-1, remember) boundaries were randomly placed): [0,0], [5,0], [0,5], [5,5]. When you remember the two boundaries that are always added, and remember that you actually have to sort the random numbers before doing the differences, you get [0,0,0,5], [0,0,5,5], [0,0,5,5] and [0,5,5,5]. That leads to particles with 0,0,5; 0,5,0; 0,5,0; and 5,0,0. Do you see how you’re over-generating (0,5,0)? Yeah, that sucks.
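You can check that over-counting by brute force. A quick sketch enumerating all 36 equally likely boundary picks for the 3-particle, 5-unit example:

import itertools
from collections import Counter

# every equally likely pair of boundary picks, each drawn from 0..5
counts = Counter()
for b1, b2 in itertools.product(range(6), repeat=2):
  lo, hi = sorted((b1, b2))
  counts[(lo, hi-lo, 5-hi)] += 1  # particle energies from diffs of [0, lo, hi, 5]

print(counts[(0, 5, 0)])  # 2: generated twice as often
print(counts[(5, 0, 0)])  # 1
print(counts[(0, 0, 5)])  # 1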

Two fixes my friends have told me about:

  1. Fix the stars and bars (method 1) thusly: Make a bag of N-1 bars and Etotal stars. Then randomly draw things out of the bag, and then do the work above (see the sketch just after this list). To see the difference, consider a system with 10 particles and only 1 unit of energy. That would mean 9 bars and one star. Method 1 would generate all the bars only on 0 or 1, leaving the one star somewhere in the middle. In fact, as you can see in the [0,5] conversation above, you’d be quite unlikely to find the particle in the first or last bin. But with this correction, since the bars and stars are all jumbled together, you’re just as likely to get the star at any location. My brain still hurts about this one but I really appreciate my buddy Craig for helping me see it.
  2. Fix method 2 by just making copies of any state you find. The number of copies you need to make is the multiplicity you’d expect from the permutations among all the particles. An easy example: if you get (0,0,5), make 3 copies of it so that you’ll get the same number of zeros as you would with all the permutations: (0,0,5), (0,5,0), and (5,0,0). I don’t particularly like this solution since you’re making a weird manual correction, but I believe it produces the Boltzmann distribution.
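Here’s a minimal sketch of Correction 1, the shuffled bag of stars and bars (the function name is mine):

import numpy as np
from numpy import random

def method1_fixed(buckets, balls):
  # a bag of `balls` stars (1s) and `buckets-1` bars (0s), shuffled uniformly
  bag = np.concatenate((np.ones(balls, dtype=int), np.zeros(buckets-1, dtype=int)))
  random.shuffle(bag)
  # the bars split the stars into `buckets` groups; count the stars in each group
  edges = np.concatenate(([-1], np.flatnonzero(bag == 0), [bag.size]))
  return np.diff(edges)-1

Because the shuffle treats every arrangement of the bag as equally likely, every possible microstate shows up with the same probability, which is exactly the equal-weighting assumption the Boltzmann derivation relies on.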

What does nature do?

To me this is the big question. All the methods discussed are ways to produce viable states that make sure to have the right energy. The typical derivation talks about how we assume that any natural system will randomly access all the possible states with equal probability, leading fairly directly to the Boltzmann distribution (or at least an approximation that gets better the bigger your system is). But here’s where I’m stuck. If a “real” system has some distribution of energy among the various particles at one moment in time, what’s the best model to come up with a different distribution in the near future? Honestly for me it’s method 2. You just let every particle go find a new home! But that doesn’t have the right distribution and so wouldn’t lead to all the normal statistical mechanics results we expect.

Method 4? That’s weird. That would be saying that every particle takes on random energies and it all gets locked in only when the total energy comes out right.

Correction 1? I guess that works for me, but I still feel Method 2 makes more sense physically.

Correction 2? That’s just weird. All the energy quanta go find a new particle home and then somehow the system rapidly cycles through all the permutations.
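For what it’s worth, one dynamics people do simulate (in Einstein-solid toy models, for instance; this is my addition, not one of the methods above) is random single-quantum exchange: pick two particles at random and, if the first has any energy, move one unit from it to the second. It conserves total energy, feels at least vaguely physical, and because the forward and reverse moves are equally likely it settles into equal weighting over microstates:

import numpy as np
rng = np.random.default_rng()

# start from any microstate with the right total energy (6 particles, 8 units)
E = np.zeros(6, dtype=int)
E[0] = 8

for _ in range(100000):
  i, j = rng.choice(E.size, size=2, replace=False)
  if E[i] > 0:  # move one quantum from particle i to particle j when possible
    E[i] -= 1
    E[j] += 1

Tallying the energy of a randomly grabbed particle over many steps of this should reproduce the decaying Boltzmann-like distribution, unlike a wholesale Method 2 reshuffle.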

My brain hurts on this one. I’d love some suggestions below.

Modeling a Blackbody

Ok, I’m not modeling the whole thing. Really I wanted to try modeling a system made up of particles that have different minimum energy spacings. Some can take on 0, 1, 2, 3, … units of energy while others are limited to 0, 4, 8, 12, … or 0, 3, 6, 9, … or 0, 117, 234, 351, … units of energy.

What am I hoping to see? What I’d love to see is an approximation of Planck’s prediction of average energy per type of particle. That’s given by:

E_\text{avg}\propto\frac{\Delta E}{e^{\Delta E/kT}-1}
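(That’s just the kT average redone with the energies restricted to integer multiples of \Delta E; both sums are geometric series:)

E_\text{avg}=\frac{\sum_{n=0}^{\infty}n\,\Delta E\,e^{-n\Delta E/kT}}{\sum_{n=0}^{\infty}e^{-n\Delta E/kT}}=\frac{\Delta E}{e^{\Delta E/kT}-1}

For \Delta E\ll kT this limits to the classical kT, and for \Delta E\gg kT it drops to zero: the freeze-out again.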

So how can I model that? Well, here’s what I tried: I randomly assign each unit of energy to a particle. Then I round the energy in each particle down to the nearest level it can actually handle. Then I repeat with the leftover energy. I keep doing that until all the energy is used up. Here’s the code:

import numpy as np
from matplotlib import pyplot as plt
from numpy import random
from scipy.optimize import curve_fit
# this gives a list of quanta in each bucket, not balls. 
# so [1,2,3] for buckets that can hold [1,2,3] energy means [1, 4, 9] energy in each
def redistribute(current_quantas, current_energy,buckets,total_energy):
  # randomly assign all balls to a bucket
  nbuckets=len(buckets)
  assign=random.randint(0,nbuckets,current_energy)
  # find out how many balls in each bucket
  t,_=np.histogram(assign,range(nbuckets+1))
  #round down for each to the nearest full quanta in each bucket
  rounded=t//buckets
  # add to the current quantas in each bucket
  newbuckets=current_quantas+rounded
  # find the leftover energy
  leftover=total_energy-np.dot(newbuckets,buckets)
  # repeat until all energy is distributed
  if leftover==0:
    returnvalue=newbuckets
  else:
    returnvalue=redistribute(newbuckets,leftover,buckets,total_energy)
  return returnvalue
def f(x,a,b):
  return 0.1*a*x/(np.exp(x/b)-1)
def f2(x,a,b):
  return a*np.exp(-x/b)
def wholething(num_buckets,num_balls,max_quanta,loops):
  buckets=random.randint(1,max_quanta+1,num_buckets)
  #print(buckets)
  test=np.array([redistribute(np.full(num_buckets,0),num_balls,buckets,num_balls) for i in np.arange(loops)])
  yvals=buckets*np.mean(test,axis=0)/(num_balls/num_buckets)
  #print(yvals)
  planckfit,_=curve_fit(f,buckets,yvals)
  x=np.linspace(1,max_quanta,100)
  plt.plot(buckets,yvals,'o')
  plt.plot(x,f(x,*planckfit),label='Planck')
  plt.legend(loc="upper right")
  plt.ylim(bottom=0)

Here are a few examples looking at the average energy per mode with a Planck fit (everything is scaled so that 1 is the expected kT average energy):

100 particles, 100 total energy. You can see that particles that can only take on a minimum of ~4 or more units of energy are frozen out.
100 particles, 1000 total energy. Not as good of a fit but not bad.
100 particles, 10,000 units of energy. This one is a little fake because I didn’t use particles with quantum spacing bigger than 10, but it shows the approach to the classical expectation of a flat line.

Ok, that’s cool and all, but here’s the weird thing. This is basically based on Method 2, which I’ve shown above doesn’t follow the Boltzmann distribution! My brain hurts yet again. I’d love some collaborators who could help me reason this out.

Your thoughts?

You made it! This was a long one with lots of twists and turns but it helped my brain to write this up. I’m really hoping folks can help me with some of the loose threads. Here are some starters for you:

  • I love this. My favorite method is . . .
  • This is dumb. Why don’t you just read . . .
  • You have to do method ___ or this is all crap.
  • You’ve got the electoral college wrong. A better way to think about it is . . .
  • So what temperature is our electoral system?
  • Wait, you used python for this. Are you feeling ok?
  • Your mixed quantum system has a major flaw . . .
  • I love your mixed quantum system. Can you also . . . ?
  • Did you just draw those images on your phone while also typing all of this and doing the python calculations on your phone? Wow, everything looks like a nail for your hammer doesn’t it?

About Andy Rundquist

Professor of physics at Hamline University in St. Paul, MN
This entry was posted in physics, programming, teaching.

1 Response to Boltzmann to Blackbody to Electoral College

  1. bretbenesh says:

    I just love that you made a connection to the electoral college.
