I’m teaching a class in the fall called “Web App Development with Google Apps Script” that I think I want to write my own book for. I started doing some of that using Wikibooks, but I was frustrated by some of that platform’s limitations: you have to click around a lot as you go back and forth among sections, and it’s really hard to add images, since most are considered copyrighted unless you jump through a bunch of hoops.
So I thought it might be fun to see if I could make a book editor in Google Apps Script. That’s pretty meta, huh?
For my dashboard project I figured out how to upload images to Google Drive and how to determine the URL that can be put into an img tag. So I reused that code, augmenting it to let me add a description that becomes the default alt text. I also made it so that you can browse the images you’ve already uploaded in case you want to reuse them elsewhere in the book.
The other cool thing I learned how to do was to populate something in the browser’s clipboard on demand. Very cool.
I used to love the shortcuts I could create in documents, things like \qm for “Quantum Mechanics” etc. I realized I could do that in this project too, but at first I just hard coded them into the rendering portion of the code, doing a bunch of one-off replacements. Then I realized I could just put the pattern (/<<qm>>/g) and the replacement (“Quantum Mechanics”) into spreadsheet columns and run through as many as the user wants to add.
So now I just edit my spreadsheet with new shortcuts (I called them filters but the idea is the same) and the next time I load the editor those shortcuts are available.
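The replacement loop itself is tiny. Here’s the idea sketched in Python (the app actually does this in Apps Script JavaScript; the table below is a made-up stand-in for the spreadsheet’s pattern and replacement columns):

```python
import re

# hypothetical rows from the "filters" sheet: (pattern, replacement)
filters = [
    (r"<<qm>>", "Quantum Mechanics"),
    (r"<<gas>>", "Google Apps Script"),
]

def apply_filters(text, filters):
    """Run every (pattern, replacement) pair over the text,
    just like the rendering step does with the spreadsheet rows."""
    for pattern, replacement in filters:
        text = re.sub(pattern, replacement, text)
    return text

print(apply_filters("Intro to <<qm>>", filters))  # Intro to Quantum Mechanics
```

Adding a shortcut is then just adding a row; no code changes needed.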
GAS is really great for user authentication. This command gets you the email of the current user:
var email = Session.getActiveUser().getEmail();
and you can do whatever you want with that, including allowing editing vs limiting the user to viewing your book. When you deploy a Web App you can say that it executes as you but that it’s available to the world. When someone outside of your domain goes to your web site, the command above returns an empty string. But when someone in your domain goes to it, you get their email. So you could imagine lots of editors, for example.
GAS has lots of limitations. It’s not particularly fast, though once the data reaches the user’s browser everything is snappy from there. Sending data back and forth to the Google spreadsheet usually takes a couple of seconds, which isn’t the end of the world given all the other features you get (for free!).
I pass to the user all the chapter and section names but not the detailed text of the sections, only sending that for the chosen section. So each time you go to a new section, it has to send an asynchronous request to google to get it. Again, ~2 seconds.
I’m assuming my book will get long enough that sending the whole text of the book will be problematic, so that’s why I only ever have the text from one section in memory.
Since it’s all being saved in a Google spreadsheet, you have some fundamental limits on length. Sources conflict on the details, but there’s agreement that you can’t have more than 5,000,000 cells in the spreadsheet. That’s a lot of chapters and sections. Some sources say no single cell can hold more than 50,000 characters, though not everyone agrees. Assuming an average word length of, say, 8 characters (counting the trailing space), that would mean sections of the book would have to stay under about 6,250 words. Since none of my blog posts have ever been that long, I don’t think I’m worried about that.
What’s it good for?
Of course I built this to write a book for my class, but since it’s all contained in a spreadsheet, it’s super easy to make copies! If you go to this spreadsheet and make your own copy, you too can write a book. All you’d have to do is:
Clear out all the data (but not the top rows) in each tab. Note that column E in the “sections” tab is hidden; you’ll want to delete that data too.
Update your chapters and section numbers (watch the end of the vid linked above to see how that works)
Go to tools->script editor
In the script editor, update the top 2 lines of the globals.gs file
Note that you’ll want to make a new folder for your images and set it to be viewable by anyone
Go to Deploy in the upper right
Click on “New Deployment”
Choose “Web App”
Follow the instructions and deploy. You’ll be shown your new URL
I think I’ll probably use it for a lot of things. Even in classes where I’m not writing a book, I could still use it for organizing additional resources for my students.
I also think it might be really great for making manuals on how to do things.
Here are some starters for you:
I love this, especially the part about . . .
I hate this. Why don’t you just use . . .
Wait, this is for the fall term? Don’t you have some final projects to grade?
I think you should teach a class that teaches people how to make this tool that lets them write a book for a class on how to teach the class. That would be more meta.
I think it would be cool if you could add . . .
Markdown is just watered down . Why not use that instead?
I’ve got a great idea for what I’d use this for . . .
I’ve got a great idea for how you should never use this . . .
I almost titled this “I hate ‘input’ and ‘print’” but that’s not really true.
I’m teaching a course called “Introduction to Computational Data Science” this semester, just like I did last spring, and even with only two days under my belt I’m reminded how much I struggle with the ‘input’ and ‘print’ commands. I think it has to do with the years I’ve spent programming in Mathematica, though using Jupyter and/or Colab feels quite similar.
So what am I referring to? The first assignment in this class (which has a programming class as a prerequisite) is for the students to make a video walking me through a python function they’ve made that takes a value and returns a list that depends on whether the value is even or odd. It’s also supposed to throw an error if the value is not an integer. It’s really a quick test of their programming ability, and it lets me diagnose things quickly for those who might need a little help.
What’s the big deal, right? Well, nearly all the students do something like this:
x = int(input("please give me a number, you wonderful stranger"))
if x % 2 == 0:
    for i in range(x//2 + 1):
        print(x**2)
else:
    for i in range(3*x):
        print(x**3)
Why, you may ask? Because that’s how the textbook a lot of them used last semester encouraged them to do things, and even the text I’m using, which has a lot of basic python chapters that I only use as reference, basically encourages that sort of function.
But I hate it! Ok, that’s, again, too strong. But I do have some problems with it, and it has to do with who the audience is.
Often in introductory programming courses you’re encouraged to think about a fictional client that you’re writing code for. Hence the “you wonderful stranger” joke above. And for clients like that, printing nice messages or values is quite a reasonable thing to do (hence the ‘print’ commands).
But for computational data work, my audience is often (always?), well, me! For me, just making a function that takes an argument and then later calling it with whatever argument I want works just fine:
def even_or_odd_list(x):  # function name is mine; any name works
    if type(x) != int:
        return []
    if x % 2 == 0:
        return [x**2 for i in range(x//2 + 1)]
    return [x**3 for i in range(3*x)]
Lots to unpack there:
The function takes an argument instead of relying on “input”. That means it can be used in a larger program.
The function always returns a list, though if the input isn’t an integer it’s an empty list. This should help it fit into a larger program.
No print statements! This is a workhorse little function that can be called a bunch of times and it won’t clutter up your workspace.
Beautiful list comprehensions instead of clunky for loops. Often if I’m looping through something I’m creating a new list and that’s what list comprehensions are built for
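To make that last point concrete, here’s the loop-vs-comprehension comparison on a tiny made-up example:

```python
# building a new list with a for loop...
squares_loop = []
for i in range(5):
    squares_loop.append(i**2)

# ...versus a list comprehension: one line, no append bookkeeping
squares_comp = [i**2 for i in range(5)]

assert squares_loop == squares_comp == [0, 1, 4, 9, 16]
```

Same result either way; the comprehension just says “I’m making a list” up front.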
So if you’re slowly building a tool set that might let you gather and analyze big data, I don’t think you should be using “input” or “print” commands, at least not very much. They’re for debugging, sure, but if you’re using Jupyter or Colab, just start a new line of code to check stuff. Plus if you’re using those you can tell the story of what your code is doing so much better than if you use Spyder or some other IDE.
Ok, rant over. Your thoughts? Here’s some starters for you:
I’m in this class right now and I need to go back and change my homework submission.
I’m in this class right now and I need to know how I drop.
I like this. While we’re at it, let’s try to keep students from using . . .
I hate this. Don’t you realize how powerful “input” and “print” are?! For example . . .
I like the two-audiences approach you’re taking. What I would add is . . .
If you’re not writing code for someone else to use you can’t call yourself a programmer.
If you’re writing code for someone else to use you can’t call yourself a programmer.
I love Jupyter/Colab for these reasons and more . . .
I turned my homework in last night and I assumed you’d be grading it instead of writing this drivel
The catenary is the shape of a hanging chain supported at both ends in a constant gravitational field (i.e., normal life). Recently Rhett Allain has been doing some great work using both python and analytical results to show how you can calculate and simulate a catenary.
His work reminded me that I had never finished an approach to this problem that I hatched several years ago. I wanted to see if I could use Lagrange multipliers to ensure that the spacing between the beads (I’m modeling a beaded string much like Rhett) stays constant. I wanted to start the string in some initial configuration with the two ends fixed and let it then evolve over time, with a fair amount of friction added, so that it would settle into the final shape, i.e., the catenary. The problem was that I was stuck on how best (or whether at all!) to set up the initial configuration such that all the spacings between the beads were fixed and the string stretched from one fixed point to the other.
With Rhett’s inspiration, however, I figured out a way to do it. I think I was stuck on some sort of evenly distributed setup where the beads zigzagged up and down with just the right angle so that the string would make it to the other end without having to stretch any of the spacers. But I found an easier way.
I pick one of the beads towards the middle (actually almost any bead will work with the possible exception of the beads closest to the fixed points) and find where I can put it so that the string bends only at that bead. In other words, the beads form a straight line from the first fixed point to that chosen bead and then a different straight line to the second fixed point.
At first I thought I’d be lucky to get this to work, but after drawing a bunch of circles I convinced myself that you can (nearly) always find a location for that chosen bead that works.
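Here’s a minimal Python sketch of that placement, treating it as a circle-circle intersection: the bend bead sits a straight-line distance r1 (its chain length to one end) from fixed point A and r2 from fixed point B. The function name and the choice of the lower intersection are mine.

```python
import math

def bend_bead(ax, ay, bx, by, r1, r2):
    """Find the bead position at distance r1 from (ax, ay) and r2 from
    (bx, by): the intersection of two circles. Requires
    |r1 - r2| <= d <= r1 + r2, where d is the distance between the ends."""
    d = math.hypot(bx - ax, by - ay)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from A along the A->B line
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # offset perpendicular to that line
    ux, uy = (bx - ax) / d, (by - ay) / d  # unit vector along A->B
    px, py = -uy, ux                       # unit vector perpendicular to A->B
    return ax + a * ux - h * px, ay + a * uy - h * py  # lower intersection

# example: the bend bead is 2 units of chain from each end, ends 3 apart
x, y = bend_bead(0.0, 0.0, 3.0, 0.0, 2.0, 2.0)
```

With the bend bead placed, the rest of the beads just go at even spacings along the two straight segments, so every spacer constraint is satisfied from the start.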
With that, the initial conditions match all the constraints and I can get to the calculation!
As I note at the bottom of this post, I can model this as a bunch of seemingly free particles (so model x(t) and y(t) for every bead) that are exposed to gravity along with an unknown Lagrange multiplier for every spacer constraint. So that’s what I did. Here’s the Mathematica code:
Ok, here’s what I have so far:
It works pretty well! One cool thing about doing Lagrange Multipliers is that they tend to tell you about the forces required to maintain the constraint, namely making sure that all successive beads are held a fixed distance apart. Here’s a plot of those forces for the second animation above:
So, thanks to Rhett’s great work I finally got back around to this. I really like his simulation approach, which basically puts really strong springs in as the spacers. But I’ve always wanted to see if you could use Lagrange multipliers to enforce the constraints without resorting to those springs.
What do you think? Here are some starters for you:
I like this, but I like Rhett’s way better. Maybe you could …
I think this is dumb, I never fire up Mathematica when I hang things.
I’m in class with you tomorrow and I don’t see how this has anything to do with Computational Data Science.
Let me know when you’ve done this in vpython.
I don’t understand why you take two time derivatives of the constraint equations. I thought you said you don’t have to do this anymore
Why can’t you just reduce the dimensionality of this problem and just do a bunch of angles? Then you could figure out the constraint forces by finding the accelerations of all the beads that isn’t provided by gravity.
Can you model it with one of the ends moving? (Answer: oddly no! I tried that and the last spacing wasn’t constrained. Not sure what’s going on there)
I teach again in just a week and have set a goal for myself to make an app that I can use in my synchronous meetings. As of this weekend, I think it’s working (see github repo here)! You can read a lot of my background philosophy here, here, and here. The basic gist is that video is a bandwidth and screen space hog and so I think there might be better tools to put front and center for students to interact with.
My new synchronous dashboard app is a major upgrade from the old one. Here’s a screenshot:
You can see a walk through of the app in this Loom video. The major features are:
I’ll try to hit the highlights of each of those below.
Jitsi is an open-source competitor to Zoom and Google Meet. It has basically all the same tools those have, but it also has an API I can use to make my own interface, so that’s what I’ve done.
For all rooms I connect the right students with audio only. I only give them a mute button.
Jitsi also provides a data channel that lets all the participants pass data back and forth. That’s what I’m leveraging to send all the information about the other interactive elements (chat, understanding checks, etc). I used to do that using Pusher but this is cleaner.
Jitsi lets you use their public server, but they don’t really guarantee good connections. Instead I’ve installed it on a new digital ocean droplet and my quick testing suggests that the $5/month it’ll cost me provides enough bandwidth for my class (~25 students meeting 3 times a week for an hour).
I’ve had a growing wish list for breakouts. Here are the features I’ve been able to build in:
Pretty easy to assign students to new breakouts
Automatically logs them out of the main room and logs them into the breakout room. They don’t have to press anything.
Chat and whiteboard dedicated to each breakout
When they’re in a breakout they can still see the main room chat and whiteboard, though if those are updated by the instructor during the breakout they won’t see the changes. This is especially useful for when students can’t remember what they’re supposed to be doing.
The instructor is “in” every breakout, though they start with no sound (in or out) to cut down on the cacophony. They can interact with chat and whiteboard right away and can rejoin with sound if they (or the students) want.
When students are back in the main room they can still go see the chat and whiteboard of the breakout. The instructor can also share all breakout room boards to everyone if they want.
Things that are still on the wish list:
Easy way to save who has been assigned to breakout groups in the past to easily replicate
Easier way to have the instructor talk to the students without having to rejoin
I spent a lot of time last year learning how to manipulate html canvas elements, including figuring out how to capture where a pen has gone so that I could send those coordinates to everyone. The problem is that the work I did just scratched the surface of what I wanted. I realized that lots of smart people have tackled online whiteboards and maybe I could just dump a useful one in an iframe on my page. Well, yep, that’s exactly what I did.
Mine is a Google school, meaning that email@example.com is really a Gmail account. That means I can leverage the Google infrastructure for user authentication (built in already) and for generating and sharing various documents. That includes the very handy Google Drawings! Yes, Google also owns and suggests using Jamboard for online collaboration, but you can’t (yet?) embed those in iframes. But Google Drawings are nearly as useful, including the ability to put in hyperlinks, and don’t mind at all being in an iframe.
Let’s say we’re all in the main room and I want to share a screenshot of the code we’re developing. Here’s what happens:
I (as the instructor of the course) hit the “whiteboard” button.
A request is sent to the google server asking it to make a copy of my blank drawing template, save it in the google drawings folder of the class (which is shared with everyone in the class), and return the url of it.
Now everyone is staring at the whiteboard on the page (they don’t have to go anywhere else!) and they can interact with it.
Because it’s saved in a folder they have access to (with a handy name indicating what class, room, and date it was used in/on) they can always go back, even outside of class time, to look at it.
If the instructor repeats the process listed above, the iframe currently displayed is set to “style.display=none” and another is generated with the new url as the source. The students can flip back and forth among any of the whiteboards launched this way. If the instructor wants to make sure everyone is looking at the same one, they can force that. If a student joins late, this process works seamlessly for them as well (in other words, they see any whiteboard the instructor launches via “see mine” or “new whiteboard” after they’ve joined).
Whiteboards that are used in breakout rooms can be sent to everyone in the main room by the instructor as well.
Raise hand queues
I’ve talked a lot about this before. I just directly lifted the code from my old version. It goes beyond a normal hand-raise queue (that might, for example, show the names in chronological order) by having two queues: one for follow ups to the current topic and another for new topics. Everyone can see who is in either queue and they can transfer their hands to and from either queue.
To save bandwidth and complexity I no longer store this information on the server for analysis later. I can always add that back in if it seems like it would be useful.
Note this functionality only works in the “main” room.
I really dislike how Zoom and Google Meet privilege video over the chat window. My app makes sure that chat is always front and center.
Students can also initiate 1-on-1 chats with anyone else in the same room as them (recall that the instructor is always in all the rooms). I really think this is important as often people would rather get a quick clarification of something from a friend/colleague/classmate than ask the whole class. I’ve seen some folks talking about the loss they and their students feel when they realize that they don’t have this tool, at least not easily.
I’ve made sure to make all chats visible so the users don’t have to click a pulldown to see their various chats. This should dramatically reduce the number of times someone sends a text to the wrong person.
When there are breakouts going on, the instructor can send messages to individual breakout rooms or to all of them at once.
I really like using quick polling, whether that’s for Think-Pair-Share/Peer Instruction polls or just to check something quick, like “should we do an open-book test?”
I’ve built in a very simple polling system for the moment. The 4 (for now but easily changeable) choices are checkboxes always on the screen for the student. If I ask a question I’ll just say something like “(1) is for ice cream, (2) is for donuts, and (3) is for broccoli.” The results show up on the fly for the instructor who can then just tell the class the result.
Eventually passing the results to everyone is doable, but I’m not in a rush, as the way I’ve always done peer instruction is exactly as I’ve built it.
In my old synchronous dashboard I was proud of the various buttons I put up. Things like yes, no, confused, laughing, cat’s-on-my-computer were, I thought, a fun way to foster interaction. However, after using them for teaching and for meetings with colleagues, I noticed that people very rarely used them at times other than when I asked for a quick poll. So I figured the polling above would be a better solution.
However, there’s something I do in teaching face-to-face all the time that I wanted to replicate here if I could. Quite often I’ll say to students that I want to get feedback from them on a particular scale, like “confidence you can get the Twitter api to work.” Instead of seeking a binary answer, I tell them to use their hand height to indicate their confidence. Putting your hand on (or even below!) your desk indicates a great lack of confidence, while raising it high above your head shows great confidence. I’ve really liked those moments, though sometimes I think people are nervous everyone is looking at them.
So for online I thought I’d use an input type="range", or slider, to accomplish this. I call it an analog slider but it really only has 100 steps (0-100). Students can set it when I ask such a question and I (as the instructor) immediately see the class average.
I plan to use this a lot in class by asking for “understanding checks” or possibly “confidence checks.” I’m really excited about it!
Well, what do you think? I’d love to get some feedback, especially in this last week before class starts.
Here are some starters for you:
I’m in a class with you next week and you said I should come read this post before class starts. Ok, done. Do I get points now?
Where do I go to drop your class?
I’m going to be in this class and this sounds really cool. What I’m most excited about is …
This really sucks. The worst part is …
… and I think you should update your online store so that people are warned about the danger of these particular cucumbers…. oops, I thought I was typing in the comment section of a different tab
How can cucumbers be dangerous?
Let me tell you my cucumber story …
Between jamboard and google drawings I think the most interesting differences are …
This is blatantly stealing from ….
I love video. How are you planning to do attendance checks if you can’t see their smiling faces?
I hate video, thanks! However there’s one thing I think I’d miss …
I’ve started a project that brings me joy. I’m hoping to help spread that around!
I was looking around for ways that I could support physics teachers who were working so hard to teach during this pandemic. I was reflecting on how I miss the interactions and feedback I used to be a part of during the Global Physics Department heydays and I settled on trying to get a little taste of our old “submit a video of your teaching and we’ll give you feedback.”
So for over a month now I have committed to spending part of every (work)day making a reaction video to a video a physics teacher has made public. You can see the full playlist (37 videos long as of this morning) here. I look for videos made by teachers that don’t have students in them (for privacy reasons, even if they’re public) and that are lectures, homework solutions, or worked examples. I don’t tend to react to “welcome to my class” videos. My guiding principles are:
Lift teachers up
Share interesting/funny anecdotes about my teaching and physics in general
Open up opportunities for fun conversations about teaching
For the first one, I will often re-read one of my favorite posts about academic bullying. There I talk about how hard newer teachers have it when they run into online folks who seem to have it all figured out. They can be quite intimidated and find sharing their struggles to be difficult. So I figured that if I’m nearly uniformly positive and supportive of their work I can be helpful. I guess you can judge for yourself.
The second bullet comes pretty naturally. These awesome teachers show me cool solutions to problems I’ve had in the past and I’m happy to share funny stories about lessons I’ve learned. I also find that I often tie in ideas of how physics is used/seen in the wild because the teachers prime me for that.
This post is really about the third bullet above. While I’ve had a little interaction on twitter and youtube comments, I would love to talk with folks about teaching. I tend to seed each video with questions I still have about different ways to approach things and I’d love to hear more about what folks think.
So I thought I’d try a slightly different approach. In addition to randomly searching youtube for vids to react to, I thought I could let people volunteer themselves, both to have me react to them and to be willing to debrief with up to four other physics teachers I’ve reacted to. A twitter friend, @TadThurston, did exactly that, and we have since had several conversations, both on twitter and through a google meet call, that have been so fun.
So I’m proposing that folks use this google form to volunteer and once I react to them I’ll reach out to them along with the four other people I react to in the same week to try to schedule a “physics debrief” where we can talk about physics teaching and lift each other up.
Thoughts? Here are some starters for you:
You reacted to me and I thought it was great. What I especially liked was . . .
You reacted to me and I sent you a cease and desist order, did you get it yet?
This is cool, but I think you should also consider . . .
This is a brazen rip off of . . .
Can you handle vids where I walk students through how to use python? (yes)
I’ve watched a few of these and your breathing is really loud
I really like it when you notice things like the tech we’re using
Get a green screen, will you!
This doesn’t sound like work, stop using work time to do it (I know what your office looks like)
Can I request you to react to several vids? (sure!)
Ok, I know that’s a weird title, but bear with me, this has some fun stuff in it, including some things I still need help with.
The basic idea is that Planck’s solution to blackbody radiation offers an interesting way to view the quantization problem with the electoral college. We’ll have some fun tangents along the way.
Blackbody radiation is all about describing the (mostly infrared) radiation coming from a hot thing. I’ve written a little about it before but really there’s only a few things you need to know:
Hot things give off radiation
A hot thing with a cavity inside with a small hole to the outside is the easiest to model
There were two 19th century physics ideas that most people brought to the analysis.
1. We know how to count how many standing waves can exist inside a cavity (really the number between some very tiny range of frequencies)
2. We assume that all modes of energy (including standing waves) play well together so they all end up with the same average energy, namely something proportional to temperature (the proportionality constant is called the Boltzmann constant and we traditionally use ‘k’ for it).
Putting both of those ideas together led to something very strange. Together they predict an infinite amount of energy as you go up into the ultraviolet part of the spectrum (the famous “ultraviolet catastrophe”).
Since that’s not found experimentally, Planck reasoned that one or both of the 19th century ideas had to be wrong. He decided to go after the second one with a very simple (but very strange at the time) idea. Namely that standing waves couldn’t have just any amount of energy. Each one could only have an integer amount of some base energy that happened to be proportional to its frequency: E = nhf, where n could only be an integer and h was eventually named Planck’s constant.
While this sounds like a fun mathematical approach, it’s interesting to note just how weird it is. It means that when you’re playing jumprope, you can only set the height of the main peak (where the jumper is) to a set of possible amplitudes. Weird.
So how did Planck’s approach avoid catastrophe? Well, the answer is what eventually gets us to the electoral college, so thanks for bearing with me. The higher frequency standing waves couldn’t have an average of kT energy in them because that’s not even enough for the integer n to be 1. Basically if you gave them the lowest (non-zero) energy they’re allowed to have, they’d screw up the average. So they get frozen out and don’t get to play. If they can’t play, they don’t cause the catastrophe.
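You can see that freeze-out numerically from Planck’s average energy for a quantized mode, ⟨E⟩ = hf/(e^(hf/kT) − 1). Here’s a quick sketch working in units of kT, with x = hf/kT (the function name is mine):

```python
import math

def planck_average(hf_over_kT):
    """Average energy of a quantized mode, in units of kT:
    <E>/kT = x / (e^x - 1), where x = hf/kT."""
    x = hf_over_kT
    return x / math.expm1(x)  # expm1 keeps small-x values accurate

# low-frequency modes recover the classical equipartition value of kT,
# while high-frequency modes are frozen out with almost no energy
print(planck_average(1e-6))  # very close to 1 (i.e., kT)
print(planck_average(20))    # essentially zero
```

The high-frequency modes can’t afford even one quantum, so they sit out, and the total energy stays finite.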
What’s the connection to the electoral college? As things stand now, each electoral vote does not represent the same number of people. The reason is that our election system can’t tolerate the “freezing out” approach and instead rounds things up to the nearest integer. Basically all the states are treated as the same type of standing wave, but their base energy (base population in this analogy) is set to roughly the whole US population divided by 538 (the 435 House members plus 100 senators, plus 3 electors for DC). These days that’s roughly 600,000 people per electoral vote. The problem is that some states don’t have that many people (actually 3x that many, since every state gets 2 senators and at least one representative). So they round up, and that means those states get a larger impact on the vote. Actually, the fact that each state gets 2 for free from its senators already skews things, but one solution is to make the base count much lower so that California gets a ton more electors while tiny (in population) states keep their 3.
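For scale, the rough arithmetic behind that paragraph, assuming a US population of about 330 million:

```python
# rough people-per-elector estimate (population figure is an assumption)
us_population = 330_000_000
electors = 538  # 435 representatives + 100 senators + 3 for DC
people_per_elector = us_population // electors
print(people_per_elector)  # roughly 600,000
```

A state with well under 3 × 600,000 people still gets its 3 electors, which is exactly the rounding-up effect described above.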
The rest of this post details some of the strange things I ran into when trying to simulate some of this. If all you care about is the electoral college stuff, there’s not a lot more below. However, if you’re into teaching things like quantum physics and Blackbody radiation, read on because I need some help!
The second 19th-century idea above was a really cool result when people first put it together. The derivation involves a pretty nasty integral (really the ratio of two ugly integrals) but ends up with the amazing result that all energy modes share the same average energy: kT. Amazing. But as I was thinking about this post and about doing some simulations, I figured I wouldn’t need to explain the nasty integrals since I could likely just show some fun simulations with the average energy working out.
That’s when I hit a snag!
I figured I could do some early statistical tests of my simulations by checking that they followed the Boltzmann distribution. What’s that, you ask? Consider a system of lots of particles, each of which can be in a random energy state, except that the sum of their energies needs to be a constant: the total energy in the system is fixed. If you reach in and grab a particle, Boltzmann tells you the probability of finding that particle with energy E: it’ll be proportional to e^(-E/kT). The proportionality constant is found by ensuring the total probability over all energies is 100%, hence the second nasty integral I mentioned above (for normalization).
So a great test of a simulation of particles with random energies (where you fix the total energy of all the particles to be a constant) is to make sure that the lowest energy states are the most probable and that their probability distribution is (roughly) exponentially decaying.
Well, when I tried to put such a system together, I found that most approaches didn’t follow the Boltzmann distribution!
Ok, so it’s your job to put together a collection of particles with random amounts of energy so that their total energy adds up to a fixed constant. How do you do it? Note that while you can also tackle this where you let the particles have any energy value, we’re jumping right into the quantum approach where the total energy is a (large) integer and each particle has to have an integer level of energy. Note also that I’ll likely switch back and forth between particles and energy on the one hand and buckets and balls on the other.
Here are the 4 ways I’ve tried to solve this problem:
Method 1: Stars and bars: Imagine laying out the total energy as an array of cells. Now choose N-1 cell borders randomly (feel free to choose the end points). Then note that there's always one boundary on each end, so that really it's N+1 boundaries. Between any two successive boundaries is the energy of a particle. With N+1 boundaries that gives you N particles, all of which have an integer number of cells (or units of energy) in them.
Method 2: For each unit of energy, randomly select a particle for it to go to (or a bucket for each ball). Then just look at each particle (each bucket) to count how many ended up in there.
Method 3: Grab a particle and randomly give it energy ranging from zero to the max energy. Then move to the next particle and give it a random amount from zero to whatever's left after the first particle. Repeat until you're out of energy. If you've considered fewer particles than the total number by that point, just set the rest to zero. If you run through all the particles with energy left over, start over.
Method 4: For each particle generate a random integer between zero and the total energy possible. Then add up all the energies. If the sum equals the allowed total energy, keep it. If not, try again (this one is really slow).
Which one do you like? I've been having some fun conversations with folks on twitter about this along with looking up suggestions on various pages online. Google searching seems to run into method 1 a lot, while most of my physics buds like method 2 the best (thanks to my friend Gillian for first suggesting this way – I felt dumb that I'd spent so much time on method 1 before moving on to that one).
For me I think I like method 2 the best. It seems to be the most random, and it runs nice and fast, though you do have to do some tallying. Method 1 has a great visual, and is called stars and bars because people have been typing things like |**||***|*|***| for a long time when teaching about probabilities. Method 3 felt like a way to avoid the immense waste of time that Method 4 represents.
So, which follows Boltzmann? That was my big question. Honestly my guess was “all of them!” but, well, I was wrong:
Here’s the code for each method:
import numpy as np
from numpy import random

def method1(buckets, balls):
    # bars and stars method: sort the random borders, differences are the counts
    borders = np.sort(random.randint(0, balls + 1, buckets - 1))
    return np.diff(np.concatenate(([0], borders, [balls])))

def method2(buckets, balls):
    # assign each ball a random bucket, then count the balls in each
    ballassign = random.randint(1, buckets + 1, balls)
    return np.array([np.count_nonzero(ballassign == i) for i in range(1, buckets + 1)])

def method3(buckets, balls):
    # randomly put some balls in first bucket, move on until you run out
    while True:
        cur = np.array([], dtype=int)
        curballs = balls
        while curballs > 0 and len(cur) < buckets:
            cur = np.append(cur, random.randint(0, curballs + 1))
            curballs = balls - cur.sum()
        if curballs == 0:  # pad any untouched buckets with zeros
            return np.concatenate((cur, np.zeros(buckets - len(cur), dtype=int)))
        # otherwise we ran out of buckets with energy left over: start over

def method4(buckets, balls):
    # try buckets random balls until sum is correct balls (really slow)
    while True:
        trial = random.randint(0, balls + 1, buckets)
        if trial.sum() == balls:
            return trial
What the actual heck?! Why don't they follow Boltzmann? Only method 4, the slowest (by far!) does it. Most of the rest of them way undercount zeros (meaning that if you randomly grab a particle after running this 100,000 times you find zero less often than you should). Lots of my twitter and fb buds have lots of explanations. Most have to do with counting microstates but not multiplicities (for you real statistics nerds). Here's an example: Consider method 1 with 5 units of energy and only 3 particles. There are lots of possibilities, but let's only consider these four (where the numbers are where the two (N-1, remember) boundaries were randomly placed): [0,0], [5,0], [0,5], [5,5]. When you remember the two boundaries that are always added, and remember that you actually have to sort the random numbers before taking the differences, you get [0,0,0,5], [0,0,5,5], [0,0,5,5] and [0,5,5,5]. That leads to particles with 0,0,5; 0,5,0; 0,5,0; and 5,0,0. Do you see how you're over-generating (0,5,0)? Yeah, that sucks.
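You can see the over-generation by brute force, too: enumerate every possible pair of random borders for 5 units of energy and 3 particles and tally which energy lists come out. A quick sketch of that check:

```python
from collections import Counter
from itertools import product

E, N = 5, 3
tally = Counter()
for borders in product(range(E + 1), repeat=N - 1):
    b = sorted((0,) + borders + (E,))  # add the two fixed end boundaries and sort
    tally[tuple(b[i + 1] - b[i] for i in range(N))] += 1

# the middle-heavy state shows up twice as often as its edge-heavy cousins
print(tally[(0, 5, 0)], tally[(0, 0, 5)], tally[(5, 0, 0)])  # 2 1 1
```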
Two fixes my friends have told me about:
Fix the stars and bars (method 1) thusly: Make a bag of N-1 bars and Etotal stars. Then randomly draw things out of the bag. Then do the work above. To see the difference, consider a system with 10 particles and only 1 unit of energy. That would mean 9 bars and one star. Method 1 would generate all the bars only on 0 or 1, leading to the one star being somewhere in the middle. In fact, as you can see above in the [0,5] conversation, you'd be quite unlikely to find the particle in the first or last bin. But with this correction, since the bars and stars are all jumbled together, you're just as likely to get the star at any location. My brain still hurts about this one but I really appreciate my buddy Craig for helping me see it.
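Here's a sketch of how I'd code that bag version (this is my own reading of the suggestion, so the details are assumptions): shuffle balls stars and buckets-1 bars together, then count the stars between successive bars:

```python
import numpy as np
from numpy import random

def method1_fixed(buckets, balls):
    # bag of stars (1s) and bars (0s), all jumbled together
    bag = np.array([1] * balls + [0] * (buckets - 1))
    random.shuffle(bag)
    # the bars cut the bag into exactly `buckets` runs; count the stars in each
    cuts = np.flatnonzero(bag == 0)
    edges = np.concatenate(([0], cuts, [len(bag)])).astype(int)
    return np.array([bag[edges[i]:edges[i + 1]].sum() for i in range(buckets)])
```

Every arrangement of the bag is equally likely, so every ordered way of splitting the energy is equally likely, which is exactly what the uniform-microstate assumption wants.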
Fix method 2 by just making copies of any state you find. The number of copies you need to make is the multiplicity you'd expect from the permutations among all the particles. An easy example: if you get (0,0,5), make 3 copies of it so that you'll get the same number of zeros as you would with all the permutations: (0,0,5), (0,5,0), and (5,0,0). I don't particularly like this solution as you're making a weird manual correction, but I believe it produces the Boltzmann distribution.
What does nature do?
To me this is the big question. All the methods discussed are ways to produce viable states that make sure to have the right energy. The typical derivation talks about how we assume that any natural system will randomly access all the possible states with equal probability, leading fairly directly to the Boltzmann distribution (or at least an approximation that gets better the bigger your system is). But here's where I'm stuck. If a "real" system has some distribution of energy into the various particles at one moment of time, what's the best model to come up with a different distribution in the near future? Honestly for me it's method 2. You just let every unit of energy go find a new home! But that doesn't have the right distribution and so wouldn't lead to all the normal statistical mechanics results we expect.
Method 4? That’s weird. That would be saying that every particle takes on random energies and it all get locks in only when the total energy is right.
Correction 1? I guess that works for me, but I still feel Method 2 makes more sense physically.
Correction 2? That’s just weird. All the energy quanta go find a new particle home and then somehow the system rapidly cycles through all the permutations.
My brain hurts on this one. I’d love some suggestions below.
Modeling a Blackbody
Ok, I'm not modeling the whole thing. Really I wanted to try modeling a system made up of particles that have different minimum energy spacings. Some can take on 1, 2, 3, … units of energy while others are limited to 0, 4, 8, 12, … or 0, 3, 6, 9, … or 0, 117, 234, 351, … units of energy.
What am I hoping to see? What I'd love to see is an approximation of Planck's prediction of the average energy per type of particle. That's given by ⟨E⟩ = ε / (e^(ε/kT) − 1), where ε is the energy spacing for that type of particle.
So how can I model that? Well, here’s what I tried: I randomly assign each unit of energy to a particle. Then I round the energy in each particle down to the nearest level it can actually handle. Then I repeat with the leftover energy. I keep doing that until all the energy is used up. Here’s the code:
import numpy as np
from matplotlib import pyplot as plt
from numpy import random
from scipy.optimize import curve_fit

# this gives a list of quanta in each bucket, not balls.
# so [1,2,3] for buckets that can hold [1,2,3] energy means [1, 4, 9] energy in each
def redistribute(current_quantas, current_energy, buckets, total_energy):
    while current_energy > 0:
        # randomly assign all balls to a bucket
        ballassign = random.randint(0, len(buckets), current_energy)
        # find out how many balls in each bucket
        counts = np.array([np.count_nonzero(ballassign == i) for i in range(len(buckets))])
        # round down for each to the nearest full quanta in each bucket
        # and add to the current quantas in each bucket
        current_quantas = current_quantas + counts // buckets
        # find the leftover energy; repeat until all energy is distributed
        current_energy = total_energy - np.sum(current_quantas * buckets)
    return current_quantas

# example parameters (these particular values are just for illustration)
buckets = np.arange(1, 11)  # energy spacings 1 through 10
num_buckets, num_balls, loops = len(buckets), 1000, 500
test=np.array([redistribute(np.full(num_buckets,0),num_balls,buckets,num_balls) for i in np.arange(loops)])
and here are a few examples looking at the average energy per mode with a Planck fit (everything is scaled so that 1 is the expected kT average energy)
Ok, that's cool and all, but here's the weird thing. This is basically based on Method 2, which I've shown above doesn't follow the Boltzmann distribution! My brain hurts yet again. I'd love some collaborators who could help me reason this out.
You made it! This was a long one with lots of twists and turns but it helped my brain to write this up. I’m really hoping folks can help me with some of the loose threads. Here are some starters for you:
I love this. My favorite method is . . .
This is dumb. Why don’t you just read . . .
You have to do method ___ or this is all crap.
You’ve got the electoral college wrong. A better way to think about it is . . .
So what temperature is our electoral system?
Wait, you used python for this. Are you feeling ok?
Your mixed quantum system has a major flaw . . .
I love your mixed quantum system. Can you also . . . ?
Did you just draw those images on your phone while also typing all of this and doing the python calculations on your phone? Wow, everything looks like a nail for your hammer doesn’t it?
I’ve been thinking a lot about back channels in meetings and classes lately. Some of my thinking has been seeded by some fun and interesting experiences recently and some has been due to some new tech I’ve seen. The upshot: I love them, but only if the facilitator/teacher/presenter can control themselves.
My first real experience was way back with the Global Physics Department, when the chat was where most of the awesomeness happened (including arguments about what’s the best time zone). One fond memory is trying (sometimes in vain) to convince the guests to ignore the chat, lest they get distracted. In the chat people would provide all kinds of information that was tangentially related to what the speaker was talking about. Some of it was joking, some was really great links to fantastic resources. I loved it and I’ve found that I try to use chat in similar ways in the meetings/classes I’m in now. Of course some meetings aren’t really set up for such side banter, so I wanted to try to get my thoughts down here about the best times to use back channels and how you should think about setting them up (or not) and supporting them (or not).
The presenter is privileged
One thing I’ve noticed quite a bit is the very different role the speaker/facilitator/presenter plays in the chat. Above I mentioned how they can get distracted, and true side banter really doesn’t have the presenter in mind as the audience. I know lots of people who don’t really get distracted so much as they pride themselves in paying attention to the chat and responding to it. The problem can be, however, that if the (privileged) presenter answers all the questions, the students/participants don’t really develop a supportive community.
An example might be someone asking whether the technique being presented works well in classrooms with fixed furniture. Likely they’re asking because they hope someone has tried it and just wants to hear it from someone who is at their same stage but with different logistics. But if the speaker answers, the answer feels authoritative, especially if the vibe is that the presenter answers all the questions.
Of course not paying attention to chat can let bad ideas propagate, but if the participants are really hoping to ask and share with each other, I think that's a win, even if some of the ideas go off the rails a little.
That leads me to my new favorite tech: Google Meet's Q&A. It works in parallel to chat and allows important questions to not get lost in the banter. Plus the participants can up-vote the questions! Very cool.
I think in my next class I’m going to let the students know that I’ll pay close attention to the Q&A (though I’ll let them take some time to do some up voting) but that I’ll just generally ignore the chat unless they ask me to pay attention. Of course if other students are presenting or talking or whatever, I’m sure I’ll fall back into my joking approach in chat, but I think that’s ok. I also might just add my own questions to the Q&A.
One really cool thing about the Q&A in Google Meet is that after the meeting you get a report detailing all the questions, whether they were answered, who asked it, and how many upvotes it got. Awesome.
It’s too formal
Sometimes you’re in a meeting that feels too formal to start up some banter/tangential info. That happened to me today in our faculty meeting. What I’d love is to have a completely separate back channel, but it seems like you have to convince a bunch of folks to jump onto something else. At my google school I’m super intrigued by having a google room (that new annoying thing in your gmail window) with folks that I’d like to do some back channeling with. We’ll see.
What do you think? Do you like backchannels? Here are some starters for you:
Why do you sometimes put a space in backchannel?
I love back channels. My favorite thing to do is …
I hate backchannels. I especially hate it when …
Being in a meeting with you is fun, I especially like it when …
Being in a meeting with you is terrible. What I especially hate is …
How do I get Q&A to work in my Google Meet?
Why don’t you talk about zoom?
I use _____ for an external backchannel. It works great but I wish …
I hate it when presenters get distracted. The funniest was when …
I love distracting presenters. What’s wrong with you?
I feel that if the students are typing in the chat, they’re not paying attention to what we’re doing. How do you deal with that?
Would you like students using a backchannel in your in-person classes? (answer: yes!)
I love google rooms!
I wish I could just pause the whole meeting and add in my banter without distraction. Kind of like the crazy dude in these awesome physics reactions videos!
I, like most people, go to a lot of meetings. I’ve developed a style that I like to use when running meetings but I realize that I can always get better. I thought I’d put down some of the things that I like and some things that I struggle with and see what you wonderful folks think about them.
Years ago I learned from some 3M folks that a social check in at the beginning of a meeting can help the team develop and can help people look forward to the meeting. I always really liked them and so if I run a meeting I always start with one. Typical prompts include:
Favorite way to jump into a swimming pool
If you were a boat, what kind of boat would you be?
a skill you would like to develop
If you’ve got a large group, it’s usually best to go with a binary choice like “raking vs shoveling” or “0 degrees or 100 degrees”. If it’s a smaller group, you can do longer things, but this is where I hit my first stumbling block: some people really think these are a waste of time. In my typical meetings these take about 5 minutes, so I agree it’s a big chunk of time. I enjoy them so much, though, that I always schedule them. My question: how can I be more sensitive to the people who don’t like taking the time?
I like to make sure that there are opportunities to make changes to the agenda. This starts with posting the agendas early enough to let people think about them. I tend to try for 2 days but I'm not great at it. Then in the meeting I like to make sure people can make changes to the agenda early and with some democratic approaches.
One thing I’ve noticed is that often folks want to just do their new agenda item right then, as opposed to finding room for it in the normal agenda. I’m not sure why this bothers me so much, though I would guess it’s because I’m worried about time.
Action item check in
When I write my agendas I try to go back to my notes to see what action items were assigned at the last meeting. I then put them in the agenda, even if it’s clear to everyone that they’re done. My thinking is that the accountability is always clear and that we can celebrate the things that got done. Of course I’m also making sure things don’t get lost. I try not to do any shaming when someone hasn’t done something, but I like everyone to know what still needs to get done.
The pitfalls here are that the mini reports can really take some unexpected time, but they’re often topical and timely. I think sometimes folks feel like they’re calling people out, so I’d love some thoughts on how to soften that.
If you pair this with sending the agenda out a couple days in advance, I’ve noticed that a lot gets done during those two days. Certainly that’s true for my action items!
This started a while ago for me and now I’m hooked. Before getting to the meat of the agenda, I do the “moments” section. For now I use 4 different moments:
How do I
This is great
Really I just started with “Oh shit” because that particular group would often have some emergencies come up that the whole group could help with. I’ve found that if people know those are there, they know they can bring up their quick things without having to necessarily add something to the agenda.
Too profane? I had to do “Oh shoot” for one group I was in.
One big drawback is that these can take up some serious time, but my opinion is that if they’re that pressing, they likely need the time. Your thoughts?
Then I get to the meat of the meeting. When I remember, I try to put a time estimate for each. That tends to help the group stay on track, but I know sometimes folks get crabby if the time estimate is clearly too low to get anything decent done.
Some things that bother me about normal agenda items are ones that get the group doing things that aren't efficient. My favorite pet peeve is group wordsmithing. I used to also dislike group editing, but that to me is much preferable to wordsmithing. I think it's better to just make clear the goals of the passage and then to assign someone to write it. I assume that those that like/want to do wordsmithing just want to get it done, but it's rare that I enjoy the experience. It's also interesting to see what happens when people with very different typing speeds work on a collaborative document.
Action item round up
I’m terrible at this (though I tend to take decent notes) but I want to try to do a better job at the end of meetings making it clear what has been decided about next steps. Using the “assign to” feature in google docs works great when I’m taking minutes, but I think it’s probably good for everyone to hear what they’ve committed to before the meeting ends.
Set the next agenda
I’m terrible at this. I almost never do it. But I think I’d like to try getting better at it.
I’ll admit it: I mostly wanted to try out the wordpress app on my phone now that I have my nexdock so I can treat my phone like a laptop. But this is a topic I’ve wanted to get down for a while, so it was a good excuse.
So, some thoughts? Here are some starters for you:
I love going to meetings with you. I just wish that you . . .
When I see you’re going to be there, I make up excuses not to go.
Here’s some ideas for check ins . . .
Here’s some things to avoid with check ins . . .
Wait, your phone is powering a laptop?
This all feels way too rigid. You need to relax and just let things flow!
My meetings are all dominated by “oh shit”. Why do you even bother scheduling anything else?
You should have crowdsourced the wordsmithing of this post.
I hate action item roundups. I know what I’m supposed to do and I don’t like getting called out.
Do your current online meetings change any of this?
This post describes a way to calculate tunneling probabilities for one dimensional quantum barriers. This method is easy to code up, and is very fast.
Consider the following barrier. If your energy is less than 3 eV, you’ll just reflect off. But above 3, weird things happen. How do you calculate the reflection and transmission coefficients?
Quantum tunneling is a favorite conceptual topic for students. It is something so very different from what is expected classically, yet it can be described easily by invoking memories of throwing balls at walls. Students are encouraged to find connections with frustrated total internal reflection in order to further cement their understanding of matter as waves. Both in optics and quantum mechanics, instructors can note how the continuity of the wave function and its derivative across boundaries implies that the wave cannot abruptly go to zero. This enables students to see why a (typically small) portion of the wave can tunnel through a barrier.
On the other hand, the quantitative aspects of tunneling are a different story for students. As usual, very simple situations like the square barrier can be calculated by hand, but more typical barriers found in the lab require the use of a computer and an algorithm that can apply the conceptual physics they have learned (the boundary conditions) in an iterative fashion.
In this post I'll talk about a different algorithm to calculate the tunneling probability of a particle with known energy through an arbitrary one-dimensional potential barrier. It is both fast and accurate, and it uses tools that most students are familiar with from their studies of the Schrödinger equation. Specifically it involves the direct integration of the Schrödinger equation in a manner very similar to the shooting method employed to find the eigenstates of an arbitrary potential well. However, instead of needing to adjust parameters to find a particular eigenstate, students can directly inspect the results for any given particle energy and determine both the tunneling probability and the shape and nature of the wavefunction inside the barrier.
The most common approach to calculating tunneling probabilities is to consider the barrier to be a collection of square barriers. In the WKB approach, only the exponentially decaying portion of the wavefunction is kept and integrated through all the slices (Simmons 2007). In the matrix transfer method, the boundary conditions among all the slices are carefully calculated (Alexpoulos 2007, Mendez 1994, Morelhao 2007 (pdf), Probst 2002, Zhang 2000). Specifically, at every boundary between the square slices the wavefunction and its slope are continuous. In each slice the wavefunction is composed of two components: either a right and left traveling wave with a wavelength determined from the kinetic energy (the difference between the total energy and the barrier height); or a growing and decaying exponential whose growth rate is determined from the (negative) kinetic energy. Often these boundary condition equations are described in a matrix formalism as they are simple linear equations relating the incoming and outgoing wavefunctions along with the barrier heights of the slices. The effect on the incoming wave by the barrier is then modeled by a single matrix that can be used to solve for the tunneling probability.
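For comparison purposes, here's a minimal sketch of that slice-and-match idea in Python with ħ = m = 1 (the function names and slice setup are my own; the papers above give the full treatment). Using complex wavevectors lets the same formula handle both traveling and evanescent slices:

```python
import numpy as np

def boundary_matrix(k, x):
    # wavefunction value and slope of A*exp(ikx) + B*exp(-ikx) at position x
    return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                     [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

def transfer_T(E, V_slices, x_bounds):
    # V_slices: potential in each slice; x_bounds: the slice boundaries
    # (V = 0 outside the barrier); k is imaginary wherever E < V
    V = np.concatenate(([0.0], V_slices, [0.0]))
    k = np.sqrt(2 * (E - V) + 0j)
    c = np.array([1.0 + 0j, 0.0 + 0j])  # only a right-going wave on the far side
    for i in range(len(x_bounds) - 1, -1, -1):
        # match value and slope across boundary i, working right to left
        c = np.linalg.solve(boundary_matrix(k[i], x_bounds[i]),
                            boundary_matrix(k[i + 1], x_bounds[i]) @ c)
    return 1 / abs(c[0])**2  # T = |F/A|^2 with F = 1

# single square slice: height 2, width 1, probed at E = 1
T = transfer_T(1.0, np.array([2.0]), np.array([0.0, 1.0]))
```

For a single square slice this reproduces the textbook analytic transmission, which makes a handy sanity check before slicing up an arbitrary barrier.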
There are also some approaches in the literature that have more directly integrated the Schrödinger equation, but all do a forward propagation as opposed to the backward one described below (Ban 2000, Yunpeng 1996). These approaches use both numeric and analytical methods to determine the phase of the incoming wave that enables solely a right-traveling wave in the transmission region. The method below does not require such adjustments and simply gives both the wavefunction in the tunneling region and the tunneling probability after a single direct integration.
Consider a tunneling situation as laid out in the figure at the top of this post. The first and third regions have a constant potential while the middle region can have any form, including discontinuities and regions where the particle is classically allowed. Region I can have right- and left-traveling waves:

ψ_I(x) = A e^(ikx) + B e^(-ikx), with k = √(2mE)/ħ,
while Region III only has a right-traveling wave:

ψ_III(x) = F e^(ikx).
Using a fourth-order Runge-Kutta technique I numerically integrate the real and imaginary parts of the Schrödinger equation from the right edge of Region II (x=L) to the left edge (x=0). Note that since the Schrödinger equation does not have any single derivatives in it, a Numerov approach can also be used. Note also that in Mathematica you can integrate complex numbers with just one call to the Runge-Kutta solver (NDSolve). Since both the wavefunction and its slope will be the same on both sides of the boundary between Regions II and III, the initial conditions are determined by arbitrarily setting F=1 and using the form from Region III: ψ(L) = e^(ikL) and ψ′(L) = ik e^(ikL).
To determine the transmission probability, T, we need to find the value of A: since the potentials (and hence k) match in Regions I and III, T = |F/A|² = 1/|A|².
This is done by investigating the value of the wavefunction and its slope at x=0 where, according to the Region I form above, A = (ψ(0) + ψ′(0)/(ik))/2.
Once again we have used the equality of the value and slope of the wavefunction across a boundary.
Once the Schrödinger equation has been numerically integrated, the transmission probability is easily calculated.
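Here's a sketch of the whole backward-integration procedure in Python (I did the original in Mathematica; here scipy's Runge-Kutta solver stands in, with ħ = m = 1 and a simple square barrier so the answer can be checked against the analytic transmission):

```python
import numpy as np
from scipy.integrate import solve_ivp

E, V0, L = 1.0, 2.0, 1.0  # particle energy; square-barrier height and width
k = np.sqrt(2 * E)        # wavevector in Regions I and III (hbar = m = 1)

def schrodinger(x, y):
    psi, dpsi = y
    V = V0 if 0.0 <= x <= L else 0.0
    return [dpsi, 2 * (V - E) * psi]

# start at x = L with only a right-traveling wave (F = 1) and integrate backward
psiL = np.exp(1j * k * L)
sol = solve_ivp(schrodinger, [L, 0.0], [psiL, 1j * k * psiL], rtol=1e-10, atol=1e-12)
psi0, dpsi0 = sol.y[0, -1], sol.y[1, -1]
A = (psi0 + dpsi0 / (1j * k)) / 2  # incoming-wave coefficient from psi, psi' at x = 0
T = 1 / abs(A)**2

# analytic square-barrier transmission for comparison
kappa = np.sqrt(2 * (V0 - E))
T_exact = 1 / (1 + V0**2 * np.sinh(kappa * L)**2 / (4 * E * (V0 - E)))
```

Swapping in any V(x) inside `schrodinger` handles an arbitrary barrier; nothing else changes.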
This method employs many techniques used when teaching the numerical solution of eigenstates for arbitrary wells. In those situations students are taught to employ the shooting method to find energies that produce physically allowable wavefunctions (like I've posted about before). The major difference in this new application is that both the real and imaginary parts need to be integrated, as is illustrated in the figure below. If you only do the real part (as is often done in the shooting method application) you are unable to calculate the transmission coefficient, since the expression for A above needs the complex wavefunction.
Examples and Comparisons
The transmission coefficient (T) as a function of particle energy for the potential shown in Figure (1) is given in Figure (3) below. The top curve is the result of the current method while the lower lines use the transfer matrix method with varying numbers of slices of Region II. Ultimately both approaches converge to the same result at every energy.
It is interesting to compare the transfer matrix method with the new method where the number of slices is compared with the number of steps that the Runge-Kutta method employs. The transmission probability versus energy for the arbitrary barrier shown in Figure (1) is given for the total step number ranging from 5 to 12 in the inset of Figure (3) above. The curve with 300 steps is also shown. It is clear that the number of steps needed for the Runge-Kutta method is far less than the number of slices needed in the transfer matrix method to achieve the same accuracy. Note, however, that one should really compare the number of calculations involved when doing these comparisons. A fairer comparison would need to multiply the number of Runge-Kutta steps by four, though this still shows that the current approach compares favorably to the transfer matrix method.
As an example to show the pedagogical uses of the current method, I consider resonant tunneling. Specifically I compare the wavefunction in the barrier region to the eigenstates expected for a simply-shaped barrier.
Consider the potential shown below.
This parabolic potential barrier is a parabola centered at x=1 but chopped off at x=0 and x=2. The analogous potential well that is not truncated has resonant energies at E_n = (n + 1/2)ħω, where ω is set by the curvature of the parabola.
The transmission probability as a function of energy is shown here:
The resonance peaks shown correspond very nearly with the eigenenergies of a parabolic well. At lower energies, where the resonance peaks are very sharp, the energies are the same as the eigenenergies. As the peaks become broader, the resonant energies become larger than the eigenenergies, by as much as 20% for n=11.
The reason the resonant energies grow larger than the eigenenergies as the energy increases is the boundary conditions that the wavefunction has to match at x=0 and x=2. This can be seen by comparing the resulting wavefunctions (both the tunneling wavefunction and the eigenfunction for the parabola) as seen here:
Close inspection of the wavefunctions near the boundaries shows the differences between the tunneling wavefunction and the eigenstate. While the eigenstate decays to zero in all cases, the tunneling wavefunction is forced to match the boundary condition at the right edge. At low energies there is little difference, as the exponential rise is very steep, but at higher energies the wavefunction has to bend more sharply to match, which explains the rise in energy compared to the analogous eigenenergy.
I have discussed a new method for calculating both transmission probabilities and wavefunctions for a particle tunneling through an arbitrary one-dimensional barrier. The approach is applicable at the undergraduate level as it uses common tools related to the shooting method for finding potential well eigenstates. It is fast and accurate and enables the study of complex phenomena like resonant tunneling.
Here are some starters for you:
This is really useful. I plan to use it in . . .
This is dumb, I’d never use it and here’s why . . .
This reads like an article you wrote for the American Journal of Physics that got denied with a reviewer saying since it was so easy to code up it wasn’t worth publishing.
Wait, so you integrate from right to left, I didn’t think that was allowed!
So you just assume that something makes it through and work back to see what could have caused it? Weird.
I’ve been thinking a lot about ways to make virtual classes and meetings as useful as possible. Certainly that’s what’s behind all my work with my synchronous dashboard (see here and here). These days I’m part of a team helping people prepare their fall classes (in person, hybrid, and online types) and I’m on a team planning a fully virtual New Faculty Workshop for Physics and Astronomy this fall. I’m also super excited to be meeting with an informal team put together by Stephanie Chasteen looking at virtual professional development. In this post I want to try to organize my thoughts around what I’ve been calling “privileging video” in virtual meetings.
I’ve been hearing and learning about a lot of really cool digital tools people can use in virtual meetings. But there’s always the thought that creeps into those conversations about how hard it is to both see people and interact with those tools. Certainly some people have multiple monitors and don’t have issues, but that’s not the majority of people that I interact with. That’s what I mean by “privileging video.”
Seeing people is great! You can tell if they’re really engaged and you can see the normal unspoken signs of confusion, amusement, frustration, etc. It’s the main reason colleagues of mine don’t like using my dashboard tool (especially when I force them to). It’s also the easiest way to take true attendance (as opposed to just seeing someone has logged in).
But the problem is that video takes up so much space on your screen that it crowds other tools out. Certainly most video meeting software vendors allow for tools other than video (chat, hand raising, polling, etc) but it's very clear that all of them privilege video. Just look at how chatting in Zoom or Google Meet takes an active click from the user. Or how the chat window can so easily be lost or covered, whereas they take great pains to ensure that the videos have primacy, or at least a clearly protected region of the screen.
Compare that to audio: If you’re using an online audio tool (or just using the audio of a video meeting) that tab doesn’t even have to be front and center. It doesn’t take up any screen space.
Not sure if an audio-only conference can be productive? Spend 5 minutes sometime watching teenagers using Discord to solve problems in a video game. You'll see that they are both engaging with each other and solving problems in the stuff that is taking over their screen. Discord provides easy audio, chat, and emoticons. And that's it! They assume you're using your screen for something else. That's why it got built in the first place. I happen to live with three of my own children who do this all the time. In fact, as I was developing my dashboard they kept telling me to just use Discord. They were probably right.
Sometimes folks will share their screen, shoving the vids of the participants to the side. That’s a better use of screen space, but it still severely limits collaboration. The rest of the folks can only watch and hope to occasionally interrupt the presenter. It’s interesting to look at the difference between when a presenter shares their screen showing a google doc and when, instead, everyone just logs into the google doc. Depending on what the group is trying to accomplish, each approach has its merits. The latter, however, gives much more agency to all the participants.
When thinking about teaching, it’s interesting to note that while teachers are used to seeing everyone’s face, students really aren’t. They see the teacher and perhaps their small group members (I’m talking about in person here) but they don’t normally have the ability to stare at the faces of all their classmates.
So I think I’m a little down on “privileging video” but I wanted to get my thoughts out there so, as usual, I can refine my thinking by bouncing some ideas off you.
Here are some starters for you:
I think I’m down on privileging video too. My biggest issue is . . .
I love privileging video. What you’ve forgotten about is . . .
Why do you sometimes not capitalize google?
I love your dashboard tool. Can you help me with it?
I hate your dashboard tool. When are you out of the dean’s office so I won’t have to use it any more?
Wait, your school is going to have in person classes?
Wait, your school is going to have online classes?
I think sharing my screen does give the rest of the participants agency. Here’s how . . .
If it weren’t for cool Zoom backgrounds I’d stop doing video meetings right now
Wait you mentioned Zoom, so can we use it?
I’ve used Discord and you’re right about . . .
I’ve used Discord and you’re way off base. What you don’t seem to realize is . . .