Maxwell to Snell

Today was a really interesting day in my optics class. We’re doing chapter 9 in our text and I wanted to make sure that I motivated the material well. What’s weird about this text is that it waits until chapter 9 to do ray optics. This chapter has the lens equation, curved surfaces, etc. You know, all the stuff you can do in general physics. So why does it wait until chapter 9? Well, because we’re trying to ground everything in Maxwell’s equations for electric and magnetic fields. Go back and look at your general physics book and you’ll likely find that there’s a big gap between things like Ampere’s law and Snell’s law. I learned so much in class today that I wanted to make sure I wrote it down, so here you go.

We did Snell’s law back in chapter 3. I love that derivation, because we show that the boundary conditions that Maxwell’s equations enforce lead directly to three things:

\omega_i=\omega_r=\omega_t

\theta_i=\theta_r

n_i \sin\theta_i=n_t \sin\theta_t

where i stands for incident, r for reflected, and t for transmitted. It’s a really cool argument that comes down to this: if

Ae^{i a \xi}+Be^{i b \xi}=Ce^{i c \xi}

then

a=b=c

as long as \xi can take on any value. If \xi is time, then you get the \omega equation above, which is the same as saying that the color going in is equal to the color going out. If the interface lies in a plane (which is crucial to this argument), then \xi can represent one of the spatial dimensions in the plane and you get the law of reflection and Snell’s law. Very cool! A direct path from Maxwell to Snell.
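One quick way to see why (a sketch, assuming none of the amplitudes, nor their sum, vanish): evaluate the relation and its first two derivatives with respect to \xi at \xi=0:

A+B=C,\quad aA+bB=cC,\quad a^2A+b^2B=c^2C

A little algebra shows those three can hold simultaneously only if a=b=c.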

But there’s a problem. If you don’t consider plane waves, you can’t do it! The math depends critically on both the wave being a plane wave and the interface being a plane. Break either of those assumptions and you lose some of the valuable math tools we’ve put together.

So today was all about looking at non-plane waves. First we considered the notion of a plane wave interacting with a non-plane interface. Going into the interface is fine, but the reflected and transmitted waves would definitely not be plane waves, so we can’t assume angle in equals angle out and Snell’s law. One thing we talked about was the notion that any curved interface can be zoomed in upon until the portion you’re considering looks flat. The thinking goes that then your picture looks like the plane-wave one and everything’s good. But the big problem is that \xi, acting as one of the spatial variables that span that mini-plane, can’t take on any value, since the plane does not extend forever. Therefore we can’t use that line of thinking to say that reflection and refraction work there.

So what to do? Well, let’s go back to the beginning. For plane waves, we assumed that the field was of the form:

\vec{E}(\vec{r},t)=\vec{E}_0 e^{i(\vec{k}\cdot\vec{r}-\omega t)}

and we checked to see what conditions would be in place if we enforced Maxwell’s boundary conditions at the (planar) interface. Now we need to try a more general version of the electric field. The plane (given by the \vec{k}\cdot\vec{r} part of the exponent) represents the locus of points that all have the same phase. If we want to consider a more interesting, curvy surface, we should put an expression for that into the exponent and see what happens:

\vec{E}(\vec{r},t)=\vec{E}_0(\vec{r})e^{i(k R(\vec{r})-\omega t)}

where R(\vec{r}) is a function whose contours represent surfaces of constant phase.

What we did was plug that into the wave equation (which comes, of course, from Maxwell’s equations) and make some assumptions about the smallness of the wavelength compared to the scale of any changes in the material. After a lot of work (see section 9.1 of the text) we get to the Eikonal equation:

\left| \vec{\nabla}R\right|=n(\vec{r})

This finally tells us that to find the next contour, you should take a step perpendicular to your current contour, with your step size proportional to 1/n (i.e., the steeper the hill, the closer the contours are). So now at least we can start seeing how the wave evolves.
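As a quick sanity check (my example, not the text’s): in a uniform medium with constant index n, R(\vec{r})=n\,\hat{u}\cdot\vec{r} satisfies the Eikonal equation for any fixed unit vector \hat{u}, and plugging it into the field above (taking k there to be the vacuum wavenumber) just reproduces a plane wave traveling along \hat{u}. So the chapter 3 plane-wave picture is sitting inside this more general one.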

But that still doesn’t get us to pencil-thin rays (like lasers) propagating and hitting things like lenses etc. To get there, we need to do a little more math. We can derive Fermat’s idea of least time by playing around with the Eikonal equation. We find that the path a ray would take between two points is the one that would take the least amount of time. Then, from that idea, we can get to Snell’s law by asking how light would navigate going from one point in one medium to another in a different medium (this is a pretty standard Fermat-based homework problem).
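In case you haven’t seen that homework problem, here’s the skeleton (with a made-up geometry: light travels from a point a height a above a flat interface to a point a depth b below it, the two points separated horizontally by d, and the ray crosses the interface a horizontal distance x from the first point):

t(x)=\frac{n_i}{c}\sqrt{x^2+a^2}+\frac{n_t}{c}\sqrt{(d-x)^2+b^2}

Setting dt/dx=0 gives n_i\,\frac{x}{\sqrt{x^2+a^2}}=n_t\,\frac{d-x}{\sqrt{(d-x)^2+b^2}}, and those two fractions are exactly \sin\theta_i and \sin\theta_t.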

Aha! Finally! We get Snell’s law (and, very similarly, the law of reflection) for zoomed in flat things that don’t have to extend to infinity. With that in hand, we can go off and do lenses and curved mirrors and have some fun (especially with ABCD matrices!).

So, here’s the path: Maxwell -> wave equation -> general form for a non-planar wave front -> assume small wavelengths -> Eikonal equation -> Fermat -> Snell. Awesome (at least I thought so).

What do you think? Here’s some starters for you:

  1. I’m in this class and I thought this was cool. Connecting everything back to Maxwell has really connected some ideas for me.
  2. I’m in this class and I thought this was a waste of time. I knew all this stuff worked because we’ve been using lasers, not plane waves, in lab all semester long.
  3. I’m in this class but wasn’t able to make it today. Can I print this out and turn it in for a standard?
  4. Could you, I don’t know, do some screencasts to fill in some of the gaps in the text?
  5. I like this. I’ve always been frustrated with the typical approach in general physics because . . .
  6. I think this is dumb. You’re making statements about the general derivation that aren’t true. We can get all these results quite easily and generally by . . .
  7. Why do you do the Eikonal stuff? You’re going to be hitting abrupt interfaces like lenses that break the assumptions built in!
  8. I like this because everything should be tied to Maxwell. How would you do that for . . .

Group digital lab books

Last summer I bought a LiveScribe Sky pen for my lab group, with the hope that we’d connect it to a group Evernote account that we’d use as a group lab notebook. Unfortunately, it didn’t work out. The Evernote part worked pretty well, but the LiveScribe connection never worked reliably, so it never became our go-to way of collecting information. I really wish I’d read a few more product forums about the Sky pen; it seems nearly everyone was disappointed with it. Oh well, live and learn, I guess.

So this summer I want to try a different, though similar, experiment. Now, dear reader, you should prepare yourself, for I will be making a suggestion that I and my group should use a Microsoft product. Specifically, I’m going to be promoting OneNote in this post.

Here’s what I want (numbered for reference, but not necessarily in priority order):

  1. Easy access by me and all my students on whatever devices we want to use.
    1. For me that means a Windows desktop/laptop and an Android phone.
    2. My students use lots of different things, though I don’t think I have a student who uses Linux at the moment.
  2. Digital handwriting
    1. including annotating digital artifacts
  3. Organized
  4. Easy to connect images, video, Mathematica files, etc.
  5. Easy way to share the notebook with future students/collaborators

Last summer’s approach did 1 and 5 great, 3 if we worked at it, and 4 once we decided on Google Drive for our other docs, but not 2 at all because the Sky pen didn’t work as advertised. Note that if it had, I think I’d still be using that system and not thinking about a new one. It’s interesting to note that LiveScribe now sells a pen that only works with Apple products.

A couple weeks ago I used some IT “sandbox” money to purchase and test out a Microsoft Surface Pro 2. I really like it, especially how well the digitizing pen works. What’s been interesting to me is to find that the best software on it for leveraging that pen seems to be OneNote 2013, which, as it happens, Microsoft has recently decided will be free for Windows and Macs. It also has apps for iPad/iPod and Android, so that takes care of number 1 above for me.

If you’re a fan/user of Evernote, you won’t be surprised by the way OneNote organizes things. You probably also wouldn’t be surprised by the usual set of things it can hook into its notes. However, I was pleasantly surprised by how well it takes in digital handwriting. Not only does it look great, but it’s searchable! In other words, it does some internal OCR on it, but you don’t have to have it replace your handwriting with text. The reason I don’t do the latter is that I really like the flow you can get with handwriting, including arrows, underlines, circles, etc. Converting to text seems to screw that up, and, if it’s searchable (and legible), who cares?

Also, it can do the LiveScribe thing where chunks of handwriting can be indexed in a larger audio recording, so it’ll work for group meetings too.

So how do I envision using it? I’ll have all the students get a free Microsoft account so that it’ll be easy to share the notebook with them. They’ll all have full editing capabilities. Last summer I was disappointed to learn that we needed to share a single Evernote account to pull that off, though I’ll admit that I’m going by what my students said, since I had them research that. OneNote lets me set different sharing for each notebook inside my one account.

Now that I have my Surface, I’ll donate my Wacom Bamboo tablet to the lab so that they can enter handwritten notes if they’d like. If they like it, I’ll buy a couple more for the lab. I like how the ~$70 Bamboo turns a non-touch desktop (which we have lots of in our labs) into a touch-sensitive device. It’s certainly how I did all my annotations before I had my Surface. My Surface will be portable and can be used in group meetings, but they’ll have OneNote and the Bamboo in whatever lab they’re working in. My guess is that they’ll want to type most end-of-day thoughts, but I know how useful sketching a plot can be. Also, equations are much easier in handwriting than anything else.

So they’ll take pics and videos, do screenshots of linked Mathematica files, and link and annotate journal articles, all in a single place where we can all stay up to speed. I only work ~20 hours/week in the summer, so I’ll be able to give them some feedback to keep them on task even when I’m not there.

I did a not-very-thorough search to see what other ways people do this. It seems to break down into people using Evernote/OneNote and people using specialized software. None of the specialized tools did handwriting very well, though I’d be pleased to be wrong about that. In many places I’ve found lots of people saying that OneNote trumps Evernote for handwriting **for the moment**. We’ll see how things go down the road. For me each summer is a pretty well contained research event, so I really only have to commit to doing it this summer. Soon I’ll experiment with the sharing with one of my students who has already downloaded the free OneNote to her Mac. Hopefully all goes well.

One last thing: every other year we bring in a 3M corporate attorney to talk to our students about patent law careers. This year she took some time to tell us about the repercussions of the USA’s new “first to file” law. The US used to be “first to invent,” which was why you had to be so careful with your notebooks. Policies ranged from having every page signed by a supervisor to signing across the border of any taped-in page. Everything, even in this digital age, needed to be put into the notebook, and I for one always thought it was a great big hassle. Well, now with the “first to file” law, we don’t have to do any of that anymore. And that’s not just me saying that; it’s a corporate attorney from 3M, of all places. She tells me they’re slowly relaxing their policies in the company, and she sees a lot of things being easier with this new ruling.

So what do you think? Here’s some starters for you:

  1. I’m working with Andy this summer and I think this is great. I’m especially excited about …
  2. I’m set to work with Andy this summer but after reading this I’m going to try to get out of it. Here’s why . . .
  3. I think you haven’t given Evernote enough of a try. It’s totally better than OneNote and here’s why (beyond the, you know, not being Microsoft stuff) . . .
  4. I use _____ for this and I think it’s great. It’s way better than your proposed solution because . . .
  5. I think you should have your students keep their own pen-and-paper notebook. It’s a mistake to go all digital. Maybe a hybrid? Here’s how I’d do it . . .

Relativistic explosions

Earlier this week I got an email from a friend who has been working with his students modeling how momentum works in a situation where two carts start connected and then explode apart. I’m not sure, but I think that he might be using something like the spring-loaded Pasco carts. You connect them, set the spring, and then press the release. He then has his students measure the final speeds of the carts. He has them change the relative mass of the carts in an effort to see what the commonalities are.

He suggests that they plot the velocity ratio v_2/v_1 versus the mass ratio m_1/m_2 (note the change in indices). I think this is a very cool way to approach this lab. If things work the way we expect them to, that graph should be linear, with a slope of 1.

What my friend asked was what would happen as the speeds got to a relativistic regime (v_i\lesssim c). He wanted to let his students know that if they could take more data, eventually the points would not fall on the line. What he asked me about was which way it would miss.

I realized that it’s a complicated question. Relativistic momentum (p=mv/\sqrt{1-v^2/c^2}) has that nasty radical in it, making the analysis difficult. So, as usual, I opened up Mathematica to play around. The first thing I tried was to solve the energy and momentum equations in 1D for this explosion process:

(m_1+m_2)c^2+E_\text{explosion}=\frac{m_1 c^2}{\sqrt{1-\frac{v_1^2}{c^2}}}+\frac{m_2 c^2}{\sqrt{1-\frac{v_2^2}{c^2}}}

\frac{m_1 v_1}{\sqrt{1-\frac{v_1^2}{c^2}}}=\frac{m_2 v_2}{\sqrt{1-\frac{v_2^2}{c^2}}}

However, Mathematica failed (or at least ran until I got tired of waiting). I had a suspicion that it would do better if I put in a few numbers (like the masses and energy) and, sure enough, it’s able to figure out the velocity ratio when I do that. I’m still not quite sure why it fails with generic algebraic values for the masses and energy, so if you have some good ideas, pass them along!
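For what it’s worth, the numeric version is only a few lines. Here’s a sketch (with c=1 and made-up masses and explosion energy, not necessarily the values behind the plots below):

    (* sketch: solve for the two speeds with c = 1; the numbers are placeholders *)
    m1 = 10; m2 = 1; eExp = 0.01;
    sol = NSolve[{
        m1 + m2 + eExp == m1/Sqrt[1 - v1^2] + m2/Sqrt[1 - v2^2], (* energy conservation *)
        m1 v1/Sqrt[1 - v1^2] == m2 v2/Sqrt[1 - v2^2],            (* momentum conservation *)
        0 < v1 < 1, 0 < v2 < 1}, {v1, v2}, Reals];
    v2/v1 /. First[sol]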

But, doing it numerically was still useful. Here’s a plot of the velocity ratios vs the mass ratios for an energy that’s a few orders of magnitude less than the mass energies of the particles:

velocity ratio vs mass ratio for a low energy explosion. The pink line is the classical expectation

You can see that as the mass ratio gets large, the expected velocity ratio is less than what is expected classically. Here’s another with a much larger energy:

velocity ratio vs mass ratio for larger energy

Here we see that it deviates quite a bit from the classical expectation.

When I showed this to my friend, he thought it was interesting that it looks like it’s approaching an asymptote, musing that it must have something to do with the universal speed limit (nothing can go faster than c, so that must affect the limits of the speed ratio of this experiment).

So I began playing around some more, and I found that it goes towards an asymptote no matter what the energy is (though for some really low energies, you have to go out to huge mass ratios to see it). I wondered if I could predict where that asymptote would be, and I got half-way there. For low energies, I can do it, but for high energies, I’m stuck and would love some help.

So first the low energies. Note that doing all of this in \LaTeX is fine, but I wanted to show off the handwriting capability of my new Microsoft Surface Pro 2:

derivation of the low energy asymptote

I assume that for any energy (\alpha m_1 c^2) you can find a mass ratio large enough that \alpha m_1/m_2 is much larger than 1. That jibes with the notion that for low energies I have to go way out on the mass ratio axis to see the asymptote:

velocity ratio vs mass ratio for low energy with alpha=0.005

Note how it is approaching an asymptote at v2/v1=200 as expected for \alpha = 0.005.
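The flavor of the argument (a sketch, assuming the heavy cart stays non-relativistic while the mass ratio is large enough that the light cart ends up ultra-relativistic, i.e. pc\gg m_2 c^2): both carts carry the same momentum magnitude p, so the explosion energy splits roughly as

\alpha m_1 c^2\approx\frac{p^2}{2m_1}+pc

For small \alpha the first term is negligible, giving p\approx\alpha m_1 c. Then v_1\approx p/m_1=\alpha c while v_2\to c, so v_2/v_1\to 1/\alpha, which is the 200 above.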

Unfortunately, I’ve really struggled trying to do the same thing with larger energies. Certainly I can make plots of it, but I’m unable to find an analytic expression. Please help if you can.

One last thing. I was curious what would happen at very large energies (larger than the mass energies of the surviving particles). What I found was a velocity ratio that tended to one (just a little larger than one, of course). At first I was confused, but then I realized that if the particles have to carry away a lot of energy, even the heavy one likely has to get moving, and if they’re both moving quite fast, they’re both running up against the speed limit. Very cool.

So, can you help me out? Here’s some starters for you:

  1. I’m confused, why do you care about this?
  2. I don’t like the assumptions you’re making in your low energy derivation, I’d prefer to see it done . . .
  3. I know exactly how to do the high energy case, it’s trivial!
  4. Why don’t you use awesome interactive graphs like Rhett Allain does?
  5. I would have read this post, but you mentioned Mathematica and I stopped.
  6. How do you like your Surface Pro? Blog post in the future?
  7. I much prefer LaTeX to handwriting, please don’t ever do that again.
  8. I know why Mathematica choked on the generic solve. All you have to do is . . .

Taking it up a notch: nail beds

About a month ago, I had an extraordinary experience:

Bill Nye standing on me

It was Bill Nye standing on me while I lay on a nail bed. Lots of fun, for sure, and I pointed out to the audience that it was the one shot I wanted from the whole gig. The way we set that up was to first have him stand on my chest without the nail bed, but we cautioned that it would be much safer to use the plywood on top. We talked about how we’d be able to spread Bill’s weight over several contact points on me: my thighs, my chest, and my hands. That’s when we decided to take it up a notch with the nail bed.

Now I’ve been using the nail bed for over 10 years and it’s usually a big hit. Sometimes I do a decent job of talking about the physics, but I’ll be honest that sometimes the show’s running long on time and we just do it (and the breaking of the cinder block) for show. What I wanted to talk about in this post was an idea I had about talking even more about the physics of distributing force. What’s funny is that I’ve talked to a few different people about this (including my partner and my sons) and, while they agree that it adds to the physics, they don’t think it’s appropriate for the show. So I’m looking for your second opinion. Thanks in advance :)

The reason the nail bed works (and why I can handle Bill (we’re on a first-name basis now, you see) standing on my chest) is NOT that I’m reducing the total force on my body, but rather that I’m distributing that force over a larger area. No single nail exerts enough force to puncture my skin, for example. This got me thinking about pillows.
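To put rough (made-up) numbers on it: if something like a thousand nails share an 800 N person, each nail only needs to push with roughly 0.8 N, which skin handles easily; that same 800 N concentrated on a single nail would be a very different story.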

When I was in school, I was always a little confused about pillows. My free body diagrams kept telling me that it didn’t matter what I rested my head on: something had to provide a force equal and opposite to my head’s weight to keep me from smashing my head on my bedroom floor. This made sense to me from a free body diagram perspective, and certainly I could do all the homework, but it always bothered me that this perspective didn’t seem to explain the softness of pillows. What I mean is that feather pillows seem much more comfortable than, say, cement pillows, but they both must be providing the same force.

I know now that pillows deform to fit your head, providing a very large contact area to spread the force out. What I figured on doing for my show was to concentrate on adding more and more pillowy substances between me and the plywood while people are standing up there. We’d slowly go from all the weight on my thighs/chest/hands all the way up to the weight on every part of my upward-facing body, using foam or something.

The reason my family and friends think this isn’t a good idea is that people would say “oh, he just has a pillow up there, of course he’s ok” instead of “ahh, I see, the pillow disperses the weight so that no single part of his body has to support a lot.”

So what do you think?

Some comment starters for you:

  1. I’m a student in this class and, wait, nevermind
  2. I saw that show and Bill was great.
  3. I’m not sure I follow what you mean by the free body diagram. Would you, by chance, happen to have a new Surface Pro that would allow you to draw a nifty picture?
  4. Wait, I thought feather pillows were better than cement pillows because . . .
  5. Wait, they make cement pillows?
  6. Sure, do this in your show, I’d love it if you got crushed.
  7. Don’t put this in your show, but for this reason instead . . .

Leaving a gaping hole

This past week in my optics class I think I made a mistake. We were talking about how light interacts with a system of multiple parallel interfaces, and we started by analyzing a single interface that didn’t happen to lie in the plane z=0 (which is how the previous chapter did it). I asked the students to re-derive the work from the previous chapter, just with a different location for the origin. I had primed this pump by asking a student just this question during an oral exam last week. His standard for the exam was “I can derive the laws of reflection and refraction (angle in = angle out and Snell’s law)” and he, as expected, did it with an interface at z=0. So in this class they knew how to get started.

The first thing I asked them to do was to find where in the derivation the z=0 idea was made use of. That didn’t take long, so then they could focus on how the next few steps would go. My favorite part of the class was when they realized that they could still relatively easily show that the color shouldn’t change upon reflection and refraction, that angle in equals angle out for reflection, and that Snell’s law of refraction (n_i \sin\theta_i=n_t\sin\theta_t) still holds.

The next part was the hard part. Could they figure out the ratios of outgoing light to incoming light for both reflection and refraction (these are typically called the Fresnel Equations)? The added complexity of z not being zero was making a mess of the pretty equations in the previous chapter. But here we talked about how they didn’t really expect any differences for the ratios given that we hadn’t really changed the physics of the situation, just the math approach.

Here’s a quick analogy: when analyzing a falling object from an energy approach, you are free to choose the zero point of potential energy. The two most useful choices are where you drop the thing and where it lands. However, you can pick any height and still get the same results for the time of the fall, the speed at the bottom, etc. Now, to be sure, the intermediate calculations to get there will be different, but the “physics” should be the same.
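For example, drop something from rest through a height h. With the zero of potential energy at the floor, energy conservation reads \frac{1}{2}mv^2=mgh; with the zero at the release point it reads \frac{1}{2}mv^2-mgh=0. Different bookkeeping, same v=\sqrt{2gh}.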

So my students became relatively convinced that eventually the math should work out to give the same ratios. Only they couldn’t see how to get rid of the extra ugliness. Ok, the table’s set. Now I step in and make my mistake.

I gave them three choices:

  1. Figuring out this “gaping hole” in our understanding would be the standard for the day. Something like “I can fill the gaping hole that we dug today.”
  2. We could move on without filling the hole (I wanted to talk about how further interfaces changed things) and I’d fill the hole with a screencast outside of class.
  3. I could fill the hole right now (in class).

It led to a good conversation, with me pushing the idea that number 1 might lead to the most learning on this topic. They got that idea, but I think they were nervous they wouldn’t be able to do it. I cautioned that they might see the fix and think it was “a trick I’d never think of!” (I really like to avoid that line coming out of my students).

In the end we all settled on number 2. We moved on to the case of multiple interfaces, making note that each new interface would provide two more boundary condition equations (each of which would have the new ugliness due to the fact that they weren’t at z=0) and showing how the number of equations would always match the number of unknowns (typically the +z and -z traveling waves in each region).

Why do I call this a mistake? At the end of class I asked one student about the vote. He admitted that he was nervous about number 1, but that if we weren’t going to do number 1, I should have done number 3, since he was invested in the problem at that point and was ready to hear/explore the solution. Moving on and pushing the “filling of the hole” to outside of class wasted an opportunity to have the students contribute to the solution.

So, what do you think? Would you put it to a vote? Would you add more options to the vote? What would you vote for? as a student? as a teacher? Has this happened to you?

Here are some comment starters for you:

  1. I’m in this class and I liked the whole process because . . .
  2. I’m in this class and I hated this whole process because . . .
  3. I would definitely do the vote, just with this small change . . .
  4. I would definitely not do the vote, because it irreparably damages the students in this way . . .
  5. I would add this to the choices . . .
  6. I would vote very differently as student or instructor and here’s why . . .
  7. Boy you’re brave! I would never admit to such a terrible mistake.
  8. Your analogy is dumb. Energy approaches don’t tell you about the time of travel!
  9. Here’s a better analogy for you . . .
  10. I love it when my students say “it’s a trick I’d never think of!” It shows how smart I am and gives them something to shoot for.

Finding grains

My colleague asked me to help him out with this image:

Grains in an SEM image (78 pixels per micron)

He needs to know the grain size distribution, and they’ve been having trouble automating this. He knew I’d been doing some work with Mathematica’s image analysis capabilities so he thought maybe I could make some headway. This post shows my current progress.

My first idea was to use EdgeDetect to find the boundaries:

This is the result of using EdgeDetect in Mathematica on the original image

This seems to isolate most of the grains, but I need Mathematica to identify the areas that the edges separate. What I decided to do was to darken the edges by dilating them and subtracting them from the original image:

original image with edges darkened and the rest set to white

Now I use the very cool MorphologicalComponents command to get this:

overlay of the identified grains on the original image. If it’s colored, it’s identified.

Here’s an animation that slowly identifies the grains:

Slowly reveals the grains (the last frame is shown in the previous image)

Cool, huh? I thought so. I’m waiting to hear from my colleague to see if this is the sort of identification he needs. My guess is that he wants something like a histogram of the areas of the grains. With ComponentMeasurements, that’s super easy:

Histogram of the areas of the grains (in square microns)

Ok, so now I admit to you my ignorance. I have no idea how people do this, though I did find this standard (paywall) at the American Society for Testing and Materials (ASTM). I’m hoping some of you can help me out with refining this technique. It’s really fast in Mathematica to run it, and I think it’s pretty robust, but it does clearly miss a few grains and inadvertently joins a few.
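In case anyone wants to reproduce this, here’s roughly the pipeline (a sketch, not my exact notebook; the filename, dilation radius, and binarization threshold are placeholders that would need tuning):

    (* rough sketch of the grain-finding pipeline; numeric values are placeholders *)
    img = Import["grains.png"];                          (* hypothetical filename *)
    edges = EdgeDetect[img];                             (* find the grain boundaries *)
    darkened = ImageSubtract[img, Dilation[edges, 2]];   (* thicken the edges and darken them *)
    comps = MorphologicalComponents[Binarize[darkened, 0.3]];     (* label the regions between edges *)
    areas = ComponentMeasurements[comps, "Area"][[All, 2]]/78.^2; (* pixel areas -> square microns *)
    Histogram[areas]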

Thoughts? Here are some starters:

  1. This is cool, but I’d love to know the exact Mathematica commands.
  2. This is dumb, there’s a much better way to do it, and here’s how . . .
  3. This is cool, can I send you all my data so that I can graduate sooner?
  4. This is dumb, we pay grad students to do this by hand so that they learn to hate. Don’t make this available.
  5. This is cool, but I bet it would struggle with . . .
  6. This is dumb, it only worked this time because . . .

Human loop speed

Rhett Allain’s post about a human running around a loop has really got me (and him!) thinking (click through to see the video). I wondered if there was a more sophisticated way to do the calculation for the minimum speed needed. While Rhett tweeted out an approach based on integrating the “fake” forces involved, I wanted to see if I could do it more generally for a body with any moment of inertia.

My approach was to figure out the speed at which an object with a given moment of inertia (measured about its center of mass) would have to enter the loop so that it would have enough speed at the top to not lose contact with the surface.

Just like Rhett, I found it easier to think of the angular frequency, \omega, instead of the speed, at least at first. After playing around with this for a while, I’ve convinced myself that the angular speed necessary at the top is independent of the moment of inertia and is, in fact, the same as what you get with Rhett’s initial calculation:

\omega_\text{top}\geq\sqrt{\frac{g}{r_\text{cm}}}

How can you determine that? Well, the centripetal force at the top has to be at least as big as the gravitational force, so that the normal force from the loop is at least zero (floors push, they can’t pull).
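In symbols (the same statement, just compact): with the normal force set to zero at the top, gravity alone supplies the centripetal acceleration of the center of mass,

M g=M\,\omega_\text{top}^2\,r_\text{cm}

which rearranges to the expression above.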

So if the rotation speed at the top is independent of the moment of inertia, is that the whole story? No, for two reasons: 1) what does it mean to say “how fast do you have to run?” and 2) do you slow down from the bottom up to the top due to the increase of potential energy?

First, number 1: from the angular speed above, you can certainly figure out the linear speed of the center of mass. However, your feet aren’t at the center of mass, and they’re probably a better place to measure your speed.

Now 2: if you were to do this on an ice loop with skates, you could just get a lot of speed at the bottom and coast through the loop. That’s basically what I assumed for the rest of this post. If you’re running, I’m not sure you’d be able to keep your speed up all the way around; I assume you’d likely slow down in a similar fashion.

So, given the rotational speed at the top, can we figure out the speed you’d have to enter the loop with so that you’ll have that rotation at the top? Sure! All I did was figure out the total energy at the top and assume you have that same (total) energy at the bottom. Since you have less potential energy at the bottom, you have to move faster there. The nice thing is that the speed of your center of mass at the bottom is the same as that of your feet (assuming you’re approaching the loop on flat ground).

E_\text{total}^\text{top}=\frac{1}{2} I_\text{cm} \omega_\text{top}^2+\frac{1}{2} M r_\text{cm}^2 \omega_\text{top}^2+2 M g r_\text{cm}

where the first term is the rotational kinetic energy around the center of mass, the second term is the translational kinetic energy of the body, and the third term is the additional gravitational potential energy compared with when the body enters the loop.

To find the speed at the bottom, we plug in the expression for \omega_\text{top} and set that energy equal to the translational kinetic energy at the bottom 1/2 M v^2, and solve:

v_\text{bottom}=\sqrt{\left(r_\text{cm}^2+\frac{I_\text{cm}}{M}\right)\frac{g}{r_\text{cm}}+4 g r_\text{cm}}

Note how you get the well known v=\sqrt{5 g r} if the moment of inertia about the center of mass is zero (which would mean that the r becomes the radius of the loop).

But that’s not the whole story, since I’d rather express it in terms of these variables:

Schematic showing an extended body going around a loop

(sorry for the crappy drawing, I was in a hurry :) Note that now r_\text{cm}=R-h/2. With that we get:

v=\sqrt{\left(R^2-Rh+\frac{h^2}{3}\right)\frac{g}{R-\frac{h}{2}}+4 g \left(R-\frac{h}{2}\right)}

Ugly, right? But still interesting. Here’s a plot of the speed at the bottom for an R=1.5m loop for people ranging from 1 to 2 meters in height (note that I’m modeling a human as a rectangular bar with I_\text{cm}=\frac{1}{12} M h^2):

minimum running speed versus person height for R=1.5m

So what’s the upshot? Raise your hands when trying to run around the loop and you won’t have to run as fast.
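If you want to play with this yourself, here’s a sketch of how a plot like that can be made in Mathematica (not necessarily how I made the figure; the labels and ranges are just my choices here):

    (* minimum entry speed vs runner height, modeling the runner as a uniform bar *)
    g = 9.8; R = 1.5;   (* SI units; R is the loop radius in meters *)
    vmin[h_] := Sqrt[(R^2 - R h + h^2/3) g/(R - h/2) + 4 g (R - h/2)]
    Plot[vmin[h], {h, 1, 2}, AxesLabel -> {"height (m)", "minimum speed (m/s)"}]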

Thoughts? Here’s some starters for you:

  • I’m not convinced that the moment of inertia doesn’t affect the angular speed at the top. Prove it!
  • I tried this after reading this post and now I’m in the hospital. What’s the name of your lawyer?
  • Whatever Rhett says is law. You haven’t contradicted him have you?
  • Don’t you have a real job?