So I’ve decided to see if I can search for some. This post lays out things I’ve discovered as I’ve geared up to a huge optimization run [SPOILER: I haven’t run it yet]. Here’s the list:

- There is an angle at which you can drop a stick (without rotating it) so that it’ll hit twice and bounce right back up without rotating.
- Figuring out the correct response to a rigid body bounce on a hard surface was more complicated than I’d thought it would be.
- Using Mathematica’s ConvexHullMesh was a cool way to make random multi-sided dice.
- Getting Mathematica to use “WhenEvent” to determine impacts and then implement point 2 above was frustratingly difficult.
- Doing bounces like I have in the past (treating the ground as a stiff half-spring) works well and lets me do some energy loss mechanisms that I couldn’t figure out in point 4 above.
- Soft versus hard surfaces affect the roll (defined as the side that ends up down).
- My kids have helped me a ton:
- They think adding in translations in addition to rotations and bounces is important. I’m not sold, but it makes for cool animations.
- They think hard surfaces look better.
- They think most of the energy loss happens at the bounces (my first approach was to use a lot of air resistance and have the bounces be pure – especially when doing point 4 above).

- Determining which side is “down” was trickier than I thought – though I’m open to other suggestions.
- Ultimately I’d like to get feedback on the assumptions I’m building in so that when I run the optimization I can trust the results.

Given that it’s late on a Sunday night, I think I’ll just show some of my results (graphics and animations below) and see what sorts of questions folks have:

Rebound velocity of stick: This shows the rebound velocity of the center of mass of a stick that has symmetric masses on it (think —0—-0— or 0———–0 or -0———-0- etc.), where the position of the masses as measured from the center is on the x-axis. Note that where the velocity crosses zero is the point where the stick would only rotate after the first bounce, and therefore the motion would be totally symmetric (it would ultimately bounce right back up at the opposite angle with respect to the horizontal but without any rotational energy).
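
The zero crossing in that plot can be reproduced with a toy rigid-impulse model. Here is a hedged Python sketch (not the post’s Mathematica code; the massless rod, unit point masses, and 30° drop angle are assumptions made up for illustration), where the bounce is an impulse at the contact point that reverses that point’s vertical velocity:

```python
import math

def rebound_com_velocity(x, angle_deg=30.0, v0=-1.0, half_length=1.0):
    """Vertical COM velocity just after an elastic bounce of an idealized
    'stick': two equal point masses a distance x either side of the center
    of a massless rod, dropped without rotation at angle_deg from horizontal.

    The bounce is an impulse J at the lower rod end that exactly reverses
    that contact point's vertical velocity (hard, frictionless floor)."""
    m = 2.0                                   # total mass (two unit masses)
    I = 2.0 * x**2                            # moment of inertia about COM
    d = half_length * math.cos(math.radians(angle_deg))  # horizontal lever arm
    J = -2.0 * v0 / (1.0 / m + d**2 / I)      # impulse reversing contact velocity
    return v0 + J / m                         # post-bounce COM vertical velocity

# sweep the mass placement: the rebound velocity crosses zero where the
# stick would leave the first bounce purely rotating
sweep = [(x, rebound_com_velocity(x)) for x in (0.2, 0.5, 0.7, 0.9, 1.0)]
```

In this toy model the sign flips exactly where the lever arm matches the radius of gyration; masses closer in than that leave the COM still heading downward (hence the second hit).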

This shows a dumb mistake I was making: I was assuming that the center of mass of a polygon was equal to the center of mass of the vertices. As you can see below, that’s true for triangles but not for other polygons (COM_reg means center of mass of the region whereas COM_pts is the center of mass of the vertices).
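
Here’s a quick Python sanity check of that fact (a sketch, not the Mathematica code behind the figure): compare the area centroid from the standard shoelace formula with the plain average of the vertices.

```python
def vertex_mean(pts):
    """Average of the vertex coordinates (COM_pts in the figure)."""
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def region_centroid(pts):
    """Centroid of the polygon's area (COM_reg), via the shoelace formula.
    Vertices must be in order and non-self-intersecting."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

tri = [(0, 0), (1, 0), (0, 1)]           # the two centroids agree here
quad = [(0, 0), (3, 0), (3, 3), (0, 1)]  # ...but not for this quadrilateral
```

For the triangle both give (1/3, 1/3); for the lopsided quadrilateral the area centroid sits noticeably away from the vertex average.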

Success bouncing a 2D shape. After finally giving up on the WhenEvent approach in Mathematica, I just said that any vertex that goes below the ground needs to experience an upward constant force. The horizontal line is the original height of the center of mass.
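
The idea is easy to see in a 1D toy. This is a hedged Python sketch (a single point mass rather than the full rigid shape, with an arbitrary spring constant and timestep): below the floor the mass feels a stiff upward restoring force, i.e. the stiff half-spring ground mentioned in the list above.

```python
def bounce(h0=1.0, k=1e4, m=1.0, g=9.8, dt=1e-4, t_max=2.0):
    """Drop a point mass from height h0 onto a floor modeled as a stiff
    half-spring: below z = 0 the floor pushes up with force -k*z.
    Semi-implicit Euler keeps the energy bookkeeping honest.
    Returns the trajectory as a list of (t, z)."""
    z, v, t, traj = h0, 0.0, 0.0, []
    while t < t_max:
        f = -m * g
        if z < 0:                 # inside the "floor": half-spring pushes up
            f += -k * z
        v += f / m * dt           # update velocity first (semi-implicit)
        z += v * dt
        t += dt
        traj.append((t, z))
    return traj

traj = bounce()
# with no damping the mass should return to nearly its drop height
rebound = max(z for t, z in traj if t > 0.5)
```

No events to detect: the same equations hold everywhere, and the stiff spring does the bouncing, which is exactly why this approach sidesteps the WhenEvent headaches.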

Showing that you really do get different rolls with a triangle. Note that now I’ve added in some friction.

Histogram of 100 rolls for that triangle. You can tell that 100 rolls isn’t good enough, since surely side 2 should eventually land more often.

A 4-sided shape and accompanying histogram after 1000 rolls (note that each roll takes 0.5 seconds).

Combing through the data I found one of the rare ones where it lands on side 1:

Ok, I got 3D bouncing working!

And I can do lots of sides!

Here’s a 16-sider with some nice color added.

Here’s a 6-sider comparison showing a soft (top) and hard (bottom) floor. Note the difference in the ultimate side that ends up down.

Here’s a plot of the side that’s down as a function of time for the animation above. The blue curve is the soft surface and the red curve is the hard surface.

Ok, so I know I haven’t put much detail in yet, but I wanted to at least get down some of what I’ve been working on. The next step is to check some of my assumptions with you fine readers and then to go ahead and do some long optimization runs. I was thinking of doing a genetic algorithm, but then I’d need a ton of runs (keeping in mind, of course, that each roll takes a while: 0.5 seconds in 2D, 2 seconds in 3D).

So some questions for you:

- 2D is way faster; is it worth my time to explore it?
- I do friction in the following way (does it bother you?):
- if it’s not in contact with the floor, there’s no energy loss
- If it’s in contact with the floor, there’s a contribution to the force that’s proportional to the velocity for those variables related to the vertex that’s in the floor.
- Note that means I’m not doing sliding friction but rather viscous friction.

- By adding in translation, I have 2 more variables to keep track of (I’m already doing z(t) of the center of mass and the 3 Euler rotation angles). Worth it?
- Doing a 1-d minimization would be much faster than something like a genetic algorithm. What would that variable be for, say, a 6 sided die?
- What do you mean by a fair die? How much variation after, say, 1000 rolls is small enough for you? I’ve asked my kids what it would take for them to be suspicious of any dice they own (big D&D players) and they’ve got some interesting opinions.
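
That contact-only viscous friction slots naturally into the half-spring floor picture. Here is a hedged Python sketch (again a 1D point-mass toy with made-up constants, not the post’s rigid-body code): in flight there is no loss, but while the mass is below the floor it feels the spring force plus a drag term proportional to its velocity, so every bounce eats some energy.

```python
def rebound_height(b, h0=1.0, k=1e4, m=1.0, g=9.8, dt=1e-4, t_max=2.0):
    """Drop a point mass onto a stiff half-spring floor (force -k*z below
    z = 0) with viscous friction -b*v that acts only during floor contact.
    Returns the maximum height reached after the first bounce."""
    z, v, t, peak = h0, 0.0, 0.0, 0.0
    while t < t_max:
        f = -m * g
        if z < 0:                     # in contact with the floor
            f += -k * z - b * v       # half-spring plus viscous friction
        v += f / m * dt               # semi-implicit Euler
        z += v * dt
        t += dt
        if t > 0.5:                   # past the first impact (~0.45 s)
            peak = max(peak, z)
    return peak

elastic = rebound_height(0.0)   # no damping: nearly full height back
damped = rebound_height(5.0)    # viscous contact: visibly lower rebound
```

The energy loss per bounce is set by b together with the contact time, which is why the kids’ preference for losing most of the energy at the bounces is easy to dial in here.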

So, can you help? Here are some starters for you:

- I’m in this class and . . . wait . . . nevermind
- Seriously, you haven’t posted since February and this is all you can come up with??!!
- I think this is cool. I think you should . . .
- I think this is dumb, I think you should . . .
- Once again that promise to switch to python wasn’t worth the breath it took to say it, I see.
- Here’s my address where you can send your new 3D printed dice to me.
- Even 1000 rolls isn’t enough. This will never be done to my satisfaction.
- What about dice where a few sides almost never come up but the rest have near equal probability?
- Without even looking at your code I know what your WhenEvent problem is. Here’s how to fix it . . .
- I would have thought it would be obvious that a rigid body should bounce off an infinitely massive floor such that the local velocity of the part that hits the ground reverses its z-component and that there’s only one solution for the speed of the center of mass and the rate of change of the Euler angles that both conserves kinetic energy and correctly accounts for the expected change in angular momentum!
- I don’t understand why you say it’s hard to figure out which side is down. Just look at it!

I was in a pre-conference workshop that was about critical thinking. This is a hot topic in higher ed, especially ever since “Academically Adrift” was published, presenting research that many college students’ critical thinking skills actually got worse during their college years. Someone mentioned something that really has me thinking:

Students struggle to understand how two people who are both thinking critically can come to different conclusions.

Not surprisingly a few people in the workshop muttered about politics in the US to give a bunch of examples. For me, though, I realized that my main discipline doesn’t really suffer from this problem. If there’s a disagreement between two physicists, it usually means there’s just not enough evidence in yet. The two can be arguing about which theory best describes reality, but if they’re really arguing, it’s usually because both of their theories match all the available data. The reason they don’t fully agree is that the two theories make predictions about things that haven’t been detected yet.

This actually was a large part of my master’s work. I was in a group that was really trying to understand how ultrashort pulse lasers work. [brief aside: These are lasers that blink. They’re only “on” for 0.0000000000001 s, then they wait for a few microseconds before they repeat.] My group had one theory, and another prominent group had another to describe exactly how these lasers developed the intense electric fields involved. The problem was that the standard measurement people were able to do at the time did not distinguish the two theories. It was interesting to go to conferences and be involved in what felt like heated debates. But really they were just hopeful debates. Both sides wanted to be right, but both realized that at most one of them was. My master’s thesis was all about the development of a new measurement technique that could clearly distinguish the two theories.

So that really has me thinking. Does this approach to natural science distinguish how it employs “critical thinking” from other disciplines? I’ve begun to explore the political arguments I’ve been involved in, especially those where I feel like both sides are “thinking critically.” We have access to the same facts, but we feel that the best moves for the future are nearly diametrically opposed. It seems to me this happens for at least two reasons:

- We disagree on base assumptions about something.
- We prioritize particular future events differently.

I’ve been a participant in arguments that have fizzled because both parties have realized that either 1 or 2 above is what’s happening. That fizzle takes the form of “oh, well then I guess we just disagree then” or “well, then you’re just an uncaring SOB, I guess.”

In physics, the arguments come to an end when new data comes to light. People can be disappointed that their theory wasn’t right, but they don’t kick themselves for being wrong. Their theory matched the data that was known. They just bet wrong. Moving forward they’re happy to use the correct theory.

One interesting physics example is how to interpret quantum mechanics. There’s tons of disagreement about what’s real, what a measurement does, how many universes there are, etc. However, to participate in the argument, you have to back a theory that matches all the data. When pseudoscience folks try to join or say things like “well anything goes”, they’re usually pretty easily shot down when their theories are shown to not match particular measurements. The argument is really about what reality is, not how to make calculations or predictions.

So what do you think? Here are some starters for you:

- This is interesting. Here’s an argument that happens in my field . . .
- Have you ever heard of reading?! Here are 10 things you should read before blathering on like this . . .
- I think you’re not being entirely honest here. After reading your paper I see that your new technique vindicated your group’s theory. Seems fishy to me.
- I remember when FROG was invented. It opened up so many new ways to think about our ultrashort lasers, thanks!
- If someone disagrees with me, it’s clearly because they’re not thinking critically.
- Quantum mechanics is just a spherical-earth conspiracy.
- What arguments do you see your students having in lab?
- Are you saying that if disagreement lingers the participants aren’t thinking critically?
- I wrote “Academically Adrift” and now I think I should go and write a whole new chapter about how the critical thinking abilities of bloggers go downhill.

Principals have been realizing lately how hard it is to get their teachers to do good work. Too many of them have been spending time talking to each other to find better ways to teach. That takes away from the time that they could be interacting with their students. Now a few schools around the country have started to use a new approach: locking each teacher away from all others.

When the teachers get to school, they can be seen smiling and chatting with their friends in the parking lot. But when they enter the building, they’re met with a phalanx of administrators just inside the door who put all the teachers in bags and cart them off to their classrooms, where they’re locked in. At the end of the day they are brought back out, and only then are they able to interact with their co-workers.

“It really sucks because it’s usually so great to get good ideas from people in similar situations that I can incorporate into my teaching,” said one teacher, clearly pining for her friends. Another added, “I can’t believe that they’re painting us all with the same brush, assuming that all we want to do is talk with our friends instead of teaching. That’s not fair. I love to teach!”

But the principals have come to understand that any sort of access to professional development that might bring in new ideas can only take away from the tried and true approaches to working with students. While their teachers are complaining, the principals are sure this is the right approach.

Some brave teachers have pointed out that the lack of access to their professional coworkers has changed their behavior. Now if they can’t figure out how to work with a student, they just keep trying other things that they come up with off the top of their head. “Before I would find a friend at lunch to brainstorm ways to help, but now I have way more time to really focus on the problem.”

The bag and lock technology is being pioneered by BeAlone, whose founder realized just how bad things were when he asked how his daughter was doing in school and only got a response after the teacher checked in with all his daughter’s teachers. “I couldn’t believe how long that took! I just wanted to know if she was getting an A+, not whether she was developing lifelong learning strategies.” The company’s clients also include comedians and musicians who like to make sure their audience is only listening to them. The cost is $200 per bag or schools can rent them for $30 per teacher for the school year.

Your thoughts? Here are some starters for you

- That’s weird, I listened to NPR on the way home today and I didn’t hear that story.
- Wait, I feel like you’re being sarcastic, but I can’t put my finger on it.
- You forgot about my favorite part: …
- This is dumb. Principals who do this just don’t realize how creative teachers can be when they can work together.
- This is great. I think we should seek to have this in all schools!

My first thought was to see if there were ways to simplify the differential equation solving approach. What John was doing was the full 3D version assuming the sun’s mass stays significantly above the mass of the planet. So he’s already doing some simplifications because he’s not bothering with the center of mass frame. I think he’s right to do that because after a page or so of notes, I’ve realized that the center of mass frame gets pretty ugly when one of the participants is losing mass.

So what else can be simplified? The great thing about central force problems is that you can reduce it all the way down from six variables to one (in fact that’s one of my standards when I teach Theoretical Mechanics):

- Each of the two masses has 3 variables (x, y, and z) so you start with six.
- The center of mass approach lets you model the problem as a fictitious mass (μ = m1m2/(m1+m2)) that’s the same distance from a fictitious force center as the two actual masses are from each other. Now you’re down to three.
- If it’s a central force, the angular momentum is conserved. That means the fictitious particle has to stay in a plane. Now you’re down to two variables.
- If the angular momentum is conserved, you can treat the rotational part of the kinetic energy as an effective potential energy, leaving only the radius variable. Now you’re down to one.

You can model the complex 6-dimensional problem as a single (reduced) mass experiencing an effective potential energy function given by:

U_eff(r) = L²/(2μr²) + U(r)

where μ = m1m2/(m1+m2) is the reduced mass, L is the (conserved) angular momentum, and U is the potential energy that is only a function of r. To get the force this fictitious one-dimensional particle feels, you just need to take a derivative (and add a negative sign).
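
To see the reduction in action, here is a hedged Python sketch (the post uses Mathematica’s NDSolve; this uses semi-implicit Euler in made-up convenient units with G = 1, planet mass 1, and a hypothetical exponential mass-loss law): only the radial equation r'' = L²/r³ − M(t)/r² needs integrating.

```python
import math

def radius_history(tau=2000.0, t_max=200.0, dt=0.01):
    """1D effective-potential integration: a unit-mass planet starts on a
    circular orbit of radius 1 around a unit-mass sun (G = 1), and the
    sun's mass decays slowly as M(t) = exp(-t/tau). Angular momentum L is
    conserved, so only r(t) needs solving:
        r'' = L^2 / r^3 - M(t) / r^2
    Returns lists of times and radii."""
    L = 1.0                       # circular orbit at r = 1, M = 1 gives L = 1
    r, v, t = 1.0, 0.0, 0.0
    ts, rs = [], []
    while t < t_max:
        M = math.exp(-t / tau)
        acc = L**2 / r**3 - M / r**2
        v += acc * dt             # semi-implicit Euler
        r += v * dt
        t += dt
        ts.append(t)
        rs.append(r)
    return ts, rs

ts, rs = radius_history()
# adiabatic expectation: the near-circular radius grows like 1/M(t)
final_avg = sum(rs[-1000:]) / 1000.0
```

With these parameters the sun loses about 10% of its mass by the end of the run, and the radius should come out close to the adiabatic prediction r ≈ 1/M(t), i.e. the slow spiral-out described above.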

So I gave it a try. The first thing I did was try to see how far into the future I could integrate using Mathematica. It turns out I could go quite a ways! Here’s a plot of the radius as a function of time.

As you can see, I was able to go out several billion seconds of integration time. This turned out to be around a billion “years” given the simple parameters I chose. If the radius grows, we expect the years to take longer. Here’s a plot of the instantaneous “year time” over the simulation:

as expected!

So it seems that a very slow loss of the sun’s mass would just slowly increase both the circle radius and the year time of the planet.

Another question that John asked was whether there might be an analytical solution to this. I quickly tried DSolve instead of NDSolve in Mathematica and got no joy (I wasn’t overly hopeful). I did ask a good friend of mine who’s a real expert in differential equations whether he knew of a particular decay function I could use that might have an analytical solution. He couldn’t think of one, but did point out that if you had the mass changes be discrete you could “easily” build up the solution since in between the mass changes you’ve got a simple inverse square law orbit that does have an analytical solution.

What he means is that if you start with, say, a circular orbit, you can predict exactly where the planet will be and what direction it’s traveling (and what speed) when the first mass change happens. When it does, you now have initial conditions for a slightly different inverse square law problem. Because the sun has lost mass, it doesn’t pull as hard as before so the circular orbit becomes an ellipse. That ellipse is fully analytical and you can figure out everything you need to know about the planet at the next mass drop. Repeat this to your heart’s content and you’ve got a piecewise analytical solution.
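
One segment of that piecewise scheme can be written down with standard vis-viva bookkeeping. Here is a hedged Python sketch (units with G = 1 and planet mass 1; the 10% mass drop is just an illustrative number): start on a circular orbit, drop the sun’s mass instantly, and read off the new analytic ellipse.

```python
import math

def ellipse_after_mass_drop(r, v, M_new):
    """A planet at radius r moving with speed v perpendicular to the radius
    orbits a sun whose mass just dropped to M_new (G = 1, planet mass 1).
    Returns the semi-major axis and eccentricity of the new analytic orbit."""
    a = 1.0 / (2.0 / r - v**2 / M_new)   # vis-viva: v^2 = M_new*(2/r - 1/a)
    L = r * v                            # specific angular momentum
    e = math.sqrt(max(0.0, 1.0 - L**2 / (M_new * a)))
    return a, e

# circular orbit (r = 1, M = 1 so v = 1), then the sun loses 10% of its mass
M0, dM = 1.0, 0.1
a, e = ellipse_after_mass_drop(1.0, 1.0, M0 - dM)
```

In this idealized case the new eccentricity comes out as dM/M_new and the drop point becomes perihelion (r = a(1 − e)): the instant the sun weakens, the planet only swings farther out, which matches the radius plots below.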

This sounded intriguing, but I started to wonder what the physical differences would be between the two approaches. I figured I could check on a relatively small time scale and look for differences. So I coded up both a continuous and a discrete mass loss model, where they connect with each other after each mass loss. Here’s a plot of both mass loss functions:

Here’s the animated result (the dots mark the mass-change moments for the discrete (blue) model):

As you can see, there’s a pretty noticeable difference in the orbits. Admittedly this is only because the mass jumps are pretty big, but it still makes me nervous.

Here’s a plot of the radius function for both:

Note how the continuous (orange) one seems to never get any closer to the sun, while the discrete (blue) one is clearly showing more of an elliptical motion (it gets closer to and farther from the sun during every orbit). To see that more clearly, here’s a plot of r'[t], the rate of change of the radius:

It sure looks like the orange line doesn’t go negative (indicating the planet never gets closer to the sun). Here’s a zoom in of the orange one to see it better:

Yep, never negative!

Your thoughts? Here are some starters for you

- This is cool, but I’d like to hear more about . . .
- This is dumb. Why didn’t you talk about this instead: . . .
- It looks like you were doing a Hamiltonian approach in your notes. I thought you hated the Hamiltonian approach!
- You do know that the perturbations due to Jupiter alone would totally wash out these small effects, right?
- As soon as I saw that you don’t bother to label your axes, I stopped reading. Thanks for saving me some time.
- I thought you said you were going to try to do everything in python from now on? Liar!
- Why didn’t you set up your constants so that a year takes a year? Seems obvious to me.
- Can you share your code?
- How well do your students do on the 6->3->2->1 standard?
- In the piecewise analytical approach, could you look at the solution when the gap time between mass changes goes to the limit of zero?

I asked him if he thought there was an infinite number of cells in the human body. That launched us into talking about all of these:

- Air molecules on earth
- houses
- homes
- books
- gallons of milk
- hairs on your head
- cups in the world
- heaps of sand

Some were easy: houses, gallons of milk, cups. Some got us really talking, especially “books” as we started to interpret those as fiction books.

Here are some of the thoughts that occurred to us as we argued around the table:

- If you don’t know where the end is, you can’t say you’re halfway done.
- Once it’s done, there’s a halfway point if you count pages or words, but half a story or half a plot is harder.
- We talked a lot about how the Harry Potter books cram a lot into the last 100 pages or so, for example

- If you have a heap of sand and take a grain out, it’s still a heap. If you repeat, at some point it’s no longer a heap, but it’s never a fractional heap.
- So maybe integers are used for things that can’t be split up? If you can split them up, you should use reals or decimals or rationals or something.

- My partner is a writer and she talks about how many of her writing friends are heavy outliners. They know where the half-way point of their story is.
- Houses are measured with real numbers, but homes are like heaps: they’re a home until they’re not. Half a home doesn’t make sense.
- Human cells are interesting. They “divide” to reproduce, but my argument was that right up until it actually splits, it’s one cell, and once it splits, it’s two.

I’ve been thinking about this all day. I’m coming around to the notion that we often say something is integral (or is counted by integers) when really we should use real numbers and admit that it just works out that they’re often things like 2.00000 . . . etc (like houses, or gallons of milk, or cups, but not homes, books (maybe?), and air molecules). I think we use rational things (fractions) when maybe we shouldn’t. Maybe when someone says they’re halfway done with a story they’re really saying they are still at zero stories but will soon be at 1 story. They might be measuring time, or words, or pages, but that’s a proxy, using things that can’t be measured with integers.

One interesting thing was the different approaches of my kids. L was interested but admitted he was confused at times (now we’re a little nervous about tomorrow’s test – I joked that I should send this post to his teacher). C (10th grade) really felt that if you couldn’t clearly see the end of something, figuring out fractions didn’t make sense. A half gallon makes sense because we know what a full gallon looks like, but a half story is tough to make sense of. B (12th grade) felt that you can convince yourself that you have less than 1 of lots of things (like books), but even if you can’t figure out what the fraction actually is, if there’s a way to think about it being less than one, you can’t say it’s described by integers. Mostly that argument was on the book side, not air molecules or hairs on your head.

Overall it was a fun conversation. I love seeing the #tmwyk hashtag on twitter (talk math with your kids) but it’s often hit or miss with my own kids. This was fun mostly (I think) because I was really trying to wrap my own brain around it, and not just trying to teach them something.

So what do you think? Here are some starters for you:

- This is great. I think another great thing to talk about to see if it’s integral is . . .
- Why don’t you use the word “quantized” for this? What, are you scared of physics or something?
- This is dumb, everything is countable and split-able. I can’t believe I even read half of this post.
- #tmwyk can work great even if you’re “just” teaching them something, here’s 7.5 examples . . .
- What did you have for breakfast?
- I’m a fiction author and I’m really bothered by what you say. I often take 3/4 of one book and put it together with 1/4 of another to get a new book I can publish.

If you click through you’ll see lots of great ideas. I’m not sure what the right answer is, so feel free to weigh in below in the comments.

What actually made me decide to blog about it was that I realized that I asked the wrong question. I really wanted to know what would cause the repetitive pattern, so I think really I was thinking about what would cause the frequency of the wave.

Now, I think everyone who replied on twitter recognized one of the fundamental relationships about waves when answering my question:

v = fλ

and really just jumped to physical descriptions of what might cause that frequency. In other words, they realized that the car was moving and basically leaving behind a trail of snow blasts at a particular frequency. Spatially that all works together to leave a record with a measurable wavelength.

As I thought about both my question and the answers throughout the day, it hit me that it’s one of those things that might lose students, especially early on before they’ve really internalized the relationship above. If you ask students to engage with the image or even the Hyundai commercial it comes from, they’ll engage and come up with all kinds of interesting questions, it seems to me. But if you ask about the wavelength like I did, it might shut them down, because then they’re not going with their gut and instead are trying to remember the relationship between wavelength and frequency (or possibly period).

I guess what I’m saying is that I knew my audience and I figured I could ask the question any way I wanted to. And it worked! But as I think about using this in class, I think I would have to be more careful. I think that’s a cautionary tale for me. It reminds me of times I’ll ask about something I think they’ll have experience with, or maybe some cool insights about, but I’ll ask it using vocabulary that’s still too new for them. I think instead I should just show them something and ask “what do you see?” or “what do you think is going on here?” or “Is there anything interesting going on?”

Your thoughts? Here are some starters for you:

- This is interesting. It reminds me of . . .
- This is really dumb. What you should have asked instead was . . .
- This is really cool. I think I’m going to buy a Hyundai now.
- This is really a waste of my time. I already have a car.
- Why didn’t you post a link to the video instead of a crappy screen grab you clearly took while pausing the tv during a really exciting Manchester Derby?
- Here’s a better question to ask students about this pic . . .
- I was the driver in this commercial and here’s what actually caused that . . .
- I was the camera person in this commercial and here’s why the driver really doesn’t understand physics.
- Here’s my crazy explanation for that snow pattern.
- It’s not a wave, you should stop saying that.


In my role as director of the first year seminar I was involved in some of the planning (I need to be clear here and heap praise on the C2C team – they did all the work and deserve all the credit for the great day). Specifically I was involved with planning how the overflow room should work. We hold the event in a neighboring church that can only seat something like 500. We like to have a satellite location that can simulcast the event. In early planning, I expressed how it would be interesting to do something different in that room. I thought it would be great to brainstorm activities people could do, while listening, that could raise the engagement of the audience. What we decided on was to crowd-source the prioritization of the questions we’d ask.

We thought it would be great to encourage the audience (only in the satellite room) to use internet-connected devices to submit and vote on potential questions for the speaker.

We picked the Q&A feature of Google Slides to do this. We made a simple one-page Google Slides document and turned on Q&A when the event started. We made sure the url was clearly displayed in the room.

I invited the first year seminar faculty to bring their classes, with a limit of 3 classes, and talked to a few other faculty about it as well.

We told people that ours would be the first three questions asked in the church, since I promised to text our questions to a plant (from the C2C committee) there.

Only two faculty brought their first year seminars (the rest went to the church). When I asked people how they made that decision I heard lots of interesting things:

- “I really want my students to *be there* to hear Kemba”
- “I’m not sure my students will have the focus you’re looking for”
- “Sounds cool but I really want to be in the church”
- “I’d love to because I’m always squished in the church”
- “That’s an interesting experiment”

In addition a few other faculty and students came. All together we had over 80 people there.

We passed out cards explaining what we were doing, because I figured if anyone came late I wouldn’t be able to explain it myself. Many were there early and we verified the technology worked with everyone’s cell phone.

We only had a handful of submitted questions, the highest rated of which only got six votes.

I submitted the questions a little early (we had a 2-minute delay that I didn’t want to miss). The question ranking changed a little after I submitted them, but the top three remained the top three. In the church all our questions were asked, but not all at once at the beginning of the Q&A session.

I was a little disappointed at the lack of engagement with the technology, but quite happy with the respectful and attentive attitude in the room. I’ve spoken with some about why there weren’t so many questions submitted and a few suggested that a lot of Kemba’s presentation was personal narrative, and that’s sometimes hard to question.

I think our questions were good. They certainly weren’t the horror stories you sometimes see at Q&A sessions for big speakers. You know what I’m talking about:

- “Thank you for your talk. I agree that _____ and let me tell you my whole life story before getting to my actual question.”
- “I came in late, could you please say everything you said at the beginning again?”
- “I have told you before that I disagree with you about point ____ and I’m going to walk you through every conversation we’ve ever had right now.”
- “Do you know ____ who says the same stuff as you but better?”

I had a question voted down. What’s fascinating about that is my emotional reaction. I would have thought I’d be disappointed about that. But it was interesting that I was relieved! I realized that I might have asked it if there were no crowd-sourcing and I might only hear after the event how dumb a question it was. In this case I don’t think people thought it was dumb, but they clearly thought other questions were more worth their time, and I think that’s great!

One interesting feature of this experiment was that the speaker couldn’t see how the voting and “leader board” evolved during her presentation. I think that’s likely a good thing, as it can be very distracting. In our implementation I did not project the leader board, but it was on everyone’s phone.

I think I’d like to do a little more experimentation with this. I think it could help with student engagement and I think it could really make the Q&A sessions more worthwhile.

Your thoughts? Here are some starters for you:

- This is cool! You could also think about doing . . .
- This is dumb! Instead you should have . . .
- I thought you used to love Google Moderator, why didn’t you use that?
- I think you didn’t get too many submitted questions because . . .
- I think you didn’t get too many votes because . . .
- I’m personally hurt by your examples of horror shows in Q&A sessions. I love all of those examples you describe!
- Here’s another to add to your horror show list . . .


Our continuing goal is to produce a 2D surface (drum) that has resonant frequencies that are harmonic. The parameter space to search is huge (infinite?), but ultimately we’d love to find a shape (that we could 3D print!) that would sound cool. If we could find it we could print several different sizes to have a harmonic instrument.

For most instrument designs, if you name the lowest/dominant frequency that you want, you can usually pretty easily find the physical parameters you need to achieve that. Take a simple stringed instrument as an example. The three variables that matter are the length of the string (L), the tension in the string (T), and the linear mass density of the string (μ):

f1 = (1/(2L)) √(T/μ)

So it’s pretty easy to find a string that sounds right. After that the beauty of a string is that all the other resonances are simple multiples of that fundamental frequency, so all of them sound good (except the 7th harmonic, that sounds like crap – see the placement of pickups on electric guitars that try to kill that one).

The problem with drums is that most of the time the resonances don’t have such an easy integer ratio relationship. That’s why drums aren’t usually considered harmonic instruments.

So, in our case we’re hunting for a shape that has some interesting resonances. This post is trying to get some help from you fine folks on how to use a neural network to do that.

Here’s what we’d love: name a set of frequencies we’d like a drum to have and determine the shape that would produce it. We’ve tried some other approaches, but here I’m trying to get some help on how to design a neural network to do it. Here’s our set-up (challenges to nearly every point are welcome!):

- We are somewhat convinced that the resonances of a polygon-shaped drum are pretty close to the resonances of a smoothed-out shape that hits the same points as the polygon (this gives us a fast way to generate the frequencies for a given drum – i.e., the opposite direction of what we’re looking for).
- We make a training set by setting the order of the polygon (n) and then repeating:
- generate n points in the plane
- Find the shortest tour visiting them, which gives a region boundary that doesn’t cross itself
- Make the region (in Mathematica: BoundaryMeshRegion[pts, Line[FindShortestTour[pts][[2]]]])
- Find the lowest 3 eigenfrequencies (in Mathematica: NDEigenvalues . . .)
- Add the training pair {f1, f2, f3} -> coordinates of the polygon (I say more about this encoding below)

- We make a neural network that takes 3 inputs and matches the number of coordinates for the polygon for the outputs.
- Mathematica allows us to quickly set up such a network and to go crazy with the number of nodes in each layer and how many layers. Here’s the syntax for a single hidden layer with 10 nodes, Sigmoid-based NN:
- NetTrain[NetChain[{10, LogisticSigmoid, 7}], trainingset]
- NetTrain looks at the training set to get the input layer size, but you have to put in the output size. The 7 there is for a pentagon shape (see below).

- We’ve tried 7 hidden layers with 100 nodes each, along with all kinds of different shapes and sizes. **We’d love some ideas here!**
- Beyond a sigmoid nonlinearity, Mathematica lets you use all kinds of things, like hyperbolic tangent (Tanh) and Ramp
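For anyone without Mathematica, here’s a hedged pure-Python sketch of the forward pass of that NetChain[{10, LogisticSigmoid, 7}] shape – 3 frequency inputs, one 10-node sigmoid hidden layer, 7 linear outputs. The weights are random placeholders; fitting them to the training set is NetTrain’s job:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_in, n_out, rng):
    # each node: n_in weights plus a trailing bias, random placeholders
    return [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_out)]

def forward(layer, inputs, activation=None):
    outs = []
    for node in layer:
        s = node[-1] + sum(w * x for w, x in zip(node, inputs))
        outs.append(activation(s) if activation else s)
    return outs

rng = random.Random(0)
hidden = make_layer(3, 10, rng)   # 3 inputs -> 10 sigmoid nodes
output = make_layer(10, 7, rng)   # 10 hidden -> 7 linear outputs

coords = forward(output, forward(hidden, [1.0, 2.1, 3.3], sigmoid))
```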

For n=5 (pentagons) I originally thought to try 5 ordered pairs for the coordinates of the pentagon. I realized, though, that there’s lots of redundancy built into that. For example, rotating a region or translating it doesn’t change the resonant frequencies. So instead, for the moment, I’m trying 4 lengths and 3 turning angles (because assuming the 5th link goes back to the first point – which I set at the origin – is enough) or 7 pieces of information. For triangles I use two lengths and one angle, which is also enough. I figure that savings should be useful.
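To make that encoding concrete, here’s a hedged Python sketch (my own helper, not code from the post) that turns a polygon’s vertex list into (n−1) side lengths plus (n−2) signed turning angles – 7 numbers for a pentagon, 3 for a triangle:

```python
import math

def encode_polygon(vertices):
    """Rotation/translation-invariant polygon features: side lengths for the
    first n-1 edges plus turning angles at the interior vertices; the closing
    edge back to vertex 0 is implied."""
    n = len(vertices)
    sides = [(vertices[i + 1][0] - vertices[i][0],
              vertices[i + 1][1] - vertices[i][1]) for i in range(n - 1)]
    lengths = [math.hypot(dx, dy) for dx, dy in sides]
    turns = []
    for (ax, ay), (bx, by) in zip(sides, sides[1:]):
        d = math.atan2(by, bx) - math.atan2(ay, ax)  # change in heading
        turns.append((d + math.pi) % (2 * math.pi) - math.pi)  # wrap to [-pi, pi)
    return lengths + turns
```

For a regular pentagon this returns four equal lengths and three turning angles of exactly 72°, as it should.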

Unfortunately, even after training for tens of thousands of rounds with training sets containing tens of thousands of trainers, we’re not making much progress. Hence this post.

So, can you help? We’d love some challenges to our assumptions/approaches listed above. We’d also love to hear some good ideas for neural network structures to try. Luckily doing it in Mathematica is pretty easy, but if you’ve got a system you’d like to try we’re happy to provide the training set.

Some starters for you:

- This is cool! I think point x.y above can be improved and here’s how . . .
- This is really dumb. It’s obvious from point x.y above that you guys don’t know what you’re doing. What you should do is . . .
- You didn’t italicize *Mathematica* at all in this post so I stopped reading.
- I thought you said you were trying to do as much as you can using python these days. What gives?
- What makes you think a neural network can actually solve this problem?
- I don’t understand point x.y above. Please explain it better so I can get some sleep.
- My band’s name is “7th harmonic” and we’re suing you because you said we sound like crap


I especially like how he stuck with it over several years! I liked the explanation a lot about why the propellers take on such weird shapes, but I didn’t think much about the mathematical structure of them.

But then I saw this page and got really interested. Ok, I admitted to the world that I was stumped.

So then I decided to dig in to figure out why the simple Mathematica command:

ContourPlot[Sqrt[x^2+y^2]==Cos[5 ArcTan[x,y]+17y], {x,-1,1},{y,-1,1}]

gives the correct form for a simple mathematically-based propeller. At first I thought that maybe it was just similar enough to the image the original poster wanted but then I made this gif and realized that it was dead on:

(Click through and you’ll see a bunch of other examples that I slapped together)

So why does that simple statement (ContourPlot) do the trick? Well, what do we need to figure out? We need to find the locations on the plane where the black rolling shutter line intersects with the blue propeller function. So let’s see if we can express that mathematically:

The shutter is a horizontal line sweeping up the frame:

y_shutter(t) = v t − a

where v is the vertical speed of the shutter and a is the maximum radial extent of the propeller (so the sweep starts at the bottom of the plot).

The propeller itself is a rotating polar rose:

r(θ, t) = a cos(5 (θ + ω t))

where ω is the angular rotation speed of the propeller. This gives you the distance from the origin to the edge of the propeller for a given angle, θ. Expressed as a function of x and y you just need θ = arctan(y/x), or better yet Mathematica’s ArcTan[x, y] function that can work on the whole plane.

So what we’re looking for are locations on the plane whose distance to the origin matches the propeller’s radial extent at that angle when the rolling shutter is there, or:

Sqrt[x^2 + y^2] == a Cos[5 ArcTan[x, y] + (5 ω/v) y]

where I’ve solved the shutter equation for t and substituted it into the propeller equation (the leftover constant phase just sets the propeller’s starting orientation, so I’ve dropped it). With a = 1 and 5 ω/v = 17, that’s exactly the ContourPlot command above.

Aha! So we just need to find points on the plane where that equality holds. But that’s what ContourPlot is really good at doing! Really all it does is make a big grid on the plane, check all the points, and if it finds points where that equality is close it zooms in and makes a smaller grid until it finds points that are close enough. That process repeats MaxRecursion number of times (I think the default is 2). The suggestion on the StackExchange post is to set PlotPoints->100 so that the initial grid is fine enough. If I do that but set MaxRecursion to zero it looks pretty jaggedy (not sure that’s a word).
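That grid-and-check idea is easy to mimic. Here’s a hedged pure-Python sketch (not the post’s code) of just the initial-grid pass for the propeller equation, keeping the grid cells where the difference of the two sides changes sign (Python’s atan2(y, x) plays the role of Mathematica’s ArcTan[x, y]):

```python
import math

def f(x, y):
    # difference of the two sides of the propeller equality
    return math.hypot(x, y) - math.cos(5 * math.atan2(y, x) + 17 * y)

def contour_cells(n=100):
    """Coarse first pass of a ContourPlot-style search: a cell whose corner
    values straddle zero must have the contour passing through it."""
    xs = [-1 + 2 * i / n for i in range(n + 1)]
    hits = []
    for i in range(n):
        for j in range(n):
            corners = (f(xs[i], xs[j]), f(xs[i + 1], xs[j]),
                       f(xs[i], xs[j + 1]), f(xs[i + 1], xs[j + 1]))
            if min(corners) <= 0.0 <= max(corners):
                hits.append((xs[i], xs[j]))
    return hits

cells = contour_cells()
```

MaxRecursion would then subdivide each hit cell and repeat; with MaxRecursion set to zero you only get these coarse cells, hence the jaggedy look.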

Yesterday when I was futzing around it took forever to make the movies. That’s because at every time step I was redoing that ContourPlot command but with a plot range only below the rolling shutter. It’s the ContourPlot that takes forever so I found a better way today. Now I do the ContourPlot command just once for the whole plane. Then I extract from it the points it finds and then I just use a Graphics command to plot the points that are below the rolling shutter for the movies. The whole process (including exporting the GIF) is about 30 seconds now compared to 5-10 minutes yesterday.

What’s fun is that you can set your propeller function to be anything. Here’s a couple examples:

So, I’m glad I put a little more time into this, it’s certainly been both fun and entertaining. I hope you’ve enjoyed it too.

Your thoughts? Here’s some starters for you:

- This is cool, do you mind sharing your code? (as usual it’s incredibly sloppy with almost no comments)
- None of these look like real life propellers, this sucks.
- Why didn’t you do this in python? (seriously, does the contour plotting in plotly or matplotlib work for this type of problem?)
- Can you try this function for a propeller: _____
- What happens if the shutter comes in from a different angle?
- If Smarter Every Day already explained all this, why did you bother at all?
- Instead of just giving the outline of the blades, can you fill it in?
- Why did you write “seriously, . . .” in that fake comment above? Aren’t these starters supposed to be strictly for us readers to use?

Then my brother was interested in a web page that would work on his phone and help him check in his bike shop customers. So I dug a little deeper into google apps script web apps. That’s what got this current fire really going (note: I made a page that’s driven by a simple spreadsheet that has items and price estimates in one sheet and work-order quotes in another. He calls up the page and sees checkboxes for every item in the first sheet. He checks whatever makes sense and hits submit. He’s then shown a cost estimate where he can add notes (customer name, etc.) and hit submit again to save it in the second sheet).

Ok, so here’s what I’m working on and wondering about: could I use google apps script web apps on some small-scale full-stack problems I’ve been working on? I do a lot of PHP/Laravel/MySQL/LAMP/Javascript/CSS full-stack programming, but it’s often overkill for a simple thing (like my brother’s problem). I do Laravel instead of python/Django, Ruby on Rails, or Meteor mostly because it’s easiest to get the sys admins at my institution to support PHP. Whatever, they all have basically the same functionality (and the same rabid fan bases). So I know how to do fully-functional database-driven web sites. That’s not my problem. Instead I’m interested in GAS web apps because they offer an intriguing list of opportunities:

- No server to set up. It’s just google
- baked in reliability etc

- Super easy authentication/authorization. It’s already built in to the google ecosystem
- The data layer looks and acts like a spreadsheet
- Note that google sheets are promoted as spreadsheets but they’re really quite powerful due to “query” (see below) and the interconnectedness with all the other google stuff
- End users are way more willing to engage with a data layer that looks like a spreadsheet than a mysql database. Take my brother, for example. I didn’t have to make a front-end script to allow him to change his price list. He’s perfectly happy to do that right in the spreadsheet

- emailing is easy.
- In Laravel, for example, you have to set up the right package, turn on SMTP stuff, and make sure you’re not pissing off your sys admins

- Single page apps
- I’m not actually sold on this, but I notice it in the PR sites I’ve been perusing. Basically you can just load one page and then interact with the server to change portions of it. I did a ton of this with my “myTurnNow” app that lets up to 100 people engage with each other without having to raise their hands. But it sure is easier to use old-fashioned “submit” buttons with multiple pages (yes, Meteor users, I know, I know, . . . shut up!)

So I decided to write this blog post not so much as a “how you do it” as “should I do it?” Most of those points above are interesting, but maybe I shouldn’t be so afraid to just fire up a fresh Laravel app and do even little stuff.

Here’s some downsides:

- It’s kind of slow. You’re having the script access a google drive doc and do stuff, and that access is seemingly what’s pretty slow. If you just do non-data-layer stuff it’s pretty quick, but the lag is noticeable so I thought I’d mention it.
- Really playing with the data almost always requires running google sheets formulas. You don’t have to do this. Most of the web sites suggest just sucking all the data in and dealing with it in javascript. I think that’s fine unless you think the data’s going to scale a little. If you google “use google apps script to run sheets formulas” you’ll see a few “impossibles” in your results, but don’t despair! You can do a very dumb sounding thing:
- Create a new sheet programmatically
- Set the top left cell of the new sheet to something like “=query(mycoolsheet!A:E, \”select max(A), B group by B\”)”
- Read any data on that sheet into javascript/GAS
- Delete the sheet

- Ok, yes, I know, that seems really dumb. But I’ve done it a bunch now and it works. It gives you access to the fantastically useful “query” formula and it dramatically reduces the amount of data you’d suck into javascript. Also, you don’t have to basically rewrite your favorite spreadsheet formulas in javascript.
- Weird urls: these are crazy looking but who cares (tinyurl exists, after all)
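For what it’s worth, those four query-trick steps fit in a few lines of Apps Script. This is a hedged sketch – the helper names and the 'tmp_query' sheet name are mine, but insertSheet, setFormula, getDataRange, and deleteSheet are the real SpreadsheetApp calls:

```javascript
// Build a sheets formula string like =query(mycoolsheet!A:E, "select ...").
// Doubling the quotes keeps the formula valid if the query contains any.
function buildQueryFormula(range, query) {
  return '=query(' + range + ', "' + query.replace(/"/g, '""') + '")';
}

// The "dumb sounding thing": make a sheet, run the formula, read it, delete it.
// Only runnable inside Google Apps Script, where SpreadsheetApp exists.
function runQuery(range, query) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var tmp = ss.insertSheet('tmp_query');                            // 1. new sheet
  tmp.getRange('A1').setFormula(buildQueryFormula(range, query));   // 2. run query
  var values = tmp.getDataRange().getValues();                      // 3. read results
  ss.deleteSheet(tmp);                                              // 4. clean up
  return values;
}
```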

My current project: I need to write a bunch of reviews for a bunch of folks (I am in the dean’s office these days, after all). I want to be able to access both the formal stuff I’ve written and any notes that I have for this year and all years for every person I’m reviewing. I could do this in a heartbeat (ok, a day) in Laravel, but then I’m the forever owner, even when I’m out of the dean’s office. Doing this in GAS seemed like a fun project and, if it’s successful, I can just transfer ownership to someone else.

I’ve got it working, after lots of fits and starts, and now I’m writing to you, dear reader, to find out if it’s worth exploring more and putting this tool in my tool chest.

So what do you think? Here’s some starters for you:

- Laravel sucks. If you’re not doing Ruby you’re just dumb
- This is really interesting. What’s the learning curve like?
- Laravel sucks. If you’re not doing Django you’re just dumb.
- I really like the ______ aspect of this. Do you think that you could also _______?
- Laravel sucks. If you’re not using Meteor you’re just dumb.
- Tell me more about myTurnNow, that sounds really useful
- Laravel sucks. If you’re not using carrier pigeons you’re just dumb.
- I’ve used GAS and have come to the conclusion that . . .
- You never explicitly said that GAS was google apps script so I stopped reading. You suck.