Principals have been realizing lately how hard it is to get their teachers to do good work. Too many of them have been spending time talking to each other to find better ways to teach. That takes away from the time that they could be interacting with their students. Now a few schools around the country have started to use a new approach: locking each teacher away from all others.

When the teachers get to school, they can be seen smiling and chatting with their friends in the parking lot. But when they step inside, they’re met with a phalanx of administrators just inside the door who put all the teachers in bags and cart them off to their classrooms, where they’re locked in. At the end of the day they are brought back out, and only then are they able to interact with their co-workers.

“It really sucks because it’s usually so great to get good ideas from people in similar situations that I can incorporate into my teaching,” said one teacher, clearly pining for her friends. Another added, “I can’t believe that they’re painting us all with the same brush, assuming that all we want to do is talk with our friends instead of teaching. That’s not fair. I love to teach!”

But the principals have come to understand that any sort of access to professional development that might bring in new ideas can only take away from the tried and true approaches to working with students. While their teachers are complaining, the principals are sure this is the right approach.

Some brave teachers have pointed out that the lack of access to their professional coworkers has changed their behavior. Now if they can’t figure out how to work with a student, they just keep trying other things that they come up with off the top of their head. “Before I would find a friend at lunch to brainstorm ways to help, but now I have way more time to really focus on the problem.”

The bag and lock technology is being pioneered by BeAlone, whose founder realized just how bad things were when he asked how his daughter was doing in school and only got a response after the teacher checked in with all his daughter’s teachers. “I couldn’t believe how long that took! I just wanted to know if she was getting an A+, not whether she was developing lifelong learning strategies.” The company’s clients also include comedians and musicians who like to make sure their audience is only listening to them. The cost is $200 per bag or schools can rent them for $30 per teacher for the school year.

Your thoughts? Here are some starters for you:

- That’s weird, I listened to NPR on the way home today and I didn’t hear that story.
- Wait, I feel like you’re being sarcastic, but I can’t put my finger on it.
- You forgot about my favorite part: …
- This is dumb. Principals who do this just don’t realize how creative teachers can be when they can work together.
- This is great. I think we should seek to have this in all schools!

My first thought was to see if there were ways to simplify the differential equation solving approach. What John was doing was the full 3D version assuming the sun’s mass stays significantly above the mass of the planet. So he’s already doing some simplifications because he’s not bothering with the center of mass frame. I think he’s right to do that because after a page or so of notes, I’ve realized that the center of mass frame gets pretty ugly when one of the participants is losing mass.

So what else can be simplified? The great thing about central force problems is that you can reduce it all the way down from six variables to one (in fact that’s one of my standards when I teach Theoretical Mechanics):

- Each of the two masses has 3 variables (x, y, and z) so you start with six.
- The center of mass approach lets you model the problem as a fictitious mass (μ = m₁m₂/(m₁+m₂)) that’s the same distance from a fictitious force center as the two actual masses are from each other. Now you’re down to three.
- If it’s a central force, the angular momentum is conserved. That means the fictitious particle has to stay in a plane. Now you’re down to two variables.
- If the angular momentum is conserved, you can treat the rotational part of the kinetic energy as an effective potential energy, leaving only the radius variable. Now you’re down to one.

You can model the complex 6-dimensional problem as a single (reduced) mass experiencing an effective potential energy given by:

U_eff(r) = U(r) + L²/(2μr²)

where μ is the reduced mass, L is the conserved angular momentum, and U is the potential energy that is only a function of r. To get the force this fictitious one-dimensional particle feels, you just need to take a derivative (and add a negative sign).
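
Since I don’t have John’s notebook, here’s a minimal Python sketch of that one-dimensional version (scipy instead of Mathematica; the decay law, units, and parameter values are all my own invention): integrate the radial equation r'' = −G·M(t)/r² + L²/r³ (per unit planet mass) with a slowly shrinking sun.

```python
# Minimal sketch of the reduced 1-D radial problem with a slowly shrinking sun.
# Not the author's code: units, decay law, and parameters are all invented.
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0            # units chosen so everything stays O(1)
M0 = 1.0           # initial solar mass
tau = 2000.0       # mass-loss timescale, much longer than one orbit (~2*pi)

def M(t):
    return M0 * np.exp(-t / tau)    # assumed exponential decay law

r0 = 1.0
L = np.sqrt(G * M0 * r0)            # angular momentum of an initially circular orbit

def rhs(t, y):
    r, rdot = y
    # effective 1-D force: gravity plus the centrifugal (angular momentum) term
    return [rdot, -G * M(t) / r**2 + L**2 / r**3]

sol = solve_ivp(rhs, (0.0, tau), [r0, 0.0], rtol=1e-9, atol=1e-12)
# for slow (adiabatic) mass loss the orbit stays nearly circular with
# r ~ 1/M(t), so by t = tau the radius should have grown by roughly e
```

The nice thing about the 1-D version is that the blow-up in cost comes only from how many orbits you want, not from tracking all six coordinates.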

So I gave it a try. The first thing I did was try to see how far into the future I could integrate using Mathematica. It turns out I could go quite a ways! Here’s a plot of the radius as a function of time.

As you can see, I was able to go out several billion seconds of integration time. This turned out to be around a billion “years” given the simple parameters I chose. If the radius grows, we expect the years to take longer. Here’s a plot of the instantaneous “year time” over the simulation:

as expected!

So it seems that a very slow loss of the sun’s mass would just slowly increase both the circle radius and the year time of the planet.
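
That joint growth is consistent with Kepler’s third law. Here’s a toy Python check (my numbers, nothing from the actual simulation) of what happens to the year length if the sun loses half its mass and, for constant angular momentum, the circular radius doubles:

```python
# Kepler's third law for a circular orbit: T = 2*pi*sqrt(r^3 / (G*M)).
# Illustrative numbers only (G = 1 units).
import math

G = 1.0

def year_length(r, M):
    return 2 * math.pi * math.sqrt(r**3 / (G * M))

T0 = year_length(1.0, 1.0)     # original year
T1 = year_length(2.0, 0.5)     # sun at half mass, radius doubled (r ~ 1/M)
# the year gets sqrt(2**3 / 0.5) = 4 times longer
```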

Another question that John asked was whether there might be an analytical solution to this. I quickly tried DSolve instead of NDSolve in Mathematica and got no joy (I wasn’t overly hopeful). I did ask a good friend of mine who’s a real expert in differential equations whether he knew of a particular decay function I could use that might have an analytical solution. He couldn’t think of one, but did point out that if you had the mass changes be discrete you could “easily” build up the solution since in between the mass changes you’ve got a simple inverse square law orbit that does have an analytical solution.

What he means is that if you start with, say, a circular orbit, you can predict exactly where the planet will be and what direction it’s traveling (and what speed) when the first mass change happens. When it does, you now have initial conditions for a slightly different inverse square law problem. Because the sun has lost mass, it doesn’t pull as hard as before, so the circular orbit becomes an ellipse. That ellipse is fully analytical and you can figure out everything you need to know about the planet at the next mass drop. Repeat this to your heart’s content and you’ve got a piecewise analytical solution.
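
To make one step of that concrete, here’s a small Python illustration (my own, with invented numbers): at a discrete mass drop, the instantaneous position and speed become initial conditions for a new Kepler orbit, and the vis-viva relation hands you the new semi-major axis.

```python
# Vis-viva: v^2 = G*M*(2/r - 1/a)  =>  a = 1 / (2/r - v^2/(G*M)).
# One discrete 10% mass drop applied to an initially circular orbit.
import math

G = 1.0

def semi_major_axis(r, v, M):
    return 1.0 / (2.0 / r - v**2 / (G * M))

r, M = 1.0, 1.0
v = math.sqrt(G * M / r)                  # circular speed, so a == r to start

a_before = semi_major_axis(r, v, M)       # 1.0
a_after = semi_major_axis(r, v, 0.9 * M)  # same r and v, weaker sun
# a_after = 1/(2 - 1/0.9) = 1.125: the circle becomes a larger ellipse
```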

This sounded intriguing, but I started to wonder what the physical differences would be between the two approaches. I figured I could check on a relatively small time scale and look for differences. So I coded up both a continuous and a discrete mass loss model, where the two agree with each other right after each discrete mass loss. Here’s a plot of both mass loss functions:

Here’s the animated result (the blue dots mark the points of the mass change for the discrete model):

As you can see, there’s a pretty noticeable difference in the orbits. Admittedly this is only because the mass jumps are pretty big, but it still makes me nervous.

Here’s a plot of the radius function for both:

Note how the continuous one seems to never get any closer to the sun, while the blue one clearly shows more elliptical motion (it gets closer to and further from the sun during every orbit). To see that more clearly, here’s a plot of r'[t], the rate of change of the radius:

It sure looks like the orange line doesn’t go negative (indicating the planet never gets closer to the sun). Here’s a zoom in of the orange one to see it better:

Yep, never negative!

Your thoughts? Here are some starters for you:

- This is cool, but I’d like to hear more about . . .
- This is dumb. Why didn’t you talk about this instead: . . .
- It looks like you were doing a Hamiltonian approach in your notes. I thought you hated the Hamiltonian approach!
- You do know that the perturbations due to Jupiter alone would totally wash out these small effects, right?
- As soon as I saw that you don’t bother to label your axes, I stopped reading. Thanks for saving me some time.
- I thought you said you were going to try to do everything in python from now on? Liar!
- Why didn’t you set up your constants so that a year takes a year? Seems obvious to me.
- Can you share your code?
- How well do your students do on the 6->3->2->1 standard?
- In the piecewise analytical approach, could you look at the solution when the gap time between mass changes goes to the limit of zero?

I asked him if he thought there was an infinite number of cells in the human body. That launched us into talking about all of these:

- Air molecules on earth
- houses
- homes
- books
- gallons of milk
- hairs on your head
- cups in the world
- heaps of sand

Some were easy: houses, gallons of milk, cups. Some got us really talking, especially “books” as we started to interpret those as fiction books.

Here are some of the thoughts that occurred to us as we argued around the table:

- If you don’t know where the end is, you can’t say you’re halfway done.
- Once it’s done, there’s a halfway point if you count pages or words, but half a story or half a plot is harder.
- We talked a lot about how the Harry Potter books cram a lot into the last 100 pages or so, for example

- If you have a heap of sand and take a grain out, it’s still a heap. If you repeat, at some point it’s no longer a heap, but it’s never a fractional heap.
- So maybe integers are used for things that can’t be split up? If you can split them up, you should use reals or decimals or rationals or something.

- My partner is a writer and she talks about how many of her writing friends are heavy outliners. They know where the half-way point of their story is.
- Houses are measured with real numbers, but homes are like heaps: they’re a home until they’re not. Half a home doesn’t make sense.
- Human cells are interesting. They “divide” to reproduce, but my argument was that right up until it actually splits, it’s one cell, and once it splits, it’s two.

I’ve been thinking about this all day. I’m coming around to the notion that we often say something is integral (or is counted by integers) when really we should use real numbers and admit that it just works out that they’re often things like 2.00000 . . . etc (like houses, or gallons of milk, or cups, but not homes, books (maybe?), and air molecules). I think we use rational things (fractions) when maybe we shouldn’t. Maybe when someone says they’re halfway done with a story they’re really saying they are still at zero stories but will soon be at 1 story. They might be measuring time, or words, or pages, but that’s a proxy, using things that can’t be measured with integers.

One interesting thing was the different approaches of my kids. L was interested but admitted he was confused at times (now we’re a little nervous about tomorrow’s test – I joked that I should send this post to his teacher). C (10th grade) really felt that if you couldn’t clearly see the end of something, figuring out fractions didn’t make sense. A half gallon makes sense because we know what a full gallon looks like, but a half story is tough to make sense of. B (12th grade) felt that you can convince yourself that you have less than 1 of lots of things (like books), but even if you can’t figure out what the fraction actually is, if there’s a way to think about it being less than one, you can’t say it’s described by integers. Mostly that argument was on the book side, not air molecules or hairs on your head.

Overall it was a fun conversation. I love seeing the #tmwyk hashtag on twitter (talk math with your kids) but it’s often hit or miss with my own kids. This was fun mostly (I think) because I was really trying to wrap my own brain around it, and not just trying to teach them something.

So what do you think? Here are some starters for you:

- This is great. I think another great thing to talk about to see if it’s integral is . . .
- Why don’t you use the word “quantized” for this? What, are you scared of physics or something?
- This is dumb, everything is countable and split-able. I can’t believe I even read half of this post.
- #tmwyk can work great even if you’re “just” teaching them something, here’s 7.5 examples . . .
- What did you have for breakfast?
- I’m a fiction author and I’m really bothered by what you say. I often take 3/4 of one book and put it together with 1/4 of another to get a new book I can publish.

If you click through you’ll see lots of great ideas. I’m not sure what the right answer is, so feel free to weigh in below in the comments.

What actually made me decide to blog about it was that I realized that I asked the wrong question. I really wanted to know what would cause the repetitive pattern, so I think really I was thinking about what would cause the frequency of the wave.

Now, I think everyone who replied on twitter recognized one of the fundamental relationships about waves when answering my question:

v = f·λ

and really just jumped to physical descriptions of what might cause that frequency. In other words, they realized that the car was moving and basically leaving behind a trail of snow blasts at a particular frequency. Spatially that all works together to leave a record with a measurable wavelength.
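
As a toy version of that spatial record (my numbers, nothing from the actual commercial): a car leaving blasts at frequency f while moving at speed v spaces them exactly one wavelength apart on the ground.

```python
# lambda = v / f: blasts left behind at frequency f by a car moving at
# speed v end up spaced one wavelength apart. Invented numbers.
v = 20.0             # car speed in m/s (assumed)
f = 8.0              # snow blasts per second (assumed)
wavelength = v / f   # 2.5 m between blasts
```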

As I thought about both my question and the answers throughout the day, it hit me that it’s one of those things that might lose students, especially early on before they’ve really internalized the relationship above. If you ask students to engage with the image or even the Hyundai commercial it comes from, they’ll engage and come up with all kinds of interesting questions, it seems to me. But if you ask about the wavelength like I did, it might shut them down, because then they’re not going with their gut and instead are trying to remember the relationship between wavelength and frequency (or possibly period).

I guess what I’m saying is that I knew my audience and I figured I could ask the question any way I wanted to. And it worked! But as I think about using this in class, I think I would have to be more careful. I think that’s a cautionary tale for me. It reminds me of times I’ll ask about something I think they’ll have experience with, or maybe some cool insights about, but I’ll ask it using vocabulary that’s still too new for them. I think instead I should just show them something and ask “what do you see?” or “what do you think is going on here?” or “Is there anything interesting going on?”

Your thoughts? Here are some starters for you:

- This is interesting. It reminds me of . . .
- This is really dumb. What you should have asked instead was . . .
- This is really cool. I think I’m going to buy a Hyundai now.
- This is really a waste of my time. I already have a car.
- Why didn’t you post a link to the video instead of a crappy screen grab you clearly took while pausing the tv during a really exciting Manchester Derby?
- Here’s a better question to ask students about this pic . . .
- I was the driver in this commercial and here’s what actually caused that . . .
- I was the camera person in this commercial and here’s why the driver really doesn’t understand physics.
- Here’s my crazy explanation for that snow pattern.
- It’s not a wave, you should stop saying that.

In my role as director of the first year seminar I was involved in some of the planning (I need to be clear here and heap praise on the C2C team – they did all the work and deserve all the credit for the great day). Specifically I was involved with planning how the overflow room should work. We hold the event in a neighboring church that can only seat something like 500. We like to have a satellite location that can simulcast the event. In early planning, I expressed how it would be interesting to do something different in that room. I thought it would be great to brainstorm activities people could do, while listening, that could raise the engagement of the audience. What we decided on was to crowd-source the prioritization of the questions we’d ask.

We thought it would be great to encourage the audience (only in the satellite room) to use internet-connected devices to submit and vote on potential questions for the speaker.

We picked the Q&A feature of Google Slides to do this. We made a simple one-page Google Slides document and turned on Q&A when the event started. We made sure the url was clearly displayed in the room.

I invited the first year seminar faculty to bring their classes, with a limit of 3 classes, and talked to a few other faculty about it as well.

We told people that the satellite room’s top three questions would be the first three asked in the church, since I promised to text our questions to a plant (from the C2C committee) in the church.

Only two faculty brought their first year seminars (the rest went to the church). When I asked people how they made that decision I heard lots of interesting things:

- “I really want my students to *be there* to hear Kemba”
- “I’m not sure my students will have the focus you’re looking for”
- “Sounds cool but I really want to be in the church”
- “I’d love to because I’m always squished in the church”
- “That’s an interesting experiment”

In addition, a few other faculty and students came. Altogether we had over 80 people there.

We passed out cards explaining what we were doing, because I figured if anyone came late I wouldn’t be able to explain it myself. Many were there early and we verified the technology worked with everyone’s cell phone.

We only had a handful of submitted questions, the highest rated of which only got six votes.

I submitted the questions a little early (we had a 2-minute delay that I didn’t want to miss). The question ranking changed a little after I submitted them, but the top three remained the top three. In the church all our questions were asked, but not all at once at the beginning of the Q&A session.

I was a little disappointed at the lack of engagement with the technology, but quite happy with the respectful and attentive attitude in the room. I’ve spoken with some about why there weren’t so many questions submitted and a few suggested that a lot of Kemba’s presentation was personal narrative, and that’s sometimes hard to question.

I think our questions were good. They certainly weren’t the horror stories you sometimes see at Q&A sessions for big speakers. You know what I’m talking about:

- “Thank you for your talk. I agree that _____ and let me tell you my whole life story before getting to my actual question.”
- “I came in late, could you please say everything you said at the beginning again?”
- “I have told you before that I disagree with you about point ____ and I’m going to walk you through every conversation we’ve ever had right now.”
- “Do you know ____ who says the same stuff as you but better?”

I had a question voted down. What’s fascinating about that is my emotional reaction. I would have thought I’d be disappointed about that. But it was interesting that I was relieved! I realized that I might have asked it if there were no crowd-sourcing and I might only hear after the event how dumb a question it was. In this case I don’t think people thought it was dumb, but they clearly thought other questions were more worth their time, and I think that’s great!

One interesting feature of this experiment was that the speaker couldn’t see how the voting and “leader board” evolved during her presentation. I think that’s likely a good thing, as it can be very distracting. In our implementation I did not project the leader board, but it was on everyone’s phone.

I think I’d like to do a little more experimentation with this. I think it could help with student engagement and I think it could really make the Q&A sessions more worthwhile.

Your thoughts? Here are some starters for you:

- This is cool! You could also think about doing . . .
- This is dumb! Instead you should have . . .
- I thought you used to love Google Moderator, why didn’t you use that?
- I think you didn’t get too many submitted questions because . . .
- I think you didn’t get too many votes because . . .
- I’m personally hurt by your examples of horror shows in Q&A sessions. I love all of those examples you describe!
- Here’s another to add to your horror show list . . .

Our continuing goal is to produce a 2D surface (drum) that has resonant frequencies that are harmonic. The parameter space to search in is huge (infinite?) but ultimately we’d love to find a shape (that we could 3D print!) that would sound cool. If we could find it, we could print several different sizes to have a harmonic instrument.

For most instrument designs, if you name the lowest/dominant frequency that you want, you can usually pretty easily find the physical parameters you need to achieve that. Take a simple stringed instrument as an example. The three variables that matter are the length of the string (L), the tension in the string (T), and the linear mass density of the string (μ):

f₁ = √(T/μ) / (2L)
So it’s pretty easy to find a string that sounds right. After that the beauty of a string is that all the other resonances are simple multiples of that fundamental frequency, so all of them sound good (except the 7th harmonic, that sounds like crap – see the placement of pickups on electric guitars that try to kill that one).

The problem with drums is that most of the time the resonances don’t have such an easy integer ratio relationship. That’s why drums aren’t usually considered harmonic instruments.
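
For the classic case of an ideal circular membrane you can see that inharmonicity directly: the mode frequencies scale with zeros of Bessel functions, which are not integer multiples of the fundamental. A quick scipy check (my aside, not part of our pipeline):

```python
# Mode frequencies of an ideal circular drum scale with the zeros of the
# Bessel functions J_m; compare their ratios to a string's 1, 2, 3, ...
from scipy.special import jn_zeros

f0 = jn_zeros(0, 1)[0]                       # lowest mode ~ first zero of J_0
ratios = [jn_zeros(m, 1)[0] / f0 for m in range(3)]
# ratios come out near 1, 1.59, 2.14 -- nothing like integer harmonics
```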

So, in our case we’re hunting for a shape that has some interesting resonances. This post is trying to get some help from you fine folks on how to use a neural network to do that.

Here’s what we’d love: name a set of frequencies we’d like a drum to have and determine the shape that would do it. We’ve tried some other approaches, but here I’m trying to get some help on how to design a neural network to do it. Here’s our set-up (nearly all points welcome your challenges!):

- We are somewhat convinced that the resonances for a polygon shaped drum are pretty close to the resonances of a smoothed out shape that would hit the same points as the polygon (this allows speed on our end to generate the frequencies for a given drum – ie the opposite of what we’re looking for).
- We make a training set by setting the order of the polygon (n) and then repeating:
- generate n points in the plane
- Find the shortest tour of visiting them to give us a region that doesn’t cross itself
- Make the region (in Mathematica: BoundaryMeshRegion[pts, Line[FindShortestTour[pts][[2]]]])
- Find the lowest 3 eigenfrequencies (in Mathematica: NDEigenvalues . . .)
- Have the new trainer be {f1, f2, f3} -> coordinates of polygon (note I say more about this point below)

- We make a neural network that takes 3 inputs and matches the number of coordinates for the polygon for the outputs.
- Mathematica allows us to quickly set up such a network and to go crazy with the number of nodes in each layer and how many layers. Here’s the syntax for a single hidden layer with 10 nodes, Sigmoid-based NN:
- NetTrain[NetChain[{10, LogisticSigmoid, 7}], trainingset]
- NetTrain looks at the training set to get the input layer size, but you have to put in the output size. The 7 there is for a pentagon shape (see below).

- We’ve tried 7 hidden layers with 100 nodes each, along with all kinds of different shapes and sizes. **We’d love some ideas here.**
- Beyond a sigmoid nonlinearity, Mathematica lets you do all kinds of things like hyperbolic tangent and ramp.

For n=5 (pentagons) I originally thought to try 5 ordered pairs for the coordinates of the pentagon. I realized, though, that there’s lots of redundancy built into that. For example, rotating a region or translating it doesn’t change the resonant frequencies. So instead, for the moment, I’m trying 4 lengths and 3 turning angles (because assuming the 5th link goes back to the first point – which I set at the origin – is enough) or 7 pieces of information. For triangles I use two lengths and one angle, which is also enough. I figure that savings should be useful.
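
Here’s a numpy sketch (my own stand-in, not our Mathematica code) of that reduced parameterization for a pentagon, plus a check that it really is rotation- and translation-invariant:

```python
# Encode an ordered pentagon as 4 side lengths + 3 turning angles (7 numbers)
# instead of 5 (x, y) pairs, killing the rotation/translation redundancy.
import numpy as np

def polygon_to_params(pts):
    pts = np.asarray(pts, dtype=float)
    edges = np.diff(pts, axis=0)                    # the first 4 edge vectors
    lengths = np.linalg.norm(edges, axis=1)         # 4 side lengths
    headings = np.arctan2(edges[:, 1], edges[:, 0])
    turns = np.diff(headings)                       # 3 turning angles...
    turns = (turns + np.pi) % (2 * np.pi) - np.pi   # ...wrapped into [-pi, pi)
    return np.concatenate([lengths, turns])

# regular pentagon on the unit circle
th = np.pi / 2 + 2 * np.pi * np.arange(5) / 5
pentagon = np.c_[np.cos(th), np.sin(th)]
params = polygon_to_params(pentagon)

# rotating and translating the pentagon should leave all 7 numbers unchanged
c, s = np.cos(0.7), np.sin(0.7)
moved = pentagon @ np.array([[c, -s], [s, c]]).T + np.array([3.0, -2.0])
```

The same trick with two lengths and one angle covers the triangle case.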

Unfortunately, even after training for tens of thousands of rounds with training sets containing tens of thousands of trainers, we’re not making much progress. Hence this post.

So, can you help? We’d love some challenges to our assumptions/approaches listed above. We’d also love to hear some good ideas for neural network structures to try. Luckily doing it in Mathematica is pretty easy, but if you’ve got a system you’d like to try we’re happy to provide the training set.

Some starters for you:

- This is cool! I think point x.y above can be improved and here’s how . . .
- This is really dumb. It’s obvious from point x.y above that you guys don’t know what you’re doing. What you should do is . . .
- You didn’t italicize *Mathematica* at all in this post so I stopped reading.
- I thought you said you were trying to do as much as you can using python these days. What gives?
- What makes you think a neural network can actually solve this problem?
- I don’t understand point x.y above. Please explain it better so I can get some sleep.
- My band’s name is “7th harmonic” and we’re suing you because you said we sound like crap

I especially like how he stuck with it over several years! I liked the explanation a lot about why the propellers take on such weird shapes, but I didn’t think much about the mathematical structure of them.

But then I saw this page and got really interested. Ok, I admitted to the world that I was stumped.

So then I decided to dig in to figure out why the simple Mathematica command:

ContourPlot[Sqrt[x^2+y^2]==Cos[5 ArcTan[x,y]+17y], {x,-1,1},{y,-1,1}]

gives the correct form for a simple mathematically-based propeller. At first I thought that maybe it was just similar enough to the image the original poster wanted but then I made this gif and realized that it was dead on:

(Click through and you’ll see a bunch of other examples that I slapped together)

So why does that simple statement (ContourPlot) do the trick? Well, what do we need to figure out? We need to find the locations on the plane where the black rolling shutter line intersects with the blue propeller function. So let’s see if we can express that mathematically. The shutter is a horizontal line sweeping the frame:

y_shutter(t) = a − v·t

where v is the vertical speed of the shutter and a is the maximum radial extent of the propeller. The propeller edge is a rotating multi-lobed curve:

r(θ, t) = a·cos(n·θ − ω·t)

where ω is the angular rotation speed of the propeller. This gives you the distance from the origin to the edge of the propeller for a given angle, θ. Expressed as a function of x and y you just need θ = tan⁻¹(y/x), or better yet Mathematica’s ArcTan[x,y] function that can work on the whole plane.

So what we’re looking for are locations on the plane whose distance to the origin matches the propeller’s radial extent at that angle when the rolling shutter is there, or:

√(x² + y²) = a·cos(n·ArcTan[x,y] − (ω/v)·(a − y))

where I’ve solved the equation for the y-position of the shutter for t, giving t = (a − y)/v. With a = 1, n = 5, and ω/v = 17, this matches the ContourPlot statement above up to a constant phase (which just sets where the blades start).

Aha! So we just need to find points on the plane where that equality holds. But that’s what ContourPlot is really good at doing! Really all it does is make a big grid on the plane, check all the points, and if it finds points where that equality is close it zooms in and makes a smaller grid until it finds points that are close enough. That process repeats MaxRecursion number of times (I think the default is 2). The suggestion on the StackExchange post is to set PlotPoints->100 so that the initial grid is fine enough. If I do that but set MaxRecursion to zero it looks pretty jaggedy (not sure that’s a word).
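
That grid-and-check idea is easy to mimic outside Mathematica. Here’s a crude numpy version (my own sketch: no recursive refinement, an invented tolerance) that just keeps the grid points where the equality nearly holds:

```python
# Sample F(x, y) = sqrt(x^2 + y^2) - cos(5*atan2(y, x) + 17*y) on a grid and
# keep the near-zero points; those trace the rolling-shutter propeller outline.
import numpy as np

def propeller_points(n_grid=400, tol=0.01):
    x = np.linspace(-1, 1, n_grid)
    X, Y = np.meshgrid(x, x)
    F = np.sqrt(X**2 + Y**2) - np.cos(5 * np.arctan2(Y, X) + 17 * Y)
    mask = np.abs(F) < tol      # crude stand-in for ContourPlot's refinement
    return np.c_[X[mask], Y[mask]]

pts = propeller_points()
# every kept point lies essentially inside the unit disk,
# since the cosine caps the radius at 1
```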

Yesterday when I was futzing around it took forever to make the movies. That’s because at every time step I was redoing that ContourPlot command but with a plot range only below the rolling shutter. It’s the ContourPlot that takes forever so I found a better way today. Now I do the ContourPlot command just once for the whole plane. Then I extract from it the points it finds and then I just use a Graphics command to plot the points that are below the rolling shutter for the movies. The whole process (including exporting the GIF) is about 30 seconds now compared to 5-10 minutes yesterday.

What’s fun is that you can set your propeller function to be anything. Here’s a couple examples:

So, I’m glad I put a little more time into this, it’s certainly been both fun and entertaining. I hope you’ve enjoyed it too.

Your thoughts? Here’s some starters for you:

- This is cool, do you mind sharing your code? (as usual it’s incredibly sloppy with almost no comments)
- None of these look like real life propellers, this sucks.
- Why didn’t you do this in python? (seriously, does the contour plot package in plotly or matplotlib work for this type of problem?)
- Can you try this function for a propeller: _____
- What happens if the shutter comes in from a different angle?
- If Smarter Every Day already explained all this, why did you bother at all?
- Instead of just giving the outline of the blades, can you fill it in?
- Why did you write “seriously, . . .” in that fake comment above? Aren’t these starters supposed to be strictly for us readers to use?

Then my brother was interested in a web page that would work on his phone and help him check in his bike shop customers. So I dug a little deeper into Google Apps Script web apps. That’s what got this current fire really going. (Note: I made a page that’s driven by a simple spreadsheet that has items and price estimates in one sheet and work-order quotes in another sheet. He calls up the page and sees checkboxes for every item in that first sheet. He checks whatever makes sense and hits submit. He’s then shown a cost estimate where he can add notes (like customer name, etc.) and hit submit again to save it in the second sheet.)

Ok, so here’s what I’m working on and wondering about: could I use Google Apps Script web apps on some small-scale full stack problems I’ve been working on? I do a lot of PHP/Laravel/MySQL/LAMP/Javascript/CSS full stack programming, but it’s often overkill for a simple thing (like my brother’s problem). I do Laravel instead of python/Django, Ruby on Rails, or Meteor mostly because it’s easiest to get the sys admins at my institution to support PHP. Whatever, they all have basically the same functionality (and the same rabid fan bases). So I know how to do fully-functional database-driven web sites. That’s not my problem. Instead I’m interested in GAS web apps because they offer an intriguing list of opportunities:

- No server to set up. It’s just google
- baked in reliability etc

- Super easy authentication/authorization. It’s already built in to the google ecosystem
- The data layer looks and acts like a spreadsheet
- Note that google sheets are promoted as spreadsheets but they’re really quite powerful due to “query” (see below) and the interconnectedness with all the other google stuff
- End users are way more willing to engage with a data layer that looks like a spreadsheet than a mysql database. Take my brother, for example. I didn’t have to make a front-end script to allow him to change his price list. He’s perfectly happy to do that right in the spreadsheet

- emailing is easy.
- In Laravel, for example, you have to set up the right package, turn on SMTP stuff, and make sure you’re not pissing off your sys admins

- Single page apps
- I’m not actually sold on this, but I notice it in the PR sites I’ve been perusing. Basically you can just load one site and then interact with the server to change portions of the page. I did a ton of this with my “myTurnNow” app that lets up to 100 people engage with each other without having to raise their hands. But it sure is easier to use old fashioned “submit” buttons with multiple pages (yes, Meteor users, I know, I know, . . . shut up!)

So I decided to write this blog post not so much as a “how do you do it” but as a “should I do it?” Most of the points above are interesting, but maybe I shouldn’t be so afraid to just fire up a fresh Laravel app even for little stuff.

Here’s some downsides:

- It’s kind of slow. The script has to reach out to a google drive doc and do stuff, and that access seems to be the slow part. Non-data-layer stuff is pretty quick, but the lag is noticeable, so I thought I’d mention it.
- Really playing with the data almost always requires running google sheets formulas. You don’t have to do this: most of the web sites suggest just sucking all the data in and dealing with it in javascript. I think that’s fine unless you think the data’s going to scale a little. If you google “use google apps script to run sheets formulas” you’ll see a few “impossibles” in your results, but don’t despair! You can do a very dumb-sounding thing:
  - Create a new sheet programmatically
  - Set the top left cell of the new sheet to something like =query(mycoolsheet!A:E, “select max(A), B group by B”)
  - Read any data on that sheet into javascript/GAS
  - Delete the sheet

  Ok, yes, I know, that seems really dumb. But I’ve done it a bunch now and it works. It gives you access to the fantastically useful “query” formula, and it dramatically reduces the amount of data you’d suck into javascript. Also, you don’t have to basically rewrite your favorite spreadsheet formulas in javascript.
- Weird urls: they’re crazy looking, but who cares (tinyurl exists, after all).
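For the record, here’s roughly what that temp-sheet trick looks like in GAS-flavored javascript. This is just a sketch: the sheet name tmp_query is made up, I’m passing the spreadsheet object in as a parameter (in real GAS you’d grab it with SpreadsheetApp.getActiveSpreadsheet()), and in my experience you sometimes also need SpreadsheetApp.flush() before reading, to force the formula to evaluate.

```javascript
// Sketch of the temp-sheet "query" trick from the list above.
// insertSheet, getRange, setFormula, getDataRange, and deleteSheet are
// real Spreadsheet/Sheet methods; the sheet name tmp_query is made up.
function runQueryFormula(ss, formula) {
  const tmp = ss.insertSheet('tmp_query');       // 1. create a scratch sheet
  tmp.getRange(1, 1).setFormula(formula);        // 2. drop the =query(...) into A1
  const values = tmp.getDataRange().getValues(); // 3. read the spilled results back
  ss.deleteSheet(tmp);                           // 4. delete the scratch sheet
  return values;
}

// In GAS, usage would look something like:
// runQueryFormula(SpreadsheetApp.getActiveSpreadsheet(),
//                 '=query(mycoolsheet!A:E, "select max(A), B group by B")');
```

Four lines of real work, and you get the whole “query” language for free.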

My current project: I need to write a bunch of reviews for a bunch of folks (I am in the dean’s office these days, after all). I want to be able to access both the formal stuff I’ve written and any notes that I have for this year and all years for every person I’m reviewing. I could do this in a heartbeat (ok, a day) in Laravel, but then I’m the forever owner, even when I’m out of the dean’s office. Doing this in GAS seemed like a fun project and, if it’s successful, I can just transfer ownership to someone else.

I’ve got it working, after lots of fits and starts, and now I’m writing to you, dear reader, to find out if it’s worth exploring more and putting this tool in my tool chest.

So what do you think? Here’s some starters for you:

- Laravel sucks. If you’re not doing Ruby you’re just dumb
- This is really interesting. What’s the learning curve like?
- Laravel sucks. If you’re not doing Django you’re just dumb.
- I really like the ______ aspect of this. Do you think that you could also _______?
- Laravel sucks. If you’re not using Meteor you’re just dumb.
- Tell me more about myTurnNow, that sounds really useful
- Laravel sucks. If you’re not using carrier pigeons you’re just dumb.
- I’ve used GAS and have come to the conclusion that . . .
- You never explicitly said that GAS was google apps script so I stopped reading. You suck.

A few people chimed in on twitter, but mostly they suggested I try the experiment. Since it was my son’s birthday today, we happened to have some helium around. So we tried it. But before we did (and before I let you see the video . . . oh, you went down and watched the video and are now back up to this paragraph, I see how it goes) we thought about how we’d know if my whistle sounded different. So we tested each other to see if we could match pitches. We also used the piano to give random pitches. By the way, I’ve talked before about my ability to match a pitch while whistling.

So, here’s the experiment:

My take, especially at the end, is that it did affect my whistle.

So here’s my theory: the helium changes the speed of sound in my mouth cavity, which raises the resonant frequency of the effective Helmholtz resonator. Really that’s the same thing that happens when you talk with helium. There the helium doesn’t affect the vibration of your vocal folds/chords, since that’s decided by the tension in them, which is decided by you. However, that vibration contains lots of harmonics, and the helium in your oral chamber raises the resonances of the chamber, so higher harmonics of the vocal folds/chords are amplified, making you sound higher. Since those harmonics are roughly integer multiples of the fundamental, you typically sound an octave higher.

When you whistle, you aren’t choosing the tension of your vocal folds/chords. Instead you’re shaping your lips/mouth cavity from memory into a shape that produces a Helmholtz resonance at a particular frequency. When you have helium in your mouth, that resonant frequency goes up, and so does the pitch you hear.
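To put rough numbers on that: the speed of sound in an ideal gas is v = sqrt(γRT/M), and a Helmholtz resonator’s frequency scales linearly with v. The gas constants below are standard values; the pure-helium number is an upper bound, since a mouthful from a balloon is really a helium/air mix.

```javascript
// Speed of sound in an ideal gas: v = sqrt(gamma * R * T / M).
// A Helmholtz resonator's frequency is proportional to v, so the
// ratio of sound speeds is the factor by which the pitch could rise.
function speedOfSound(gamma, molarMass, tempK) {
  const R = 8.314; // gas constant, J/(mol K)
  return Math.sqrt(gamma * R * tempK / molarMass);
}

const T = 293; // roughly room temperature, K
const vAir = speedOfSound(1.40, 0.02897, T);   // ~343 m/s
const vHe  = speedOfSound(5 / 3, 0.004003, T); // ~1007 m/s

const ratio = vHe / vAir; // ~2.9, about an octave and a half up
console.log(vAir.toFixed(0), vHe.toFixed(0), ratio.toFixed(2));
```

So pure helium could shove the resonance up by a factor of almost three; a partial mouthful would land somewhere between 1 and that.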

So what do you think? Here’s some starters for you:

- Why do you keep saying vocal folds/chords?
- What about the other kind of whistle where some people use their fingers?
- How do we know you were trying just as hard to match the pitch when you claimed that it wasn’t right?
- I don’t think you understand how sound is generated at all in both whistles and talking. Here’s a tutorial for you.
- So you decided to pop your son’s balloon? On his birthday? What kind of monster are you?
- Why are you too lazy to check whether the ratio of the heard frequency to the expected frequency matches the ratio of sound speeds in helium and air?
- You wrote “raises you whistle” instead of “raises your whistle” in your tweet. How do you ever expect people to take you seriously?
- You posted this before the youtube video was ready. What, you think I have the time to come back when it’s ready? Thanks for nothing, loser.

So I got to wondering how much energy it had used to do that. I asked the question on twitter and got a lot of interesting answers involving mass/energy, carbon bond energy, etc.

I’m not sure what the answer is, so I thought I’d post about it to see what other answers I could get. I’m really interested in what happened over the weekend, and I have a hazy memory of seeing tiny green buds before the weekend. So really I figured the buds all existed and just had to be forced out. I figured something had to move a few centimeters with, say, a kg-weight worth of force. That’s just me guessing though.

Of course with that approach (and really all the approaches people were using on twitter), you have to have an estimate of how many blooms there are. So I decided to see if Mathematica could help me with that.

I imported the image and used the image mouse-over tools to find the color of the blooms. Then I:

- set all pixels that weren’t within a ColorDistance of 0.2 (found empirically) of that color to black,
- did a DistanceTransform to replace all pixel values with the distance to the nearest black pixel,
- used MaxDetect on that to find the centers of all of the blooms,
- used MorphologicalComponents and ComponentMeasurements to count them (just under 3000), and
- used HighlightImage to make this image:

I think it did a pretty good job of finding all the blooms that way.
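For the curious, the counting step at the end is basically a connected-components pass: once the image is thresholded, each bloom is just a blob of “on” pixels. Here’s a toy javascript version on a made-up 0/1 grid, standing in for what MorphologicalComponents/ComponentMeasurements did for me:

```javascript
// Count connected blobs of 1s in a binary grid using flood fill.
// This is the same idea as MorphologicalComponents followed by
// ComponentMeasurements, just on a tiny made-up example.
function countBlobs(grid) {
  const rows = grid.length, cols = grid[0].length;
  const seen = grid.map(row => row.map(() => false));
  let count = 0;
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (grid[r][c] === 1 && !seen[r][c]) {
        count++;
        // flood fill this blob so we don't count it twice
        const stack = [[r, c]];
        while (stack.length) {
          const [y, x] = stack.pop();
          if (y < 0 || y >= rows || x < 0 || x >= cols) continue;
          if (grid[y][x] !== 1 || seen[y][x]) continue;
          seen[y][x] = true;
          stack.push([y + 1, x], [y - 1, x], [y, x + 1], [y, x - 1]);
        }
      }
    }
  }
  return count;
}

// three separate "blooms" in a 4x6 grid
const grid = [
  [1, 1, 0, 0, 0, 1],
  [1, 0, 0, 0, 0, 1],
  [0, 0, 0, 0, 0, 0],
  [0, 1, 1, 0, 0, 0],
];
console.log(countBlobs(grid)); // 3
```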

So, with roughly 3000 blooms in a period of 3 days, how much energy did it take? I’m not sure but I hope you can help.
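Multiplying out my force-times-distance guess from above (remember, the kg-weight and few-centimeters numbers are pure guesses, not measurements):

```javascript
// Back-of-envelope work estimate: each bloom gets pushed out a few
// centimeters against roughly a kilogram-weight of force, times the
// ~3000 blooms Mathematica counted. All three inputs are guesses.
const blooms = 3000;
const force = 1 * 9.8;  // ~1 kg-weight, in newtons
const distance = 0.03;  // "a few centimeters", in meters

const energyPerBloom = force * distance;     // ~0.3 J per bloom
const totalEnergy = energyPerBloom * blooms; // ~900 J for the tree

console.log(totalEnergy.toFixed(0) + ' J');
```

That’s under a kilojoule, or roughly a fifth of a dietary Calorie, which makes me suspect the mechanical push is tiny compared to the chemistry involved.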

Here are some starters for you:

- What’s wrong with a mass/energy approach?
- Just cut some off, burn them, and see how much you raise the temperature of water. Then multiply!
- Why didn’t you use ImageJ for this? (here’s the link to the original image if you want to try)
- The way you calculated the number of blooms is dumb, a much better way is . . .
- Why do you care about the energy, I thought you said energy doesn’t exist?
- Why can’t you just enjoy the view instead of ruining it with all this science?
- On twitter you said that the tree sprouted. What are you, an idiot?
- If you’re asking about energy, why does the three days part matter?
- You didn’t say anything about seeing some green before the weekend on Twitter, thanks for having me waste my day.