Snow wave

Earlier today I posted this pic and asked a question about it on twitter:

[photo: 20171210_102417]

If you click through you’ll see lots of great ideas. I’m not sure what the right answer is, so feel free to weigh in below in the comments.

What actually made me decide to blog about it was that I realized I had asked the wrong question. What I really wanted to know was what would cause the repetitive pattern, so really I was after what sets the frequency of the wave.

Now, I think everyone who replied on twitter recognized one of the fundamental relationships about waves when answering my question:

\text{wavelength}=\frac{\text{speed}}{\text{frequency}}

and really just jumped to physical descriptions of what might cause that frequency. In other words, they realized that the car was moving and basically leaving behind a trail of snow blasts at a particular frequency. Spatially that all works together to leave a record with a measurable wavelength.
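
To make that concrete with made-up numbers: if the car is moving at 20 m/s and blasting snow 10 times per second, the trail left behind has

\text{wavelength}=\frac{20\text{ m/s}}{10\text{ Hz}}=2\text{ m}

between blasts.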

As I thought about both my question and the answers throughout the day, it hit me that it's one of those things that might lose students, especially early on before they've really internalized the relationship above. It seems to me that if you ask students to engage with the image, or even the Hyundai commercial it comes from, they'll come up with all kinds of interesting questions. But if you ask about the wavelength like I did, it might shut them down, because then they're not going with their gut and instead are trying to remember the relationship between wavelength and frequency (or possibly period).

I guess what I’m saying is that I knew my audience and I figured I could ask the question any way I wanted to. And it worked! But as I think about using this in class, I think I would have to be more careful. I think that’s a cautionary tale for me. It reminds me of times I’ll ask about something I think they’ll have experience with, or maybe some cool insights about, but I’ll ask it using vocabulary that’s still too new for them. I think instead I should just show them something and ask “what do you see?” or “what do you think is going on here?” or “Is there anything interesting going on?”

Your thoughts? Here are some starters for you:

  • This is interesting. It reminds me of . . .
  • This is really dumb. What you should have asked instead was . . .
  • This is really cool. I think I’m going to buy a Hyundai now.
  • This is really a waste of my time. I already have a car.
  • Why didn’t you post a link to the video instead of a crappy screen grab you clearly took while pausing the tv during a really exciting Manchester Derby?
  • Here’s a better question to ask students about this pic . . .
  • I was the driver in this commercial and here’s what actually caused that . . .
  • I was the camera person in this commercial and here’s why the driver really doesn’t understand physics.
  • Here’s my crazy explanation for that snow pattern.
  • It’s not a wave, you should stop saying that.

Posted in physics, teaching, Uncategorized | 7 Comments

Crowd-prioritized questions for speakers

This past week I tried an experiment during a major speaking engagement on my campus. This was our annual “Commitment to Community” address by the fabulous Kemba Smith. We had her on campus for a day and she interacted with our students in lots of ways, culminating in a major presentation to the campus in the evening.

In my role as director of the first year seminar I was involved in some of the planning (I need to be clear here and heap praise on the C2C team – they did all the work and deserve all the credit for the great day). Specifically, I helped plan how the overflow room should work. We hold the event in a neighboring church that can only seat something like 500, so we like to have a satellite location that can simulcast the event. In early planning, I suggested it would be interesting to do something different in that room: brainstorm activities people could do, while listening, that would raise the engagement of the audience. What we decided on was crowd-sourcing the prioritization of the questions we'd ask.

What we planned

We thought it would be great to encourage the audience (only in the satellite room) to use internet-connected devices to submit and vote on potential questions for the speaker.

We picked the Q&A feature of Google Slides to do this. We made a simple one-page Google Slides document and turned on Q&A when the event started. We made sure the url was clearly displayed in the room.

I invited the first year seminar faculty to bring their classes, with a limit of 3 classes, and talked to a few other faculty about it as well.

We told people that our top three questions would be the first three asked in the church, since I promised to text them to a plant (from the C2C committee) there.

What happened

Only two faculty brought their first year seminars (the rest went to the church). When I asked people how they made that decision I heard lots of interesting things:

  • “I really want my students to be there to hear Kemba”
  • “I’m not sure my students will have the focus you’re looking for”
  • “Sounds cool but I really want to be in the church”
  • “I’d love to because I’m always squished in the church”
  • “That’s an interesting experiment”

In addition, a few other faculty and students came. Altogether we had over 80 people there.

We passed out cards explaining what we were doing, since I figured I wouldn't be able to explain it myself to anyone who came in late. Many people were there early, and we verified the technology worked on everyone's cell phone.

We only had a handful of submitted questions, the highest rated of which only got six votes.

I submitted the questions a little early (our simulcast ran on a two-minute delay, and I didn't want the questions to arrive after the Q&A had started). The question ranking changed a little after I submitted them, but the top three remained the top three. In the church all our questions were asked, though not all at once at the beginning of the Q&A session.

Analysis

I was a little disappointed by the lack of engagement with the technology, but quite happy with the respectful and attentive attitude in the room. I've spoken with a few people about why so few questions were submitted, and some suggested that a lot of Kemba's presentation was personal narrative, which is sometimes hard to question.

I think our questions were good. They certainly weren’t the horror stories you sometimes see at Q&A sessions for big speakers. You know what I’m talking about:

  • “Thank you for your talk. I agree that _____ and let me tell you my whole life story before getting to my actual question.”
  • “I came in late, could you please say everything you said at the beginning again?”
  • “I have told you before that I disagree with you about point ____ and I’m going to walk you through every conversation we’ve ever had right now.”
  • “Do you know ____ who says the same stuff as you but better?”

I had a question voted down. What's fascinating about that is my emotional reaction. I would have expected to be disappointed, but instead I was relieved! I realized that without the crowd-sourcing I might have asked it anyway and only heard after the event how dumb a question it was. In this case I don't think people thought it was dumb, but they clearly thought other questions were more worth their time, and I think that's great!

One interesting feature of this experiment was that the speaker couldn’t see how the voting and “leader board” evolved during her presentation. I think that’s likely a good thing, as it can be very distracting. In our implementation I did not project the leader board, but it was on everyone’s phone.

I think I’d like to do a little more experimentation with this. I think it could help with student engagement and I think it could really make the Q&A sessions more worthwhile.

Your thoughts? Here are some starters for you:

  • This is cool! You could also think about doing . . .
  • This is dumb! Instead you should have  . . .
  • I thought you used to love Google Moderator, why didn’t you use that?
  • I think you didn’t get too many submitted questions because . . .
  • I think you didn’t get too many votes because . . .
  • I’m personally hurt by your examples of horror shows in Q&A sessions. I love all of those examples you describe!
  • Here’s another to add to your horror show list . . .


Posted in community, teaching | Leave a comment

Harmonic drums neural network

I’ve written before about my research group’s efforts at trying to find harmonic drums. One of those students wants to continue that work as a independent study so I’ve been putting some more thought into it. This post is about my fledgling efforts to use neural network technology to help us out.

Our continuing goal is to produce a 2D surface (drum) that has resonant frequencies that are harmonic. The parameter space to search is huge (infinite?), but ultimately we'd love to find a shape (that we could 3D print!) that would sound cool. If we could find it, we could print several different sizes to have a harmonic instrument.

For most instrument designs, if you name the lowest/dominant frequency that you want, you can usually pretty easily find the physical parameters you need to achieve that. Take a simple stringed instrument as an example. The three variables that matter are the length of the string, the tension in the string, and the linear mass density of the string:

f_\text{fundamental}=\frac{1}{2L}\sqrt{\frac{\text{tension}}{\text{linear mass density}}}

So it’s pretty easy to find a string that sounds right. After that the beauty of a string is that all the other resonances are simple multiples of that fundamental frequency, so all of them sound good (except the 7th harmonic, that sounds like crap – see the placement of pickups on electric guitars that try to kill that one).

The problem with drums is that most of the time the resonances don’t have such an easy integer ratio relationship. That’s why drums aren’t usually considered harmonic instruments.

So, in our case we’re hunting for a shape that has some interesting resonances. This post is trying to get some help from you fine folks on how to use a neural network to do that.

Here’s what we’d love: name a set of frequencies we’d like a drum to have and determine the shape that would do it. We’ve tried some other approaches, but here I’m trying to get some help on how to design a neural network to do it. Here’s our set-up (nearly all points welcome your challenges!):

  1. We are somewhat convinced that the resonances for a polygon-shaped drum are pretty close to the resonances of a smoothed-out shape that would hit the same points as the polygon (this lets us quickly generate the frequencies for a given drum – i.e. the opposite of what we're looking for).
  2. We make a training set by setting the order of the polygon (n) and then repeating (there's a code sketch of this whole pipeline right after the list):
    1. Generate n random points in the plane
    2. Find the shortest tour visiting them, to give us a region that doesn't cross itself
    3. Make the region (in Mathematica: BoundaryMeshRegion[pts[[FindShortestTour[pts][[2]]]], Line[Append[Range[n], 1]]])
    4. Find the lowest 3 eigenfrequencies (in Mathematica: NDEigenvalues . . .)
    5. Have the new trainer be {f1, f2, f3} -> coordinates of polygon (note I say more about this point below)
  3. We make a neural network that takes 3 inputs and has as many outputs as there are numbers describing the polygon.
    1. Mathematica allows us to quickly set up such a network and to go crazy with the number of nodes in each layer and how many layers. Here’s the syntax for a single hidden layer with 10 nodes, Sigmoid-based NN:
      1. NetTrain[NetChain[{10, LogisticSigmoid, 7}], trainingset]
      2. NetTrain looks at the training set to get the input layer size, but you have to put in the output size. The 7 there is for a pentagon shape (see below).
    2. We’ve tried 7 hidden layers with 100 nodes each along with all kinds of different shapes and sizes. We’d love some ideas here
      1. Beyond a sigmoid nonlinearlity Mathematica lets you do all kinds of things like hyperbolic tangent and ramp

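Here's a minimal sketch of that whole pipeline for pentagons, with the raw polygon coordinates as the output just to keep it short (so 10 outputs instead of the 7 lengths-and-angles described below); makeTrainer and trainingset are just illustrative names, not our actual code:

n = 5;
makeTrainer[] := Module[{pts, tour, region, freqs},
  pts = RandomReal[{-1, 1}, {n, 2}];
  tour = FindShortestTour[pts][[2]]; (* ordering that avoids self-crossing *)
  region = BoundaryMeshRegion[pts[[tour]], Line[Append[Range[n], 1]]];
  freqs = Sqrt[NDEigenvalues[{-Laplacian[u[x, y], {x, y}],
    DirichletCondition[u[x, y] == 0, True]}, u, {x, y} \[Element] region, 3]]; (* frequencies go like Sqrt of the eigenvalues *)
  freqs -> Flatten[pts[[tour]]]];
trainingset = Table[makeTrainer[], {10000}];
net = NetTrain[NetChain[{10, LogisticSigmoid, 2 n}], trainingset];
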
For n=5 (pentagons) I originally thought to try 5 ordered pairs for the coordinates of the pentagon. I realized, though, that there’s lots of redundancy built into that. For example, rotating a region or translating it doesn’t change the resonant frequencies. So instead, for the moment, I’m trying 4 lengths and 3 turning angles (because assuming the 5th link goes back to the first point – which I set at the origin – is enough) or 7 pieces of information. For triangles I use two lengths and one angle, which is also enough. I figure that savings should be useful.

Unfortunately, even after training for tens of thousands of rounds with training sets containing tens of thousands of trainers, we're not making much progress. Hence this post.

So, can you help? We’d love some challenges to our assumptions/approaches listed above. We’d also love to hear some good ideas for neural network structures to try. Luckily doing it in Mathematica is pretty easy, but if you’ve got a system you’d like to try we’re happy to provide the training set.

Some starters for you:

  • This is cool! I think point x.y above can be improved and here’s how . . .
  • This is really dumb. It’s obvious from point x.y above that you guys don’t know what you’re doing. What you should do is . . .
  • You didn’t italicize Mathematica at all in this post so I stopped reading.
  • I thought you said you were trying to do as much as you can using python these days. What gives?
  • What makes you think a neural network can actually solve this problem?
  • I don’t understand point x.y above. Please explain it better so I can get some sleep.
  • My band’s name is “7th harmonic” and we’re suing you because you said we sound like crap


Posted in mathematica, programming, research | 2 Comments

Propellers with rolling shutter

I really loved Smarter Every Day’s cool video about propellers shot by digital video cameras:

I especially like how he stuck with it over several years! I liked his explanation of why the propellers take on such weird shapes, but I didn't think much about the mathematical structure of them.

But then I saw this page and got really interested. Ok, I admitted to the world that I was stumped.

So then I decided to dig in to figure out why the simple Mathematica command:

ContourPlot[Sqrt[x^2+y^2]==Cos[5 ArcTan[x,y]+17y], {x,-1,1},{y,-1,1}]

gives the correct form for a simple mathematically-based propeller. At first I thought that maybe it was just similar enough to the image the original poster wanted, but then I made this gif and realized that it was dead on:

(Click through and you’ll see a bunch of other examples that I slapped together)

So why does that simple statement (ContourPlot) do the trick? Well, what do we need to figure out? We need to find the locations on the plane where the black rolling shutter line intersects with the blue propeller function. So let’s see if we can express that mathematically:

y_\text{shutter}(t)=vt-a

where v is the vertical speed of the shutter and a is the maximum radial extent of the propeller.

r_\text{propeller}(t)=f(\theta-\omega t)

where \omega is the angular rotation speed of the propeller. This gives you the distance from the origin to the edge of the propeller for a given angle, \theta. Expressed as a function of x and y you just need \theta=\tan^{-1}(y/x) or better yet Mathematica’s ArcTan[x,y] function that can work on the whole plane.

So what we’re looking for are locations on the plane whose distance to the origin matches the propeller’s radial extent at that angle when the rolling shutter is there, or:

\sqrt{x^2+y^2}=f\left(\tan^{-1}(y/x)-\omega \frac{y+a}{v}\right)

where I’ve solved the equation for the y-position of the shutter for t.

Aha! So we just need to find points on the plane where that equality holds. But that’s what ContourPlot is really good at doing! Really all it does is make a big grid on the plane, check all the points, and if it finds points where that equality is close it zooms in and makes a smaller grid until it finds points that are close enough. That process repeats MaxRecursion number of times (I think the default is 2). The suggestion on the StackExchange post is to set PlotPoints->100 so that the initial grid is fine enough. If I do that but set MaxRecursion to zero it looks pretty jaggedy (not sure that’s a word).

Yesterday when I was futzing around it took forever to make the movies. That's because at every time step I was redoing that ContourPlot command, but with a plot range only below the rolling shutter. It's the ContourPlot that takes forever, so today I found a better way. Now I run the ContourPlot command just once for the whole plane, extract the points it finds, and then use a Graphics command to plot only the points below the rolling shutter for each frame of the movie. The whole process (including exporting the GIF) takes about 30 seconds now, compared to 5-10 minutes yesterday.
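
Roughly, the new approach looks like this (a quick sketch, not my exact code, with illustrative names):

cp = ContourPlot[Sqrt[x^2 + y^2] == Cos[5 ArcTan[x, y] + 17 y], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 100];
pts = Partition[Flatten[Cases[Normal[cp], Line[p_] :> p, Infinity]], 2]; (* every point ContourPlot found *)
frame[ys_] := Graphics[{Point[Select[pts, #[[2]] <= ys &]], (* only the points below the shutter *)
  Line[{{-1, ys}, {1, ys}}]}, PlotRange -> {{-1, 1}, {-1, 1}}]; (* plus the shutter line itself *)
Export["propeller.gif", Table[frame[ys], {ys, -1, 1, 0.05}]];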

What’s fun is that you can set your propeller function to be anything. Here’s a couple examples:

r=0.75+0.25 \sin(10\theta)

[animation: 10blades]

r=0.75+0.25\sin(10\theta-0.1)+0.1\sin(4\theta)

[animation: 10blades2]

So, I’m glad I put a little more time into this, it’s certainly been both fun and entertaining. I hope you’ve enjoyed it too.

Your thoughts? Here’s some starters for you:

  • This is cool, do you mind sharing your code? (as usual it’s incredibly sloppy with almost no comments)
  • None of these look like real life propellers, this sucks.
  • Why didn’t you do this in python? (seriously, does the contour plot package in plotly or mathplot lib work for this type of problem?)
  • Can you try this function for a propeller: _____
  • What happens if the shutter comes in from a different angle?
  • If Smarter Every Day already explained all this, why did you bother at all?
  • Instead of just giving the outline of the blades, can you fill it in?
  • Why did you write “seriously, . . .” in that fake comment above? Aren’t these starters supposed to be strictly for us readers to use?
Posted in fun, math, mathematica, physics, Uncategorized | Leave a comment

Google Apps Script full stack?

A few weeks ago I brushed off my google apps scripting rust and managed to learn how to do a mass ownership change for a colleague who was leaving the institution. It always takes me a while to remember how everything works, but I got it done (note: it's doable, but the script has to be run by the original owner, and the new person has to be in the same institution. Also, stupidly in my opinion, google gives the new person ownership of the entire directory structure, but it also puts the top-level label on every file and folder, so it seems like all that stuff is at the top level — hence a second script I wrote for the new owner to run that gets rid of those labels).
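
The heart of the ownership-change script is just a recursive walk of the folder tree. Here's a sketch (with illustrative names, run as the original owner), not the exact script I ran:

function transferOwnership(folder, newOwnerEmail) {
  folder.setOwner(newOwnerEmail); // hand over the folder itself
  var files = folder.getFiles();
  while (files.hasNext()) files.next().setOwner(newOwnerEmail); // then every file in it
  var subfolders = folder.getFolders();
  while (subfolders.hasNext()) transferOwnership(subfolders.next(), newOwnerEmail); // then recurse
}
// e.g. transferOwnership(DriveApp.getFolderById("..."), "colleague@school.edu");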

Then my brother was interested in a web page that would work on his phone and help him check in his bike shop customers. So I dug a little deeper into google apps script web apps. That's what got this current fire really going (note: I made a page that's driven by a simple spreadsheet with items and price estimates in one sheet and work-order quotes in another. He calls up the page and sees checkboxes for every item in the first sheet, checks whatever makes sense, and hits submit. He's then shown a cost estimate where he can add notes (like customer name, etc) and hit submit again to save it in the second sheet).
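
If you haven't played with GAS web apps, the skeleton is tiny. Here's roughly the shape of it (made-up names, not my brother's actual app):

// Code.gs: serve the page and save a quote to the second sheet
function doGet() {
  return HtmlService.createHtmlOutputFromFile('index');
}
function saveQuote(items, notes) {
  SpreadsheetApp.getActiveSpreadsheet()
    .getSheetByName('quotes')
    .appendRow([new Date(), notes, items.join(', ')]);
}
// index.html calls back to the server with google.script.run.saveQuote(checkedItems, notes);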

Ok, so here’s what I’m working on and wondering about: Could I use google apps script web apps on some some small-scale full stack problems I’ve been working on? I do a lot of PHP/Laravel/MYSQL/LAMP/Javascript/CSS full stack programming, but it’s often overkill for a simple thing (like my brother’s problem). I do Laravel instead of python/Django, Ruby on Rails, or Meteor mostly because it’s easiest to get the sys admins at my institution to support PHP. Whatever, they all have basically the same functionality (and the same rabid fan bases). So I know how to do fully-functional database-driven web sites. That’s not my problem. Instead I’m interested in GAS web apps because they offer an intriguing list of opportunities:

  • No server to set up. It’s just google
    • baked in reliability etc
  • Super easy authentication/authorization. It's already built into the google ecosystem
  • The data layer looks and acts like a spreadsheet
    • Note that google sheets are promoted as spreadsheets but they’re really quite powerful due to “query” (see below) and the interconnectedness with all the other google stuff
    • End users are way more willing to engage with a data layer that looks like a spreadsheet than a mysql database. Take my brother, for example. I didn’t have to make a front-end script to allow him to change his price list. He’s perfectly happy to do that right in the spreadsheet
  • Emailing is easy.
    • In Laravel, for example, you have to set up the right package, turn on SMTP stuff, and make sure you’re not pissing off your sys admins
  • Single page apps
    • I’m not actually sold on this, but I notice it in the PR sites I’ve been perusing. Basically you can just load one site and then interact with the server to change portions of the page. I did a ton of this with my “myTurnNow” app that lets up to 100 people engage with each other without having to raise their hands. But it sure it easier to use old fashioned “submit” buttons with multiple pages (yes, Meteor users, I know, I know, . . . shut up!)

So I decided to write this blog post not so much as a “how you do it” as “should I do it?” Most of those points above are interesting, but maybe I shouldn’t be so afraid to just fire up a fresh Laravel app and do even little stuff.

Here’s some downsides:

  • It’s kind of slow. You are having the script access a google drive doc and do stuff. That access is what’s seemingly pretty slow. If you just do non-data-layer stuff it’s pretty quick, but it’s noticeable so I thought I’d mention it.
  • Really playing with the data almost always requires running google sheets formulas. You don't have to do this. Most of the web sites suggest just sucking all the data in and dealing with it in javascript. I think that's fine unless you think the data's going to scale a little. If you google "use google apps script to run sheets formulas" you'll see a few "impossibles" in your results, but don't despair! You can do a very dumb-sounding thing:
    • Create a new sheet programmatically
    • Set the top left cell of the new sheet to something like =query(mycoolsheet!A:E, "select max(A), B group by B")
    • Read any data on that sheet into javascript/GAS
    • Delete the sheet
  • Ok, yes, I know, that seems really dumb. But I've done it a bunch now and it seems to work (there's a sketch of it right after this list). It gives you access to the fantastically useful "query" formula and it dramatically reduces the amount of data you'd suck into javascript. Also, you don't have to basically rewrite your favorite spreadsheet formulas in javascript.
  • Weird urls: these are crazy looking but who cares (tinyurl exists, after all)
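
Here's that dumb-sounding trick as a sketch (runQuery and the scratch-sheet name are illustrative):

function runQuery(formula) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var tmp = ss.insertSheet('tmpQuery'); // 1. create a scratch sheet
  tmp.getRange('A1').setFormula(formula); // 2. drop the formula in the top left cell
  SpreadsheetApp.flush(); // force recalculation
  var values = tmp.getDataRange().getValues(); // 3. read whatever it produced
  ss.deleteSheet(tmp); // 4. delete the sheet
  return values;
}
// e.g. runQuery('=query(mycoolsheet!A:E, "select max(A), B group by B")');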

My current project: I need to write a bunch of reviews for a bunch of folks (I am in the dean’s office these days, after all). I want to be able to access both the formal stuff I’ve written and any notes that I have for this year and all years for every person I’m reviewing. I could do this in a heartbeat (ok, a day) in Laravel, but then I’m the forever owner, even when I’m out of the dean’s office. Doing this in GAS seemed like a fun project and, if it’s successful, I can just transfer ownership to someone else.

I’ve got it working, after lots of fits and starts, and now I’m writing to you, dear reader, to find out if it’s worth exploring more and putting this tool in my tool chest.

So what do you think? Here’s some starters for you:

  • Laravel sucks. If you’re not doing Ruby you’re just dumb
  • This is really interesting. What’s the learning curve like?
  • Laravel sucks. If you’re not doing Django you’re just dumb.
  • I really like the ______ aspect of this. Do you think that you could also _______?
  • Laravel sucks. If you’re not using Meteor you’re just dumb.
  • Tell me more about myTurnNow, that sounds really useful
  • Laravel sucks. If you’re not using carrier pigeons you’re just dumb.
  • I’ve used GAS and have come to the conclusion that . . .
  • You never explicitly said that GAS was google apps script so I stopped reading. You suck.
Posted in HUWebApps, programming, Uncategorized | 5 Comments

Helium whistling

Earlier today my son asked me a question I didn’t know the answer to. So I took to twitter:

A few people chimed in on twitter, but mostly they suggested I try the experiment. It being that son's birthday, we happened to have some helium around. So we tried it. But before we did (and before I let you see the video . . . oh, you went down and watched the video and are now back up to this paragraph, I see how it goes) we thought about how we'd know if my whistle sounded different. So we tested each other to see if we could match pitches. We also used the piano to give random pitches. By the way, I've talked before about my ability to match a pitch while whistling.

So, here’s the experiment:

My take, especially at the end, is that it did affect my whistle.

So here’s my theory: The helium changes the speed of sound in my mouth cavity which raises the resonant frequency of the effective Helmholtz resonator. Really that’s the same thing that happens when you talk with helium. There the helium doesn’t affect the vibrations of your vocal folds/chords since that’s decided by the tension in them which is decided by you. However, those vibrations have lots of resonances and the helium in your oral chamber raises the resonances of the chamber so higher resonances of the vocal folds/chords are amplified making you sound higher. Since the resonances of the vocal folds/chords are roughly harmonic, you typically sound an octave higher.

When you whistle you aren’t choosing the tension of your vocal folds/chords. You are shaping your lips/mouth cavity from memory to be a shape that causes a Helmholtz resonance with a particular frequency. When you have helium in your mouth, the resonance goes up and so does the frequency you hear.
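
Since a Helmholtz resonance scales linearly with the sound speed, the naive upper limit (pure helium in the mouth, roughly 1000 m/s versus about 343 m/s in air) would be

\frac{f_\text{helium}}{f_\text{air}}=\frac{v_\text{helium}}{v_\text{air}}\approx\frac{1000\text{ m/s}}{343\text{ m/s}}\approx 2.9

though with a helium/air mixture in there the actual shift should be a lot smaller.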

So what do you think? Here’s some starters for you:

  • Why do you keep saying vocal folds/chords?
  • What about the other kind of whistle where some people use their fingers?
  • How do we know you were trying just as hard to match the pitch when you claimed that it wasn’t right?
  • I don’t think you understand how sound is generated at all in both whistles and talking. Here’s a tutorial for you.
  • So you decided to pop your son’s balloon? On his birthday? What kind of monster are you?
  • Why are you too lazy to see if the heard frequency compared to the expected frequency matches the ratio of sound speeds in helium and air?
  • You wrote “raises you whistle” instead of “raises your whistle” in your tweet. How do you ever expect people to take you seriously?
  • You posted this before the youtube video was ready. What, you think I have the time to come back when it’s ready? Thanks for nothing, loser.
Posted in fun, physics, teaching, twitter | 3 Comments

Blooming energy

Today I saw that the tree outside my office window had blossomed over the weekend:

[photo: tree]

So I got to wondering how much energy it had used to do that. I asked the question on twitter

and got a lot of interesting answers involving mass/energy, carbon bond energy, etc.

I’m not sure what the answer is, so I thought I’d post about it to see what other answers I could get. I’m really interested in what happened over the weekend, and I have a hazy memory of seeing tiny green buds before the weekend. So really I figured the buds all existed and just had to be forced out. I figured something had to move a few centimeters with, say, a kg-weight worth of force. That’s just me guessing though.

Of course with that approach (and really all the approaches people were using on twitter), you have to have an estimate of how many blooms there are. So I decided to see if Mathematica could help me with that.

I imported the image and used the image mouse-over tools to find the color of the blooms. Then I set all pixels that weren't within a ColorDistance of 0.2 (found empirically) of that color to black. Then I did a DistanceTransform to replace each pixel value with its distance to the nearest black pixel, and used MaxDetect on that to find the centers of all the blooms. Finally I used MorphologicalComponents and ComponentMeasurements to count them (just under 3000) and HighlightImage to make this image:

[image: treehighlight]
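
In code the pipeline is roughly this (a sketch; the filename and bloom color are stand-ins for the real ones):

img = Import["tree.jpg"]; (* stand-in filename *)
bloom = RGBColor[0.85, 0.55, 0.65]; (* stand-in for the mouse-over color *)
mask = Binarize[ColorDistance[img, bloom], {0, 0.2}]; (* white where close to bloom color *)
centers = MaxDetect[DistanceTransform[mask]]; (* one blob per bloom center *)
comps = MorphologicalComponents[centers];
Length[ComponentMeasurements[comps, "Centroid"]] (* the bloom count *)
HighlightImage[img, centers]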

I think it did a pretty good job of finding all the blooms that way.

So, with roughly 3000 blooms in a period of 3 days, how much energy did it take? I’m not sure but I hope you can help.
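
For what it's worth, following my crude guess through: a kilogram-weight (about 10 N) moving 3 cm per bloom gives

3000\times(10\text{ N})(0.03\text{ m})\approx 900\text{ J}

for the whole tree, but that number is only as good as the guess it's built on.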

Here are some starters for you:

  • What’s wrong with a mass/energy approach?
  • Just cut some off, burn them, and see how much you raise the temperature of water. Then multiply!
  • Why didn’t you use ImageJ for this? (here’s the link to the original image if you want to try)
  • The way you calculated the number of blooms is dumb, a much better way is . . .
  • Why do you care about the energy, I thought you said energy doesn’t exist?
  • Why can’t you just enjoy the view instead of ruining it with all this science?
  • On twitter you said that the tree sprouted. What are you, an idiot?
  • If you’re asking about energy, why does the three days part matter?
  • You didn’t say anything about seeing some green before the weekend on Twitter, thanks for having me waste my day.
Posted in fun, mathematica, physics, technology | 3 Comments