## No connections

Driving home today I heard a great story on NPR. I liked it so much that I thought I’d put it here to remind myself about it. I might have forgotten some of the details, but I think I still remember the gist.

### Teachers without their PLCs

Principals have been realizing lately how hard it is to get their teachers to do good work. Too many of them have been spending time talking to each other to find better ways to teach. That takes away from the time that they could be interacting with their students. Now a few schools around the country have started to use a new approach: locking each teacher away from all others.

When the teachers arrive in the morning, they can be seen smiling and chatting with their friends in the parking lot. But at the school door they're met with a phalanx of administrators who put each teacher in a bag and cart them off to their classrooms, where they're locked in. At the end of the day they're brought back out, and only then can they interact with their co-workers.

“It really sucks because it’s usually so great to get good ideas from people in similar situations that I can incorporate into my teaching,” said one teacher, clearly pining for her friends. Another added “I can’t believe that they’re painting us all with the same brush, assuming that all we want to do is talk with our friends instead of teaching. That’s not fair. I love to teach!”

But the principals have come to understand that any sort of access to professional development that might bring in new ideas can only take away from the tried and true approaches to working with students. While their teachers are complaining, the principals are sure this is the right approach.

Some brave teachers have pointed out that the lack of access to their professional coworkers has changed their behavior. Now if they can’t figure out how to work with a student, they just keep trying other things that they come up with off the top of their head. “Before I would find a friend at lunch to brainstorm ways to help, but now I have way more time to really focus on the problem.”

The bag and lock technology is being pioneered by BeAlone, whose founder realized just how bad things were when he asked how his daughter was doing in school and only got a response after the teacher checked in with all his daughter’s teachers. “I couldn’t believe how long that took! I just wanted to know if she was getting an A+, not whether she was developing lifelong learning strategies.” The company’s clients also include comedians and musicians who like to make sure their audience is only listening to them. The cost is $200 per bag, or schools can rent them for $30 per teacher for the school year.

Your thoughts? Here are some starters for you:

• That’s weird, I listened to NPR on the way home today and I didn’t hear that story.
• Wait, I feel like you’re being sarcastic, but I can’t put my finger on it.
• You forgot about my favorite part: …
• This is dumb. Principals who do this just don’t realize how creative teachers can be when they can work together.
• This is great. I think we should seek to have this in all schools!
Posted in fun, teaching, technology | 1 Comment

## Mass changing orbits

A few weeks ago my good friend John Burk posted some intriguing questions about what happens to planetary orbits as the sun loses mass (all that heat has to come from somewhere!). I’ve been thinking about it ever since and finally got around to doing some modeling to see if I could answer any of the questions.

My first thought was to see if there were ways to simplify the differential equation solving approach. What John was doing was the full 3D version assuming the sun’s mass stays significantly above the mass of the planet. So he’s already doing some simplifications because he’s not bothering with the center of mass frame. I think he’s right to do that because after a page or so of notes, I’ve realized that the center of mass frame gets pretty ugly when one of the participants is losing mass.

So what else can be simplified? The great thing about central force problems is that you can reduce them all the way down from six variables to one (in fact that’s one of my standards when I teach Theoretical Mechanics):

1. Each of the two masses has 3 variables (x, y, and z) so you start with six.
2. The center of mass approach lets you model the problem as a fictitious mass ($\mu = m_1 m_2/(m_1+m_2)$) that’s the same distance from a fictitious force center as the two actual masses are from each other. Now you’re down to three.
3. If it’s a central force, the angular momentum is conserved. That means the fictitious particle has to stay in a plane. Now you’re down to two variables.
4. If the angular momentum is conserved, you can treat the rotational part of the kinetic energy as an effective potential energy, leaving only the radius variable. Now you’re down to one.

You can model the complex 6-dimensional problem as a single (reduced) mass experiencing a potential energy function given by:

$U_\text{effective}=U_\text{actual}+\frac{l^2}{2 \mu r^2}$

where $\mu$ is the reduced mass and $U_\text{actual}$ is the actual potential energy, which is only a function of r. To get the force this fictitious one-dimensional particle feels, you just need to take a derivative (and add a negative sign).
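To make that concrete, here's a tiny Python sketch (toy units and all parameter values are mine for illustration, not from my actual notes) that builds the effective potential for gravity and checks that the force $-dU/dr$ vanishes at the circular-orbit radius:

```python
import numpy as np

# Toy units (assumed, not from the post): G*(m1+m2) = 1, reduced mass = 1
GM = 1.0  # gravitational parameter
mu = 1.0  # reduced mass
l = 1.0   # conserved angular momentum

def U_eff(r):
    # U_actual for gravity is -GM*mu/r; add the centrifugal term l^2/(2 mu r^2)
    return -GM * mu / r + l**2 / (2 * mu * r**2)

def F_eff(r, dr=1e-6):
    # force on the fictitious 1D particle: F = -dU/dr (numerical derivative)
    return -(U_eff(r + dr) - U_eff(r - dr)) / (2 * dr)

# A circular orbit sits at the minimum of U_eff, where the force is zero.
# Analytically that's r = l^2/(GM mu^2), which is 1 in these units.
r_circ = l**2 / (GM * mu**2)
print(r_circ, F_eff(r_circ))  # the force comes out essentially zero here
```

A circular orbit is just the fictitious particle sitting at the bottom of that effective potential well; slowly shrinking GM slowly moves the well's minimum outward, which is the intuition behind the results below.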

So I gave it a try. The first thing I did was try to see how far into the future I could integrate using Mathematica. It turns out I could go quite a ways! Here’s a plot of the radius as a function of time.

As you can see, I was able to go out several billion seconds of integration time. This turned out to be around a billion “years” given the simple parameters I chose. If the radius grows, we expect the years to take longer. Here’s a plot of the instantaneous “year time” over the simulation:

as expected!

So it seems that a very slow loss of the sun’s mass would just slowly increase both the circle radius and the year time of the planet.

Another question that John asked was whether there might be an analytical solution to this. I quickly tried DSolve instead of NDSolve in Mathematica and got no joy (I wasn’t overly hopeful). I did ask a good friend of mine who’s a real expert in differential equations whether he knew of a particular decay function I could use that might have an analytical solution. He couldn’t think of one, but did point out that if you had the mass changes be discrete you could “easily” build up the solution since in between the mass changes you’ve got a simple inverse square law orbit that does have an analytical solution.

What he means is that if you start with, say, a circular orbit, you can predict exactly where the planet will be and what direction it’s traveling (and what speed) when the first mass change happens. When it does, you now have initial conditions for a slightly different inverse square law problem. Because the sun has lost mass, it doesn’t pull as hard as before so the circular orbit becomes an ellipse. That ellipse is fully analytical and you can figure out everything you need to know about the planet at the next mass drop. Repeat this to your heart’s content and you’ve got a piecewise analytical solution.

This sounded intriguing, but I started to wonder what the physical differences would be between the two approaches. I figured I could check on a relatively small time scale and look for differences. So I coded up both a continuous and a discrete mass loss model, where they connect with each other after each mass loss. Here’s a plot of both mass loss functions:

Here’s the animated result (the blue dots mark the mass-change points for the discrete model):

As you can see, there’s a pretty noticeable difference in the orbits. Admittedly this is only because the mass jumps are pretty big, but it still makes me nervous.

Here’s a plot of the radius function for both:

Note how the continuous (orange) one seems to never get any closer to the sun, while the discrete (blue) one clearly shows more elliptical motion (it gets closer to and farther from the sun during every orbit). To see that more clearly, here’s a plot of r'[t], the rate of change of the radius:

It sure looks like the orange line doesn’t go negative (indicating the planet never gets closer to the sun). Here’s a zoom-in of the orange one to see it better:

Yep, never negative!

Your thoughts? Here are some starters for you:

• This is cool, but I’d like to hear more about . . .
• This is dumb. Why didn’t you talk about this instead:  . . .
• It looks like you were doing a Hamiltonian approach in your notes. I thought you hated the Hamiltonian approach!
• You do know that the perturbations due to Jupiter alone would totally wash out these small effects, right?
• As soon as I saw that you don’t bother to label your axes, I stopped reading. Thanks for saving me some time.
• I thought you said you were going to try to do everything in python from now on? Liar!
• Why didn’t you set up your constants so that a year takes a year? Seems obvious to me.
• Can you share your code?
• How well do your students do on the 6->3->2->1 standard?
• In the piecewise analytical approach, could you look at the solution when the gap time between mass changes goes to the limit of zero?
Posted in general physics, mathematica, physics | 2 Comments

## What are integers

This morning over the breakfast table my family had a great conversation about integers. It started when my youngest, L (5th grade), talked about his math test tomorrow. He said the whole chapter was easy and that he wasn’t worried about it. I asked what kind of questions would be on the test, and he said that it would be things like “identify the integer in the following statement: it is -20 degrees C outside.” I’m sure the test will have more than that on it, by the way, but that launched us into some fun conversation about integers.

I asked him if he thought there was an infinite number of cells in the human body. That launched us into talking about all of these:

• Air molecules on earth
• houses
• homes
• books
• gallons of milk
• hairs on your head
• cups in the world
• heaps of sand

Some were easy: houses, gallons of milk, cups. Some got us really talking, especially “books” as we started to interpret those as fiction books.

Here are some of the thoughts that occurred to us as we argued around the table:

• If you don’t know where the end is, you can’t say you’re halfway done.
• Once it’s done, there’s a halfway point if you count pages or words, but half a story or half a plot is harder.
• We talked a lot about how the Harry Potter books cram a lot into the last 100 pages or so, for example
• If you have a heap of sand and take a grain out, it’s still a heap. If you repeat, at some point it’s no longer a heap, but it’s never a fractional heap.
• So maybe integers are used for things that can’t be split up? If you can split them up, you should use reals or decimals or rationals or something.
• My partner is a writer and she talks about how many of her writing friends are heavy outliners. They know where the half-way point of their story is.
• Houses are measured with real numbers, but homes are like heaps: they’re a home until they’re not. Half a home doesn’t make sense.
• Human cells are interesting. They “divide” to reproduce, but my argument was that right up until it actually splits, it’s one cell, and once it splits, it’s two.

I’ve been thinking about this all day. I’m coming around to the notion that we often say something is integral (or is counted by integers) when really we should use real numbers and admit that it just works out that they’re often things like 2.00000 . . . etc (like houses, or gallons of milk, or cups, but not homes, books (maybe?), and air molecules). I think we use rational things (fractions) when maybe we shouldn’t. Maybe when someone says they’re halfway done with a story they’re really saying they are still at zero stories but will soon be at 1 story. They might be measuring time, or words, or pages, but that’s a proxy, using things that can’t be measured with integers.

One interesting thing was the different approaches of my kids. L was interested but admitted he was confused at times (now we’re a little nervous about tomorrow’s test – I joked that I should send this post to his teacher). C (10th grade) really felt that if you couldn’t clearly see the end of something, figuring out fractions didn’t make sense. A half gallon makes sense because we know what a full gallon looks like, but a half story is tough to make sense of. B (12th grade) felt that you can convince yourself that you have less than 1 of lots of things (like books), but even if you can’t figure out what the fraction actually is, if there’s a way to think about it being less than one, you can’t say it’s described by integers. Mostly that argument was on the book side, not air molecules or hairs on your head.

Overall it was a fun conversation. I love seeing the #tmwyk hashtag on twitter (talk math with your kids) but it’s often hit or miss with my own kids. This was fun mostly (I think) because I was really trying to wrap my own brain around it, and not just trying to teach them something.

So what do you think? Here are some starters for you:

• This is great. I think another great thing to talk about to see if it’s integral is . . .
• Why don’t you use the word “quantized” for this? What, are you scared of physics or something?
• This is dumb, everything is countable and split-able. I can’t believe I even read half of this post.
• #tmwyk can work great even if you’re “just” teaching them something, here’s 7.5 examples . . .
• What did you have for breakfast?
• I’m a fiction author and I’m really bothered by what you say. I often take 3/4 of one book and put it together with 1/4 of another to get a new book I can publish.
Posted in math, parenting | 4 Comments

## Snow wave

Earlier today I posted this pic and asked a question about it on twitter:

If you click through you’ll see lots of great ideas. I’m not sure what the right answer is, so feel free to weigh in below in the comments.

What actually made me decide to blog about it was that I realized that I asked the wrong question. I really wanted to know what would cause the repetitive pattern, so I think really I was thinking about what would cause the frequency of the wave.

Now, I think everyone who replied on twitter recognized one of the fundamental relationships about waves when answering my question:

$\text{wavelength}=\frac{\text{speed}}{\text{frequency}}$

and really just jumped to physical descriptions of what might cause that frequency. In other words, they realized that the car was moving and basically leaving behind a trail of snow blasts at a particular frequency. Spatially, that all works together to leave a record with a measurable wavelength.
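Just to make the relationship concrete with made-up numbers (I have no idea what the actual car speed or blast frequency in the commercial was):

```python
# Hypothetical numbers, purely to exercise wavelength = speed / frequency
speed = 20.0      # car speed in m/s (assumed, not from the commercial)
frequency = 10.0  # snow blasts per second (assumed)

wavelength = speed / frequency
print(wavelength)  # 2.0 meters between blast marks left on the road
```

The point is that the frequency lives in time (how often the snow gets blasted) but the record in the snow lives in space, and the car's speed is what converts one into the other.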

As I thought about both my question and the answers throughout the day, it hit me that it’s one of those things that might lose students, especially early on before they’ve really internalized the relationship above. If you ask students to engage with the image or even the Hyundai commercial it comes from, they’ll engage and come up with all kinds of interesting questions, it seems to me. But if you ask about the wavelength like I did, it might shut them down, because then they’re not going with their gut and instead are trying to remember the relationship between wavelength and frequency (or possibly period).

I guess what I’m saying is that I knew my audience and I figured I could ask the question any way I wanted to. And it worked! But as I think about using this in class, I think I would have to be more careful. I think that’s a cautionary tale for me. It reminds me of times I’ll ask about something I think they’ll have experience with, or maybe some cool insights about, but I’ll ask it using vocabulary that’s still too new for them. I think instead I should just show them something and ask “what do you see?” or “what do you think is going on here?” or “Is there anything interesting going on?”

Your thoughts? Here are some starters for you:

• This is interesting. It reminds me of . . .
• This is really dumb. What you should have asked instead was . . .
• This is really cool. I think I’m going to buy a Hyundai now.
• This is really a waste of my time. I already have a car.
• Why didn’t you post a link to the video instead of a crappy screen grab you clearly took while pausing the tv during a really exciting Manchester Derby?
• Here’s a better question to ask students about this pic . . .
• I was the driver in this commercial and here’s what actually caused that . . .
• I was the camera person in this commercial and here’s why the driver really doesn’t understand physics.
• Here’s my crazy explanation for that snow pattern.
• It’s not a wave, you should stop saying that.

Posted in physics, teaching, Uncategorized | 7 Comments

## Crowd-prioritized questions for speakers

This past week I tried an experiment during a major speaking engagement on my campus. This was our annual “Commitment to Community” address by the fabulous Kemba Smith. We had her on campus for a day and she interacted with our students in lots of ways, culminating in a major presentation to the campus in the evening.

In my role as director of the first year seminar I was involved in some of the planning (I need to be clear here and heap praise on the C2C team – they did all the work and deserve all the credit for the great day). Specifically I was involved with planning how the overflow room should work. We hold the event in a neighboring church that can only seat something like 500. We like to have a satellite location that can simulcast the event. In early planning, I expressed how it would be interesting to do something different in that room. I thought it would be great to brainstorm activities people could do, while listening, that could raise the engagement of the audience. What we decided on was to crowd-source the prioritization of the questions we’d ask.

### What we planned

We thought it would be great to encourage the audience (only in the satellite room) to use internet-connected devices to submit and vote on potential questions for the speaker.

We picked the Q&A feature of Google Slides to do this. We made a simple one-page Google Slides document and turned on Q&A when the event started. We made sure the url was clearly displayed in the room.

I invited the first year seminar faculty to bring their classes, with a limit of 3 classes, and talked to a few other faculty about it as well.

We told people that our top three questions would be the first three asked in the church, since I’d promised to text them to a plant (from the C2C committee) sitting there.

### What happened

Only two faculty brought their first year seminars (the rest went to the church). When I asked people how they made that decision I heard lots of interesting things:

• “I really want my students to be there to hear Kemba”
• “I’m not sure my students will have the focus you’re looking for”
• “Sounds cool but I really want to be in the church”
• “I’d love to because I’m always squished in the church”
• “That’s an interesting experiment”

In addition a few other faculty and students came. All together we had over 80 people there.

We passed out cards explaining what we were doing, because I figured if anyone came late I wouldn’t be able to explain it myself. Many were there early and we verified the technology worked with everyone’s cell phone.

We only had a handful of submitted questions, the highest rated of which only got six votes.

I submitted the questions a little early (we had a 2-minute delay that I didn’t want to miss). The question ranking changed a little after I submitted them, but the top three remained the top three. In the church all our questions were asked, but not all at once at the beginning of the Q&A session.

### Analysis

I was a little disappointed at the lack of engagement with the technology, but quite happy with the respectful and attentive attitude in the room. I’ve spoken with some folks about why there weren’t more questions submitted, and a few suggested that a lot of Kemba’s presentation was personal narrative, and that’s sometimes hard to question.

I think our questions were good. They certainly weren’t the horror stories you sometimes see at Q&A sessions for big speakers. You know what I’m talking about:

• “Thank you for your talk. I agree that _____ and let me tell you my whole life story before getting to my actual question.”
• “I came in late, could you please say everything you said at the beginning again?”
• “I have told you before that I disagree with you about point ____ and I’m going to walk you through every conversation we’ve ever had right now.”
• “Do you know ____ who says the same stuff as you but better?”

I had a question voted down. What’s fascinating about that is my emotional reaction. I would have thought I’d be disappointed about that. But it was interesting that I was relieved! I realized that I might have asked it if there were no crowd-sourcing and I might only hear after the event how dumb a question it was. In this case I don’t think people thought it was dumb, but they clearly thought other questions were more worth their time, and I think that’s great!

One interesting feature of this experiment was that the speaker couldn’t see how the voting and “leader board” evolved during her presentation. I think that’s likely a good thing, as it can be very distracting. In our implementation I did not project the leader board, but it was on everyone’s phone.

I think I’d like to do a little more experimentation with this. I think it could help with student engagement and I think it could really make the Q&A sessions more worthwhile.

Your thoughts? Here are some starters for you:

• This is cool! You could also think about doing . . .
• This is dumb! Instead you should have  . . .
• I thought you used to love Google Moderator, why didn’t you use that?
• I think you didn’t get too many submitted questions because . . .
• I think you didn’t get too many votes because . . .
• I’m personally hurt by your examples of horror shows in Q&A sessions. I love all of those examples you describe!
• Here’s another to add to your horror show list . . .

Posted in community, teaching | Leave a comment

## Harmonic drums neural network

I’ve written before about my research group’s efforts at trying to find harmonic drums. One of those students wants to continue that work as an independent study so I’ve been putting some more thought into it. This post is about my fledgling efforts to use neural network technology to help us out.

Our continuing goal is to produce a 2D surface (drum) that has resonant frequencies that are harmonic. The parameter space to search in is huge (infinite?) but ultimately we’d love to find a shape (that we could 3D print!) that would sound cool. If we could find it we could print several different sizes to have a harmonic instrument.

For most instrument designs, if you name the lowest/dominant frequency that you want, you can usually pretty easily find the physical parameters you need to achieve that. Take a simple stringed instrument as an example. The three variables that matter are the length of the string, the tension in the string, and the linear mass density of the string:

$f_\text{fundamental}=\frac{1}{2L}\sqrt{\frac{\text{tension}}{\text{linear mass density}}}$

So it’s pretty easy to find a string that sounds right. After that, the beauty of a string is that all the other resonances are simple multiples of that fundamental frequency, so all of them sound good (except the 7th harmonic, which sounds like crap – see the placement of pickups on electric guitars, which try to kill that one).
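For example, plugging some made-up guitar-ish numbers into that formula (none of these are measurements from a real instrument) in Python:

```python
import math

# Hypothetical string parameters (assumed for illustration only)
L = 0.65         # string length in meters
tension = 70.0   # tension in newtons
mu = 0.0006      # linear mass density in kg/m

f0 = (1 / (2 * L)) * math.sqrt(tension / mu)
# the resonances of a string are just integer multiples of f0,
# including the dreaded 7th harmonic
harmonics = [n * f0 for n in range(1, 8)]
print(f0)  # roughly 263 Hz, near middle C
```

Tweaking any one of the three variables to hit a target fundamental is a one-line solve, which is exactly why strings are so much easier to design than drums.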

The problem with drums is that most of the time the resonances don’t have such an easy integer ratio relationship. That’s why drums aren’t usually considered harmonic instruments.

So, in our case we’re hunting for a shape that has some interesting resonances. This post is trying to get some help from you fine folks on how to use a neural network to do that.

Here’s what we’d love: name a set of frequencies we’d like a drum to have and determine the shape that would do it. We’ve tried some other approaches, but here I’m trying to get some help on how to design a neural network to do it. Here’s our set-up (nearly all points welcome your challenges!):

1. We are somewhat convinced that the resonances for a polygon-shaped drum are pretty close to the resonances of a smoothed out shape that would hit the same points as the polygon (this allows speed on our end to generate the frequencies for a given drum – i.e., the opposite of what we’re looking for).
2. We make a training set by setting the order of the polygon (n) and then repeating:
1. generate n points in the plane
2. Find the shortest tour of visiting them to give us a region that doesn’t cross itself
3. Make the region (in Mathematica: BoundaryMeshRegion[pts, Line[FindShortestTour[pts][[2]]]])
4. Find the lowest 3 eigenfrequencies (in Mathematica: NDEigenvalues . . .)
5. Have the new trainer be {f1, f2, f3} -> coordinates of polygon (note I say more about this point below)
3. We make a neural network that takes 3 inputs and matches the number of coordinates for the polygon for the outputs.
1. Mathematica allows us to quickly set up such a network and to go crazy with the number of nodes in each layer and how many layers. Here’s the syntax for a single hidden layer with 10 nodes, Sigmoid-based NN:
1. NetTrain[NetChain[{10, LogisticSigmoid, 7}], trainingset]
2. NetTrain looks at the training set to get the input layer size, but you have to put in the output size. The 7 there is for a pentagon shape (see below).
2. We’ve tried 7 hidden layers with 100 nodes each, along with all kinds of different shapes and sizes. We’d love some ideas here.
1. Beyond a sigmoid nonlinearity, Mathematica lets you do all kinds of things like hyperbolic tangent and ramp

For n=5 (pentagons) I originally thought to try 5 ordered pairs for the coordinates of the pentagon. I realized, though, that there’s lots of redundancy built into that. For example, rotating a region or translating it doesn’t change the resonant frequencies. So instead, for the moment, I’m trying 4 lengths and 3 turning angles (because assuming the 5th link goes back to the first point – which I set at the origin – is enough) or 7 pieces of information. For triangles I use two lengths and one angle, which is also enough. I figure that savings should be useful.
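Here's a sketch of that redundancy-free parametrization, in Python rather than the Mathematica we actually use (the function name and the regular-pentagon example are mine for illustration): rebuild the 5 vertices from 4 edge lengths and 3 turning angles, with the first vertex pinned at the origin and the first edge along +x.

```python
import numpy as np

def pentagon_from_params(lengths, turns):
    """Rebuild 5 vertices from 4 edge lengths and 3 turning angles.

    Pinning the first vertex at the origin and the first edge along +x
    kills the translation and rotation redundancy; the fifth edge is just
    whatever closes the shape back to the start.
    """
    assert len(lengths) == 4 and len(turns) == 3
    pts = [np.zeros(2)]
    heading = 0.0
    for i, ell in enumerate(lengths):
        step = ell * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pts[-1] + step)
        if i < len(turns):
            heading += turns[i]  # turn before walking the next edge
    return np.array(pts)  # shape (5, 2); edge 5 runs pts[4] -> pts[0]

# Sanity check: equal sides and equal exterior turns of 2*pi/5 should give
# a regular pentagon, so the implied closing edge should also have length 1.
pts = pentagon_from_params([1, 1, 1, 1], [2 * np.pi / 5] * 3)
```

The network would then output these 7 numbers instead of 10 raw coordinates, so it never wastes capacity learning that rotated or translated copies of a drum sound the same.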

Unfortunately, even after training for tens of thousands of rounds with training sets containing tens of thousands of trainers, we’re not making much progress. Hence this post.

So, can you help? We’d love some challenges to our assumptions/approaches listed above. We’d also love to hear some good ideas for neural network structures to try. Luckily doing it in Mathematica is pretty easy, but if you’ve got a system you’d like to try we’re happy to provide the training set.

Some starters for you:

• This is cool! I think point x.y above can be improved and here’s how . . .
• This is really dumb. It’s obvious from point x.y above that you guys don’t know what you’re doing. What you should do is . . .
• You didn’t italicize Mathematica at all in this post so I stopped reading.
• I thought you said you were trying to do as much as you can using python these days. What gives?
• What makes you think a neural network can actually solve this problem?
• I don’t understand point x.y above. Please explain it better so I can get some sleep.
• My band’s name is “7th harmonic” and we’re suing you because you said we sound like crap

Posted in mathematica, programming, research | 2 Comments

## Propellers with rolling shutter

I really loved Smarter Every Day’s cool video about propellers shot by digital video cameras:

I especially like how he stuck with it over several years! I liked the explanation a lot about why the propellers take on such weird shapes, but I didn’t think much about the mathematical structure of them.

But then I saw this page and got really interested. Ok, I admitted to the world that I was stumped.

So then I decided to dig in to figure out why the simple Mathematica command:

ContourPlot[Sqrt[x^2+y^2]==Cos[5 ArcTan[x,y]+17y], {x,-1,1},{y,-1,1}]

gives the correct form for a simple mathematically-based propeller. At first I thought that maybe it was just similar enough to the image the original poster wanted but then I made this gif and realized that it was dead on:

(Click through and you’ll see a bunch of other examples that I slapped together)

So why does that simple statement (ContourPlot) do the trick? Well, what do we need to figure out? We need to find the locations on the plane where the black rolling shutter line intersects with the blue propeller function. So let’s see if we can express that mathematically:

$y_\text{shutter}(t)=vt-a$

where v is the vertical speed of the shutter and a is the maximum radial extent of the propeller.

$r_\text{propeller}(t)=f(\theta-\omega t)$

where $\omega$ is the angular rotation speed of the propeller. This gives you the distance from the origin to the edge of the propeller for a given angle, $\theta$. Expressed as a function of x and y you just need $\theta=\tan^{-1}(y/x)$ or better yet Mathematica’s ArcTan[x,y] function that can work on the whole plane.

So what we’re looking for are locations on the plane whose distance to the origin matches the propeller’s radial extent at that angle when the rolling shutter is there, or:

$\sqrt{x^2+y^2}=f\left(\tan^{-1}(y/x)-\omega \frac{y+a}{v}\right)$

where I’ve solved the equation for the y-position of the shutter for t.

Aha! So we just need to find points on the plane where that equality holds. But that’s what ContourPlot is really good at doing! Really all it does is make a big grid on the plane, check all the points, and if it finds points where that equality is close it zooms in and makes a smaller grid until it finds points that are close enough. That process repeats MaxRecursion number of times (I think the default is 2). The suggestion on the StackExchange post is to set PlotPoints->100 so that the initial grid is fine enough. If I do that but set MaxRecursion to zero it looks pretty jaggedy (not sure that’s a word).
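If you want to see that grid-checking idea stripped to its bones, here's a numpy-only sketch (grid size and tolerance are arbitrary choices of mine): sample the propeller equation on a grid and keep the near-zero points, which is essentially ContourPlot's first pass before any recursive refinement.

```python
import numpy as np

# A crude version of what ContourPlot does for the propeller equation
# sqrt(x^2 + y^2) == cos(5*atan2(y, x) + 17*y): sample a grid and keep
# the points where the two sides nearly agree.
n = 400
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)

# F = 0 exactly on the contour we want
F = np.sqrt(X**2 + Y**2) - np.cos(5 * np.arctan2(Y, X) + 17 * Y)

# keep grid points within a small tolerance of the zero contour
mask = np.abs(F) < 0.01
pts = np.column_stack([X[mask], Y[mask]])
print(len(pts))  # a cloud of points tracing the warped blades
```

ContourPlot then zooms in around cells like these and resamples, which is why PlotPoints (the initial grid) and MaxRecursion (how many zoom-ins) are the two knobs that control how smooth the blades look.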

Yesterday when I was futzing around it took forever to make the movies. That’s because at every time step I was redoing that ContourPlot command but with a plot range only below the rolling shutter. It’s the ContourPlot that takes forever so I found a better way today. Now I do the ContourPlot command just once for the whole plane. Then I extract from it the points it finds and then I just use a Graphics command to plot the points that are below the rolling shutter for the movies. The whole process (including exporting the GIF) is about 30 seconds now compared to 5-10 minutes yesterday.

What’s fun is that you can set your propeller function to be anything. Here’s a couple examples:

$r=0.75+0.25 \sin(10\theta)$

$r=0.75+0.25\sin(10\theta-0.1)+0.1\sin(4\theta)$

So, I’m glad I put a little more time into this, it’s certainly been both fun and entertaining. I hope you’ve enjoyed it too.

Your thoughts? Here’s some starters for you:

• This is cool, do you mind sharing your code? (as usual it’s incredibly sloppy with almost no comments)
• None of these look like real life propellers, this sucks.
• Why didn’t you do this in python? (seriously, does the contour plot package in plotly or matplotlib work for this type of problem?)
• Can you try this function for a propeller: _____
• What happens if the shutter comes in from a different angle?
• If Smarter Every Day already explained all this, why did you bother at all?
• Instead of just giving the outline of the blades, can you fill it in?
• Why did you write “seriously, . . .” in that fake comment above? Aren’t these starters supposed to be strictly for us readers to use?
Posted in fun, math, mathematica, physics, Uncategorized | Leave a comment