I’ve been playing with ImageFeatureTrack in Mathematica over the last few days. My interest is in helping me and my students track the beads on a swinging beaded chain (something we worked on quite a bit last summer). I just wanted to get a few of my initial results down here.

And here’s the movie it makes (Mathematica just makes a movie showing the slider going forward and back). I tried to export the whole thing as an AVI running at the same speed as the one above, but I ran into memory problems.

Here’s an animated gif of just the tracked points:

just the tracked points from the original video

And, here’s the traces of those particles (done with ListLinePlot[Transpose[track]]):

traces of all the tracked weights

The second command (the ImageFeatureTrack one) allows you to tell it what to track. However, if you don’t give it any coordinates (as I didn’t above), it finds features that it thinks are worth tracking. It takes a while (~2 minutes), but the results are pretty cool.
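For anyone who wants to try this, here’s a minimal sketch of the workflow described above. The filename and the seed coordinates are placeholders, not what I actually used:

```mathematica
(* Load the video as a list of frames; "beads.avi" is a placeholder filename *)
frames = Import["beads.avi", "ImageList"];

(* With no coordinates given, ImageFeatureTrack picks its own features to follow *)
track = ImageFeatureTrack[frames];

(* Or seed it with initial bead positions (hypothetical coordinates) so it
   tracks only those points: *)
(* track = ImageFeatureTrack[frames, {{120, 340}, {160, 338}, {200, 336}}]; *)

(* track is a list of point positions per frame; transposing it gives one
   trajectory per tracked point, which is what ListLinePlot wants *)
ListLinePlot[Transpose[track]]
```

Note that features the tracker loses along the way come back as Missing entries, so a long video may need some cleanup before plotting.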

Here’s one more example where I dangled the camera and shook it around:

I’m pretty impressed, though I’m not sure this does any better than the very cool (and free!) Tracker software. Maybe others can chime in about that. I know for sure my students really struggled to get Tracker to grab all our data last summer, but I’ll let them weigh in on whether they think this is better.

Some comment starters for you:

This is cool, would it work with … ?

This is dumb, I could have done it better with …

This is confusing, can you explain ___ better?

I worked in the lab last summer and this is great because …

I worked in the lab last summer. I have spent months trying to forget that horror. Thanks for dredging it all back up.

As a student who worked on this project last year, my question about ImageFeatureTrack is: how well does it work? Using Tracker to analyze a 40-second video would take a good portion of a morning and was quite user-intensive (and not too fun either). If this is faster, I would definitely prefer it over Tracker. It would also get rid of transferring the data from Tracker to Mathematica, which would be helpful.

Have you tried tracking a more complicated motion?

That’s the most complicated one I’ve tried yet, but I’ll keep working on it. A 40-second video would really take a while but, on the other hand, it’s the computer’s time, not yours, so hopefully more could get done in the meantime.

As a member of the team that worked on the summer project, I vividly remember the horror of monitoring the tracking of each individual bead frame by frame. Judging from the video, this Mathematica feature seems to solve that problem almost entirely by itself. The first question that popped into my head is why it tracked multiple points on one bead, and what can be done to prevent the software from doing that. Would it be as simple as putting in the initial coordinates of the beads? I am also curious how consistent it is at picking certain points; what I mean by that is: how easy would it be to set up two cameras, analyze both data sets, and merge them? If it can easily do that, then this is the way to go in the future. Either way, ImageFeatureTrack will certainly save a lot of stress and free up time to pursue interesting things instead of spending three hours on one bead.

Hey Andy, cool blog post.


Pingback: Finding grains | SuperFly Physics