Three different technologies have affected either my actual teaching or my thinking about future teaching this past week.
For the first time since I started teaching in what others call a “flipped classroom,” I’m teaching a class for which I’ve already developed a full set of out-of-class video resources for my students. The course is Physical Optics, and I taught it two years ago with an earlier version of the same text I’m using this year. Back then I produced nearly 100 screencasts, each 3-5 minutes long, that supported the text. It’s been fun leveraging that cache of resources this year.
The first thing I did was put links on my daily outlines mapping the new edition’s sections to the old screencasts. In nearly every chapter the section numbering shifted by one, because the new edition no longer numbers the introduction section, and I had titled the old screencasts using the old numbering. Now students can find the appropriate screencasts for each day’s material, though they’re still free to browse any of the screencasts at their leisure.
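The remapping above is simple enough to sketch in a few lines. This is a hypothetical illustration, not my actual linking setup: it assumes the old screencasts were labeled by chapter and section under the old numbering, where the introduction counted as the first section, so each new-edition section number is one lower than its old counterpart.

```python
# Hypothetical sketch of the section remapping described above.
# Assumption: old screencast labels used "chapter.section" under the old
# numbering, where the intro was a numbered section, so the new edition's
# section N corresponds to the old edition's section N + 1.

def old_screencast_label(chapter: int, new_section: int) -> str:
    """Map a new-edition section number to the old screencast label."""
    old_section = new_section + 1  # the intro absorbed one section number
    return f"{chapter}.{old_section}"

# New-edition section 3.2 maps to the screencast titled for old section 3.3.
print(old_screencast_label(3, 2))
```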
For a given day of class, I post the reading I want them to do, along with the appropriate screencasts. In class I answer any questions posted through my home-built question-and-summary database and discuss any comments that have come through the groupme.com out-of-class backchannel. Then we spend the majority of the time working in small groups, developing and practicing the skills and knowledge necessary to be successful when assessing the various standards of the course. The new twist in this class is that at the end of that time the students request more screencasts to address the areas they’re still confused about.
That means the screencasts work as both pre- and post-class resources. Last semester I realized, by looking at the browsing data for my screencasts, that students mostly use them in review mode. As for my workflow, I’ve gone from producing five or so screencasts per class focused on what I thought was important or confusing in the text, to producing two or three screencasts targeted directly at what the students feel they need help with.
I like this workflow and I can see how the pattern would likely continue if I were to teach the class again in the future.
Sharing screens in a computer lab
For the fourth year in a row, I’m teaching the lab for a course in the math department called “Applied Math.” In it I typically show students ways to numerically investigate the theory presented in lecture. For the physics department, the lab tries to get students comfortable with Mathematica in general, and with error propagation, curve fitting, graphing, data manipulation, and integrating differential equations specifically. We spend an hour and a half per week in a computer lab (minus the 10 minutes I devote to my 10-integrals-in-10-minutes quiz, which will probably get its own blog post someday).
As I was preparing for the course this year, I realized that I get frustrated by the varying speeds at which students move through the material. That’s to be expected, of course, but I know I’m not great at keeping the fast end engaged and the slow end from getting frustrated. So this year I decided to see how well it would work to have everyone help me diagnose syntax errors, instead of my roving around the room and helping people individually. Even though we’re all physically in the same room, I have everyone log into my online office hours (the same system we use for the Global Physics Department) so that they can easily share their screens. This past week we tried it for the first time, and I felt it went pretty well. As students found things weren’t working, I would ask them to share their screen, and the whole class would engage in helping debug the problem. This kept the fast students engaged, as they were developing valuable debugging skills along with learning different ways to accomplish the same thing. I think it helps the slower students too, some of whom often don’t even know how to begin a task.
At first I tried to get everyone to look up at the projected screen, which meant asking the sharing student to zoom their text, but then I realized it was much easier to have them just look at their own terminals, where the shared screen also appeared.
I don’t know if I’ll go further and have students request control of others’ screens, but the feature is definitely available in the software.
Wiimote smartboards

In my Optics class, the text is a free PDF document. Often in class I’ll project it onto the whiteboard and either annotate it myself or have students or groups come up and do the same. What sucks is when I forget what I’m doing and scroll the screen, because the marks on the whiteboard don’t scroll.
That got me thinking about so-called smartboards. My school has a couple of classrooms with those installed, and if I were teaching in one of them I could annotate the digital document and the marks would scroll right along with the text and figures. However, I’m not scheduled in those rooms. So I’ve decided to finally pursue something I’ve heard about for a few years: wiimote smartboards.
If you haven’t heard about these, they basically use an IR LED and the camera in a wiimote, along with some Bluetooth-connected software, to turn any projected screen into a smartboard. The software is either free or about $30, depending on how hard you’re willing to work, and the hardware runs about $60-$70. So for roughly $100 you get nearly all the functionality of a $2000-$3000 smartboard. Plus you get portability!
I’m hoping to get it set up and working over the next few weeks to see whether I and/or my students benefit. It may turn out to be too hokey, but I have a sense it’ll come in handy in this class.