This week I gave a presentation at the Physics Education Research Conference (PERC) on Standards-Based Grading (SBG) with Voice: Listening for Students’ Understanding. I was grateful for the opportunity to speak there and for the opportunity to gather my thoughts about what I’ve been up to, and to brainstorm changes for the future.
I’ve written before about how I taught my Theoretical Mechanics course last spring. What I want to do in this post is capture some of the great feedback I got from the educators at PERC.
When I told people what I was up to, some of them said things like "sounds good, but it would never work for me – I have too many students." This is a point I've been thinking about a lot, as I'm really excited about this assessment technique and I'd love to use it in all my courses. The basic complaint is that it would be too hard to watch all the student screencasts in a large class.
At the end of my talk, Noah Finkelstein asked what the PERC community could research that would help me out, and I mentioned this scaling problem right away. Specifically, I asked whether the concept of peer review of these screencasts could be beneficial.
One idea people had was to check out Calibrated Peer Review. I have only scratched the surface at that site, but I'm grateful to have been pointed to it.
After thinking and talking about the idea of peer review for the past few days, I think I've decided to try some low-level research on this in my upcoming class (Advanced E&M) in the fall. I want each student to review a different student's screencast once during the semester. At first I wondered how to motivate it, and then I realized I could do it the way I do everything: make it a standard! Essentially, the research question is whether performing peer review can aid in learning (from both perspectives). The standard will be something like "I can assess another student's work using the common class scale." I'll assess their work by comparing their feedback to what I would give. If a student gets a 4 on that first try (meaning their feedback is similar in quality to mine), they can be done for the semester. If not, they can always try again.
A week ago I asked some students doing summer research about this plan, and they immediately hated it. Their comments focused on the work they'd have to do for others. I wasn't able to convince them of the value, to them, of doing the reviewing. I think I can couch it better in the course this fall. We'll see.
Two passes through a problem
Another thing I asked the PERC community for help with was the notion of how students should prepare their assessments. The two main ways they do it are:
- Write up a solution on paper, scan it, and call up the PDF on their computers to screencast their commentary. They hover their mouse over the relevant steps to make sure I'm looking at what they want me to see.
- Write up a solution with a pen tablet or smart pen, recording all the while.
I was talking with several people about the notion that the first method might lead to more learning. Effectively it forces the student to go over the material twice: once when writing it the first time, and once when they make the recording.
I think there’s something to that notion but it would be cool if someone could research it. For me, I’ll probably discuss it with my students (many of the students from the spring course are enrolled in the fall course) and see what they have to say about it.
I feel that my talk went over reasonably well. Several people let me know that it got them thinking, and a few said they're planning on revisiting their fall syllabi with my talk in mind. I'm not sure whether it was the SBG part or the "with voice" part that got people most excited. My guess is a little of both.
It was fun to be at the conference, as I had never been before. It was great to see how much setting "learning goals" or "learning outcomes" or, as I put it, "standards" is starting to focus the conversations around Physics Education Research. I'm excited to see where that goes in the future.