Well, I’m in the dean’s office now. Pause while the WordPress server handles the sudden exodus from this page. OK, now we’re ready. I wanted to get some thoughts down (and seek useful advice from you, dear reader) about how we can incentivize various activities in higher ed. The reason I titled this “quantum economies” is that I’m realizing that most of my incentives (carrots and sticks) come in quite large chunks. What I’m trying to figure out is a way to have smaller chunks at my disposal.
I just got back from a conference that was all about assessment of general education in higher ed. Most of the people there either a) have drunk the Kool-Aid about assessment, or b) have accreditation breathing down their necks. I’d say I’m really in both camps. One of the most common points of discussion in the hallways was how to get better buy-in for various assessment activities on our various campuses. Some give stipends to those who really try to i) collect data, ii) analyze data, and iii) make changes accordingly. Some make sure to give a shout-out to programs that are doing some or all of those three things. But I have to say that most of us were feeling like we don’t really have great ways to incentivize programs to do this necessary work.
So I thought about what we do. One thing we do is explicitly pay attention to the assessment work that departments have done when doing both program reviews and reviews of new hire requests. Some at the conference were aghast when I mentioned that latter one, but for me it feels appropriate. My problem is that it’s too big. Our departments hire on average every 5 years or so, and our program review cycle is similar. That’s too rare and too big to really provide the day-to-day incentives that I’m looking for. The bigger issue for me is that we don’t have that many other program-level sticks or carrots.
I did have one idea that some at the conference thought was amusing. Some said that it would definitely work, while others just assumed I was joking. I’m not sure. I definitely originally thought of it as a joke, but maybe there’s something to it. What do you think: If your program doesn’t do assessment work, all (or most? or many? or more than usual?) of your classes have to be at 8am. We struggle a little to get people to teach at that time (I could write a whole other post about the interesting conversation on our campus about that time slot) and most would rank it toward the bottom of their preferred times to teach. Some at the conference also talked about courses that meet on Fridays in a similar vein.
This is definitely smaller than “you can’t hire” but it’s still pretty . . . petty, I guess, is the right word. At the conference others talked about how it would be great if getting to teach a gen ed course was something people fought over. They talked about how if those were the only, say, 18-person-capped classes on campus, then people would beg to teach them. Basically we were all trying to find ways to reward those who do good work (again: i) measure, ii) analyze, iii) correct) and, I suppose, punish those who don’t. One problem, of course, is the vicious cycle the “bad” programs would fall into with most if not all of these ideas.
So I’m still kicking around some thoughts. I’d love to get some more from you. I guess my goals are:
- That we find a better way to value assessing our learning outcomes.
- That we incentivize programs to do that hard work and take it seriously.
- (Most importantly) That we become better at serving our students.
I’ll end with my favorite quote from Kathleen Landy at the conference:
Assessment is not an autopsy, it’s a physical.
Thoughts? Here are some starters for you:
- That’s not at all what ‘quantum’ means. It’s clear you’re on the dark side of administration if you can’t even remember that from your physics teacher days. I’m out.
- I tried to leave this page but WordPress insisted I stay and read all this drivel. That’s 5 minutes I’ll never get back.
- I like the idea of ____ but I think a better twist on it would be . . .
- I hate the idea of _____ though I suppose you could make it better by . . .
- Who is General Ed? What’s his last name?
- Boy for me assessment of learning outcomes is definitely a bloody autopsy. What do you mean by it being a physical instead?
- I love teaching at 8am; this didn’t make any sense to me at all.
- Instead you should punish programs by …
- You should never use sticks, you should always use carrots and here’s why . . .
- You should never use carrots, you should always use sticks and here’s why . . .
- You should make any department that’s not doing their assessment do all their programming in Python (long-time readers will get that)
What is your definition of assessment? Is this college-prescribed student/teacher performance analysis? Coming from an area (physics) that has been, to my observation, at the forefront of reflective teaching through assessment (FCI, MBT, TUG-K, etc.), it would be very off-putting to have someone try to carrot/stick us into doing some third party’s assessment program.
Where are decisions on enrollment limits or class sizes traditionally made in your institution? Is this idea going to be seen as administrative overreach and meddling in faculty affairs?
I can also imagine that carrot/stick forced assessment is likely to result in low quality data from people who have no buy-in to the value of assessment, and therefore do whatever is least inconvenient to them so they can get back to research/teaching/whatever.
BTW, does this new job mean a blog name change? You can only get by with physics terms mixed into administrative stuff for so long…
Just you wait, we’re going to have QED enrollment seminars, Lagrangian adjunct hiring, Schroedinger first year seminars, etc.
There’s no question that everyone buys into assessment at some level. Usually it’s at their own class level, where they write, assess, and correct approaches to their own teaching. The problem I’m talking about is at the larger department level, and, even worse, at the whole-curriculum level (general ed stuff). There we’ve identified learning outcomes that don’t always weave into a single course’s learning outcomes (which are dealt with by the individual instructor). Your notion about low-quality data is one of the biggest concerns I have. I’m just looking for ways to have conversations with departments about the long-term improvement of their own programs but with some real benefits for them.
Hi,
I like the way you are thinking, but I do not like the particular suggestion of having departments teach at 8 am as a punishment. First, this punishment seems too arbitrary, and I think that it would make faculty resent both assessment (resulting in low-quality data AND a longer wait for everyone to drink the Kool-Aid) and the administration (perhaps your school is the exception, but there has been some tension between faculty and administration at every school I have been at). My second concern is that having mostly 8 am classes in one department could hurt students: it makes it harder for them to take classes from that department.
My best idea is to tie submission of assessment data to getting a GenEd designation: if a course does not submit assessment data for n consecutive years (maybe n=2 or n=3?), then that course will no longer count as a Math/Science/Writing/Whatever General Education course. That would drive students away, create less demand for the course, and indirectly lower the need for additional tenure lines in that department. This seems less arbitrary to me, since there is a valid argument of, “We need to make sure that our GenEd courses are effective. If you don’t help, you don’t get to be part of GenEd.”
I am certain that I am missing some flaws in this plan. What are they?
Yeah, I’m on board with the issues about resentment. What’s interesting is how many people fundamentally like the notion of setting and achieving learning goals, but when it comes to some of the harder or more longitudinal work it’s sometimes hard to get everyone at the table.
We have plans for basically your suggestion about removing designations if assessment data shows that no improvements are made. We haven’t really pulled the trigger on that yet, so it’ll be interesting to see if that really helps. For now that still looks too long-term (not really like a huge hammer or anything, just slow).
Are you primarily talking about department-level assessment here? In my LTC role, I’m working with IRA to think about better ways to support department-level assessment. It’s a challenge. One thing that IRA did was to compile a matrix of student learning outcomes for every department, and then look for departments that had shared student learning outcomes (e.g. information literacy, engagement with the creative process, ability to work in groups, etc.). We are going to try some targeted discussions among representatives of a group of departments around a common SLO and see if there are ways that they can share the assessment efforts (rubrics, data collection, etc.) so that it reduces the workload for any one department. Hopefully, by encouraging collegial conversations, we can change the mentality from “our department versus the administrator asking for the assessment data” to “our group of departments trying to learn from each other about ways to assess a common SLO”. This doesn’t provide any direct carrots or sticks, but we’re hoping people will see the appeal of trying to make individual department assessment less onerous by sharing ideas with others trying to assess similar SLOs.
Yeah, primarily about program-level assessment, though about our general education outcomes as well. I really like your idea of having departments work together. I know that we’re kicking around the notion that if, say, a math major is required to take a physics class that seems to address a math-major-level learning outcome, then the physics class/instructor/department should be involved with that assessment (artifact collection, rubric measurement, course correction (pun intended)).
I’ve been one of three people doing our department’s annual assessment report for the last few years, and I also did time on the college assessment committee (reviewing departmental assessments). I don’t like any of your suggestions, and I don’t think external motivators will work (unless your goals are very limited: making departments do something rather than nothing).
For me, the only incentives that work are if (a) I’m personally interested in the results of the project, or (b) other faculty in my department are interested. Right now we don’t really have (b), so in years when we find a project that satisfies (a) I’m engaged, and in other years I just do the minimum required.
The biggest impediment from my point of view is the one-size-fits-all nature of the projects we must design and the reports we must write. I understand why this is (administrators need to take these and write more reports for the accreditors), but it makes it very hard to come up with interesting projects that could really impact the courses we teach.
So my advice would be to go to the departments and ask them what they want to get out of their assessment projects, and see if you can find some way to give them what they want, so they can do projects they actually want to do.
We definitely have a few programs that aren’t really doing much, so some of this is about them (do something instead of nothing). But I’m much more interested in working with colleagues who see a ton of benefits to themselves and their programs from doing this hard work.
Thanks so much, Andy, for the mention in your post. I’d be remiss if I didn’t give due credit to Doug Reeves, whom I believe first introduced the medical analogy (getting a physical, as opposed to an autopsy) to reinforce the difference between formative and summative assessment.
That being said, while I’m more a fan of carrots than sticks, I think there is more to gain through a broader, institutional effort to foster a culture of teaching & learning that recognizes that assessment is PART of teaching, not merely something with which we must bother AFTER we teach.
Thanks again!
Thanks for the comment Kathleen (and thanks again for a great session at AAC&U!). I agree that assessment should be part of teaching; I’m just feeling like there are some programs that need to be enticed into thinking that way. Once they’re in, and we’re in a world where we can really figure out which approaches are helping our students, I think we won’t need those types of incentives anymore. At that point I think programs that deserve/need resources will be easy to identify.
I’m still trying to figure out how to assess the learning outcomes I really value — big-picture understanding, epistemological stance towards math as a language and modeling toolkit for physics, etc.