There’s a fascinating piece up today at The Chronicle’s website on a new trend in student course evaluation — “smart” recommendation systems.
The premise is that course evaluations, on their own, don’t provide you with as much information as they could about how you’re likely to respond to (and how well you’re likely to do in) a particular class. If most of the folks taking “Immigration in America” are upper-level Sociology majors, and you’re a Bio student looking to fill out a distribution requirement, the fact that the prof gets high ratings for clarity doesn’t tell you a lot about whether you’re likely to sink or swim.
A smart course recommendation system, on the other hand, can pull out course evaluations from students like you — same year, same major, even similar GPAs — to see how folks in your position responded to a given class or professor. As the Chronicle notes, it’s basically applying the Netflix “our best guess for you” approach to movie ratings to the world of academic advising.
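To make the idea concrete, here’s a toy sketch of the “students like you” approach in Python. It just filters past evaluations down to students with a similar profile (same major, same year, GPA within a tolerance) and averages their ratings — a crude stand-in for what a real recommendation system would do with far fancier statistics. All the names, data, and the `predicted_rating` function are hypothetical, invented for illustration:

```python
# A toy "students like you" predictor: average the ratings left by
# students whose profile resembles yours, rather than the whole class.
# Everything here is hypothetical example data.

from dataclasses import dataclass

@dataclass
class Evaluation:
    major: str
    year: int        # 1 = first-year, 4 = senior
    gpa: float
    rating: float    # overall course rating, 1-5 scale

def predicted_rating(evals, major, year, gpa, gpa_tol=0.5):
    """Average the ratings from students with a matching profile."""
    similar = [e.rating for e in evals
               if e.major == major
               and e.year == year
               and abs(e.gpa - gpa) <= gpa_tol]
    if not similar:  # no comparable students: fall back to the overall average
        similar = [e.rating for e in evals]
    return sum(similar) / len(similar)

# Hypothetical evaluations for "Immigration in America"
evals = [
    Evaluation("Sociology", 4, 3.6, 4.8),
    Evaluation("Sociology", 4, 3.2, 4.5),
    Evaluation("Biology",   2, 3.4, 2.5),
    Evaluation("Biology",   2, 3.5, 3.0),
]

# A second-year Bio student gets a very different forecast (2.75)
# than the course's glowing overall average would suggest.
print(round(predicted_rating(evals, "Biology", 2, 3.4), 2))
```

The point of the sketch is just the filtering step: the Sociology seniors’ enthusiasm never touches the Bio student’s prediction, which is exactly the information a flat course-average rating throws away.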
While writing my dissertation, I uncovered evidence that student course evaluations first appeared in the late 1940s as a program of the National Student Association, a student-run organization that eventually grew to be one of the largest and most important student activist groups in American history. The course evaluation program at my own alma mater, in fact, started as an NSA-inspired project.
Student course evaluations have since been adopted by colleges and universities themselves, of course, even as sites like Rate My Professor have sprung up to provide students with franker, less filtered feedback. But as someone who is now on the receiving end of such evaluations, I know that they’re still often frustratingly vague and incomplete, and this kind of demographic number crunching strikes me as a big step in the direction of making them more valuable for everyone.