

Fish v. Douthat on Student Evaluations

I’m with Douthat.

Here’s Fish.

Margaret Soltan, June 24, 2010 9:09PM
Posted in: the university


11 Responses to “Fish v. Douthat on Student Evaluations”

  1. GTWMA Says:

    A point in favor of Fish


    The key quote: “Student evaluations are positively correlated with contemporaneous professor value‐added and negatively correlated with follow‐on student achievement. That is, students appear to reward higher grades in the introductory course but punish professors who increase deep learning.”

  2. Cassandra Says:

    It’s obvious you’ve rarely (if ever) had a functionally illiterate student in your class (one who failed to meet college standards for a writing class) write blatant lies in your evaluations, lies that trigger a call to the chair’s office to discuss what most rational people would instantly recognize as fabrication.

    In addition to the rather common accusation of a professor being a racist (who obviously has a grudge against the student), there are always the claims about professors not being prepared to teach (despite those prepared lectures and discussion questions), canceling too many classes (despite dates being listed on the syllabus as university holidays), and arriving late (even when one is always standing outside the door waiting for the previous class to end before any student arrives). These claims rarely have evidence attached. (Most students with legitimate gripes provide that evidence!) But many administrators would rather believe the anonymous complaints of their customers than actually investigate what are usually claims of pedagogical malfeasance.

    Just try teaching someone about crafting a thesis statement who cannot write a clear sentence (with a verb!) while also dealing with laptops and texters and chronic late-comers and absentees who demand notes (who, more often than not, also have trouble with those very same troubling thesis statements).

    I dare you.

    It would be nice if students could be trusted to be fair and accurate on evaluations, but, depending on your school and the quality of its admissions, many of us get stuck with students who retaliate solely because we spoil their fun-time with expectations of actual college-level work.

    It seems quite clear at times that your view from the rarefied tower of a private university obscures many of the on-the-ground problems for those of your peers who do not share that same job site.

  3. david foster Says:

    Seems like student evaluations could be useful *if* the procedure is designed, and the results are reviewed, with realistic understanding of some of the factors that will influence students in their assessments, such as the ones Cassandra mentions.

    I’d guess that administrators have strong incentives to make students happy, thereby maximizing tuition income and minimizing student/parent complaints and even threatened litigation, while having much weaker incentives to improve the quality of the teaching.

  4. GTWMA Says:

    As a department head, I have to say that I do think most of my peers try to use student evals conscientiously. The incentives to “make students happy” are a lot weaker than you think, and I am more than willing to deal with the student and parental complaints–that’s my job and why they pay me what they do.

    Cassandra’s obviously had a different experience, and I am sure it’s one that others have had, too. As a teacher, my view has always been that I need to put down my defensiveness before I pick up my evals, and be willing to hear what the students are telling me. Usually, I know ahead of time where I did well and where I did not. Sometimes, however, someone gives me some good advice, even after 20 years. Most of what I’ve learned about teaching came from my students.

    As an administrator, I tell my faculty to do that, too. I tell them that I’d rather see them hold students to good standards and take the slight hit in evals, than compromise their standards to curry favor, and that my job is to stand up for them when they do get complaints about that.

    And, I tell them that I’m not going to over-react to what a few students say in one class. Problems usually come out in a pattern across multiple courses from many students, reinforced by comments from peer reviews. The biggest issue is whether the faculty member can demonstrate that, if the pattern is there, they are hearing the concerns and working to address them. I only need to respond when my colleagues seem unable or unwilling to address an ongoing problem. And, I try to avoid hiring people who don’t take teaching seriously. That prevents a lot of problems.

  5. J. Fisher Says:

    Speaking of evals, what’s UD’s opinion on the death of Rate Your Students? I’m pretty darn sad about it. First the Celtics lose, then that site goes under. Not a good start to the summer.

    I’ll be brief, because I’m grading papers (and therefore have not waded through the two linked articles). I always take these eval conversations as time to sound the adjunct bell. As one, I live in fear of my virtual evaluations (whatever they might be on The Site That Shall Not Be Named) and the hardcopy ones that go to my department chair. As much as I am forever rethinking my grading policies and what constitutes “rigorous” assessment, I can’t shake the fear that if I piss off enough of my students by assigning *slightly* low grades, that irritation will show up in enough evaluations that, eventually, I’ll just be out of a job. Whether or not that would actually happen is another story. Nevertheless, because my attachment to higher education is, well . . . adjunct, I never want to push my luck (or, in some cases, the students). Therefore, I find my pen turning those B minuses into B pluses more frequently than it probably should because, quite simply, I need work. That’s a problem, in simple terms.

  6. theprofessor Says:

    The whole point behind the teacher evaluation movement was to improve teaching. Using these evaluations as the sole or primary way of evaluating teaching has been explicitly advised against from the start by the developers of the better instruments, but lazy department chairs and administrators started using the apparent “hard” numbers provided by the instruments as a shortcut for annual evaluations. In the late 90s, one especially dysfunctional college here based 100% of its teaching evaluations on two questions of a ca. 25-question survey.

    The instruments that I have used and seen are neither useful for, nor intended to make, fine distinctions in teaching effectiveness between, say, someone with a percentile ranking of 75 and another at 85. But show me someone who is consistently below the 30th percentile in a class in their own field, taught to majors in that or a closely related field, and I will show you a teacher who needs to improve. Show me someone consistently at or around the 90th percentile, and I will show you someone from whom you should be stealing ideas.

    In my experience, easy teachers do not get extremely high evaluations. They get better ones than they deserve, but on the crucial questions about how much the students learned, the students don’t seem to have much problem telling the truth. De-emotionalize the results, especially the comments, and focus on what can be improved.

    I have worked with duds and superstars, a dud who became a superstar, and everything in between. One common feature of the duds was an insistence that their teaching was wonderful and the students stupid and malicious. Having had some classes stocked with unusually stupid and/or malicious students, I can believe that happens on occasion. But nearly every time? Come on. When I have observed the duds’ classes, I have often seen poor speech mechanics, a lack of interaction with the class, verbatim reading from PowerPoints or overheads, confusing directions, visible disinterest in the subject matter, etc. I had an older colleague whose students complained consistently that she mumbled to the point of inaudibility. In my own observation, not only did she mumble, she tended to place her hand over her mouth for extended periods. Her preferred analysis, though, was that the students were discriminating against her as a woman, even though 70%+ of them were women.

    Cassandra, you are kidding yourself if you think that all private universities are Elysian Fields of docile, well-schooled overachievers. We face exactly the same problems that you do. The economic demographic of our student body is at the median for the 4-year public universities in this state. Since the kids and their parents are paying a higher price, it is probably the case that they are willing to work a bit harder, but they have higher expectations of us, too.

  7. david foster Says:

    GTWMA…”As a department head, I have to say that I do think most of my peers try to use student evals conscientiously. The incentives to “make students happy” are a lot weaker than you think, and I am more than willing to deal with the student and parental complaints”

    Glad to hear it. I’m not in academia, but I do have a lot of experience with the design, use, and misuse of measurement and incentive systems of various kinds. Very often, they wind up encouraging behavior which is not desirable…this is not a reason for avoiding them, IMNSHO, but *is* a reason for thinking about them and administering them intelligently.

  8. Ahistoricality Says:

    In short, they’re both right: Fish is right that student satisfaction surveys — which, along with “skill” testing, constitute the vast majority of attempts to gauge teacher quality — are crappy tools for evaluating teachers. Douthat is right that some method of evaluating teacher quality should exist, and student feedback can and should be part of that process.

    They pay these people to write this stuff?

  9. jim Says:

    I’ve never found student evaluations useful, either as a student or as a faculty member.

    When I was an undergraduate, I ignored them. What did I care what other people thought? More to the point, the courses I wanted to take, on the subjects I found interesting, weren’t given by a variety of teachers from whom I could choose. They were the enthusiasms of individual faculty. If I wanted to learn this stuff, I had to put up with the guy who wanted to teach it.

    As faculty, I find the numbers too few, too late to serve as statistical quality control and the comments too few, too scattered to provide feedback that I could use to modify what I did (plus I don’t see them until well into the next semester).

  10. Bill Gleason Says:

    I guess I’m with GTWMA on this one. I am also very sympathetic with Cassandra because it appears, from this and earlier posts, that she has had rum luck.

    When I was a graduate student, I stopped filling out evaluations because they were useless and no one ever paid attention to them. So I tell my current students that I will pay attention, and will make changes.

    This does not mean that I am willing to dumb things down. I want my students to be able to compete with anyone. They seem to understand this, and although some of them beat me up, most seem to be grateful.

    I also tell students, occasionally, to drop the course, when they complain. In about 10% of the cases, it is just not a good fit. Everyone’s brain works in a slightly different way and sometimes you just aren’t on the same wavelength. Best to move on. This also goes for graduate student/faculty mentoring relationships.

  11. Uncivil Liberties: Teaching Evaluations and A Clarification - Tenured Radical - The Chronicle of Higher Education Says:

    […] of Margaret Soltan at University Diaries, who draws our attention to the recent exchange between Stanley Fish and Ross Douthat on this […]

