By Cooper Hammond
Section V.B.1 of the Faculty Code states that “The recommended distribution of passing grades over a period of years for all courses is as follows: A – 25%, B – 45%, C – 25%, D – 5%.” In other words, it asks professors to grade their students in such a way as to create this distribution.
Although it is in the Faculty Code, many professors do not follow the guidelines. The Quest spoke with Kelly Chacón and Danielle Cass – both professors of chemistry – and a third professor who wished to remain anonymous. It was evident that these three professors had very different thoughts on what V.B.1. meant to them, how they used it, if at all, and how they saw grades generally.
Professor Chacón saw the guideline as a very positive thing, saying that when they started teaching at Reed, “The guideline was extremely helpful to me, to make sure that at the end of the day the 1 or 2 [high-performing] students, if they did exist,” did not influence Chacón to block “other students” from receiving good grades, as well.
They also found that V.B.1. reinforces the idea that, “There probably will be a lot more B’s and C’s [than A’s] because that’s an average.” The understanding of an average grade was inconsistent between professors, with the anonymous professor expressing that, “B has to be the average score.”
Professor Chacón wondered, “How do we distribute fairly to where not everybody’s getting a C? So, ‘what is an A?’ sounds weird, but is that 100% of all of the material done exactly perfectly, or is that some other metric by which you’ve shown over and above that you have complete mastery of the material? So that’s the question, but… you rarely get A’s because… that would indicate you got 100% of every single thing, so they’re very rare, and they ought to be because that’s not something we should aspire to; it’s just a thing that sometimes happens.” They made it clear that “You don’t have to get an A, or be an A, to show that you understand and took joy [from the subject] and can become something in the major.”
Professor Chacón continued, saying, “[At other colleges], if it’s below a 48%, it’s a D. But at Reed, that’s different because we’re looking at your understanding, your willingness to learn – there are other metrics that come into this.” Professor Chacón believes that V.B.1. allows professors to quantify a willingness to learn, meaning performance and effort could, in theory, be graded on the same metric.
The anonymous professor, meanwhile, opined that a grade is, “a measure of performance,” and that grades indicate, “mastery… rather than performance relative to each other.” They saw grades distributed under V.B.1. as a measure of comparison between students. They found it not only confusing but “problematic to evaluate students by their peers,” which contrasted with Professor Chacón’s thoughts on this method of grading. As Professor Chacón said, “If there was clearly one outlier, they would get an A+, and I would start the A’s at the next clump of students.”
The two professors did agree on one point: which letter grades matter to them. Professor Chacón asked, “Where is the D? Where is the F? That’s more important to me than the A’s and B’s. I want to give other students an opportunity to continue without feeling like I am underinflating or overinflating, or fucking anybody over. That’s the last thing I wanna do because here, we’re here to learn, not to perform. So that’s … where the guideline was important to me.” The anonymous professor felt similarly, saying, “I want as many students to get an A as possible,” but, “the only number I pay attention to is how many students are failing. I don’t really care about the top end.”
The anonymous professor also said that their classes’ grades aren’t actually normally distributed, so V.B.1. wouldn’t work. “I’ve never taught a math class in ten years where I get a normal distribution, so why is it fair to grade that way?” To them, a wide spread in student performance makes grading by the guideline “patently unfair,” returning to the idea that grading students by comparative performance is problematic. They generally found Reed’s ideology on grading unsatisfactory. They were of the mind that the grades of Reed graduates are disregarded outside the college because, due to their non-inflation, Reed’s grades may not measure up to standards in the “real world,” even with attached letters of recommendation. In their words, Reed needs less of an “ivory tower” mindset when it comes to grades because such a mindset is a “real disservice to students.”
This anonymous professor was very clear that they, “will never support a uniform grade distribution,” and that they did not use or refer to V.B.1. in any way while grading their students. Further, they, “really don’t know,” who does or does not use V.B.1.
Professor Danielle Cass also didn’t focus on V.B.1. while grading, saying it was something “I hadn’t thought to care about.” During a faculty meeting she attended last year that discussed V.B.1., Cass felt that, “I wasn’t the only one that didn’t think about it,” and, “we all were curious about it.”
To Professor Cass, the distribution recommended by V.B.1. applies over a period of five years, not one year, as Professor Chacón and the anonymous professor understood it. The reason she gave for this interpretation was that, “on any given year, I have trouble being forced to put students into given bins,” as she presumed the guideline asked her to. Put differently, using the guideline over the course of a single year felt unfair to students, as they naturally “bin themselves,” so she doesn’t “need an algorithm” to tell her where to put them.
Despite her distaste for the guideline under a one-year interpretation, Professor Cass found it “useful as a comparative tool” for picking up on unexpected trends in grading but specified that “It’s typically [used] after the fact; I don’t want to be biased by it. It’s another piece of evidence to figure out if I’m being reasonable or not.”
Professor Chacón also touched on bias, but in a different manner: using V.B.1. to proactively counter it. They put forth the hypothetical that, “if you do have someone, perhaps, that does have some inherent bias, at least they have that [knowledge that] 15% [of students] should be in that [high-performing] range, and they’re sort of … forced to check [themself] and think” about what grades they assign to whom.