"Leopold Stotch" at Outside the Beltway discusses Princeton's recent efforts to crack down on grade inflation. In a nutshell, Princeton has limited the number of grades of A- or better to 35 percent of the class in most cases. Leopold doesn't like the idea of an administrator who has never taught telling him how to evaluate his students. As a fellow professor, I cannot help but agree with that (though I acknowledge that my situation is different--my department chair, associate dean, and dean are all first-rate academics). Steven Taylor at Poliblogger agrees and Robert Prather at Signifying Nothing evaluates the policy a bit more sympathetically as it pertains to the way administrators deal with professors.
But in reference to the Princeton case, Leopold states:
The fact of the matter is that students at Ivy League schools should be getting a disproportionate number of As -- otherwise, why were they admitted?
This would be true if they were placed in classes alongside students who did not gain acceptance to Princeton, but not among other students who did. They were admitted because they showed the promise to be able to make the most of Princeton's academic environment. In the context of that environment, expectations should be sufficiently high that only a small minority of the students will merit an A or A-.
The main problem with grade inflation is that, since the maximum grade stays fixed, it tends to compress the distribution of grades. Some people get A's because they would earn them even if expectations were higher, and some people get A's rather than B's because grades have been inflated. This benefits less capable students at the expense of more capable students, and I see absolutely no reason for this to occur.
Note that this problem of grade compression refers to grades at a point in time. What generally captures people's attention is when the distribution of grades shifts higher over time. This is what got Harvard into trouble during the 2001-2 academic year, when it acknowledged that a disproportionate share of its students were graduating with honors. (The lowest honors threshold was a fixed GPA, and over time, more and more students crossed it. I believe that the problem has been fixed by limiting the number of students who can qualify for honors based solely on a GPA. At Dartmouth, where Latin honors are determined by GPA, the fractions graduating summa, magna, and cum laude have historically been limited to a fixed percent of the class.) If students are arriving better prepared over time (I'm not convinced), or if resources available to them are improving over time (certainly true), then our expectations of them should be increasing over time in a commensurate way.
At Dartmouth, the faculty voted in the 1993-4 academic year (the year before I got here) to display the median grade for each course on the student's transcript alongside the grade awarded. Classes with fewer than 10 students are excepted. The transcript contains a summary of how many classes the student earned a grade above, at, or below the median. This is a useful addition, because it makes the transcript a more honest representation of the student's performance. (Read here to see how that point was lost on the editors of a student paper from a university with the motto, "Veritas.")
However, including the median grade on the transcript is incomplete as a measure to address the problems of grade inflation.
First, it doesn't stop grade inflation over time. We know this because the honors thresholds typically increase each year and because we can analyze the data and see for ourselves that median grades are rising. The reason that the policy doesn't stop grade inflation is that it is not used to change grading policies in any formal way--there is no consequence on campus of having a course with a high median grade.
Second, because the information is collected and presented but not used to change grading behavior, it allows for very large differences to persist across easily identifiable groups. For example, controlling for course size, course number (a proxy for the level of the course), and enrollment, courses in the humanities over the past two years have had median grades that are 0.136 and 0.111 (out of 4.0) higher than those in the sciences and social sciences, respectively. This comparison excludes the language courses, where median grades are even higher.
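The comparison above comes from a regression of median grades on course characteristics. As a rough illustration of how such an estimate is obtained, here is a sketch using invented data in which a humanities premium of 0.136 is built in by construction; the variable names, sample size, and all values other than the 0.136 gap quoted in the text are made up for the example.

```python
import numpy as np

# Invented data: 200 courses, each with a division indicator and controls.
rng = np.random.default_rng(0)
n = 200
is_hum = rng.integers(0, 2, n)      # 1 = humanities, 0 = sciences
level = rng.integers(1, 5, n)       # proxy for the level of the course
enroll = rng.integers(10, 120, n)   # course enrollment

# Generate median grades with a built-in 0.136 humanities premium
# (noise-free, so the regression recovers it exactly).
median_grade = 3.2 + 0.136 * is_hum + 0.02 * level - 0.001 * enroll

# Ordinary least squares: intercept, humanities dummy, level, enrollment.
X = np.column_stack([np.ones(n), is_hum, level, enroll])
coef, *_ = np.linalg.lstsq(X, median_grade, rcond=None)
print(round(coef[1], 3))  # estimated humanities gap: 0.136
```

With real transcript data the estimate would of course carry sampling error; the point is only that the gap is measured holding the listed course characteristics constant.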
Third, it is possible that some classes have high median grades because there are a disproportionate number of very talented students in that class, and this policy doesn't do anything to reflect that information. It could be made even better if it did.
With those issues in mind, what changes would I make to Dartmouth's current system? When computing class rank and awarding Latin honors, I would adjust for known and persistent differences across departments in grading practices. I am open to suggestions about precisely how the adjustment would take place. Using the difference between the student's grade and the median grade in the course seems like a useful place to start.
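That starting point can be sketched in a few lines. The function and the sample records below are hypothetical, meant only to show how a grade-minus-median adjustment would reorder two students whose raw GPAs tell the opposite story.

```python
def adjusted_gpa(records):
    """records: list of (grade, course_median) pairs on a 4.0 scale.

    Returns the mean of (grade - median): positive means the student
    typically outperformed the median student in his or her courses.
    """
    if not records:
        return 0.0
    return sum(g - m for g, m in records) / len(records)

# A student with straight A's in leniently graded courses...
lenient = [(4.0, 3.9), (4.0, 3.8), (4.0, 3.9)]
# ...versus a student with lower grades in courses with a B median.
strict = [(3.3, 3.0), (3.4, 3.0), (3.3, 2.9)]

print(round(adjusted_gpa(lenient), 3))  # 0.133
print(round(adjusted_gpa(strict), 3))   # 0.367
```

On raw GPA the first student ranks higher; on the adjusted measure the second does, because his grades were earned against a stricter median.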
Even better, I suppose, would be to also incorporate a measure of the ability of the students in the course in addition to the grade earned and the median grade. One such metric would be the average combined SAT scores of the students in the course. We would almost have the usual Ratings Percentage Index used in sports such as NCAA basketball and hockey, which is a combination of a team's winning percentage, its opponents' winning percentage, and its opponents' opponents' winning percentage.
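By analogy with the RPI's fixed-weight average, such an index might score each course as a weighted combination of the grade earned, the margin over the course median, and the strength of the classmates the grade was earned against. The weights, the SAT rescaling, and the example values below are all assumptions for illustration, not a worked-out proposal.

```python
def course_score(grade, median, avg_sat, w=(0.5, 0.25, 0.25)):
    """Score one course on a 0-4-ish scale.

    grade, median: the student's grade and the course median (4.0 scale).
    avg_sat: average combined SAT of students in the course (400-1600).
    w: arbitrary fixed weights, by analogy with the RPI's weighting.
    """
    # Rescale combined SAT from [400, 1600] onto the 0-4 grade scale
    # so the three components are comparable before averaging.
    sat_scaled = (avg_sat - 400) / 1200 * 4.0
    # Reward the grade itself, the margin over the course median,
    # and the measured ability of the classmates.
    return w[0] * grade + w[1] * (grade - median) + w[2] * sat_scaled

# An A- beats an A when it is earned over a lower median
# against stronger classmates.
print(round(course_score(3.7, 3.3, 1400), 3))
print(round(course_score(4.0, 3.9, 1200), 3))
```

A student's overall index would then average these course scores, with class rank and Latin honors computed from the result.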