TL;DR Traditional tests require students to work a few (hopefully) well-chosen, illustrative problems to demonstrate proficiency. What does it matter whether they work these few problems early in the semester or late? What does it matter whether they work them with us watching or not? What makes one essentially random problem more valid during one part of the semester than another, especially if students are to be judged entirely on whether or not they work it correctly? I think this arbitrary inconsistency must be abandoned, especially if we are to defeat cheating culture and find something better than traditional grades.
I debated whether to use hypocrisy, fallacy, randomness, arrogance, stupidity, or one of several other words in this post’s title, but no single word seemed strong enough to me, so I settled on hypocrisy.
Traditionally, when we test students, we like to think we are at least attempting to see whether they have learned anything. Tradition has dictated that these tests be administered in writing, in a classroom, in silence, in individual isolation, sans resources, and with a hard deadline (usually the end of the current class meeting). Then we grade these tests using traditional point values, deducting points for errors. We end up with a number that we write at the top of the first page, and this number is ostensibly an indicator of the student’s current understanding. There are so many problems with this traditional, rather quaint approach that I have difficulty deciding where to begin.
Allow me to begin with the point-value system of grading. How many points is a sign error worth? What about a notation error (e.g. using something other than E for electric field, or equating a vector to a scalar)? How many points is an algebra error worth? A numerical error? A terminology error? How many points is organization worth? Showing one’s work? Neatness? Handwriting? The actual final answer (in whatever form was asked for)? The problem is that these things are totally subjective. There is no objective algorithm for them, and I dare say there never will be. I mean, if the physics community, which already doesn’t really value undergraduate teaching outside of the PER community, can’t even agree on what content should be in an introductory course, I can’t see it reaching agreement on such mundane things as point values for grading. I think even asking is a waste of time, and I’m no longer up to dealing with the character assassination that accompanies such requests. Furthermore, such a list would surely be dominated by the input of four-year institutions, the perceived discipline experts, shutting out community college and high school physics faculty and imposing a strict adherence to which they (the four-year schools) arrogantly refuse to subject themselves.
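To make the arbitrariness concrete, here is a minimal sketch in Python. Every error label, point value, and rubric below is invented purely for illustration; I am not quoting anyone’s actual grading scheme.

```python
# Two equally "reasonable" deduction rubrics applied to the same student
# solution. All error labels and point values are invented for illustration.

# Errors a grader identifies in one hypothetical student solution
student_errors = ["sign_error", "notation_error", "algebra_error"]

# Grader A's deductions, out of 10 points
rubric_a = {"sign_error": 1.0, "notation_error": 1.0, "algebra_error": 2.0,
            "numerical_error": 1.0, "terminology_error": 1.0}

# Grader B's equally defensible deductions, out of the same 10 points
rubric_b = {"sign_error": 2.0, "notation_error": 0.5, "algebra_error": 3.0,
            "numerical_error": 2.0, "terminology_error": 0.5}

def score(errors, rubric, max_points=10.0):
    """Start from full credit and subtract each rubric deduction."""
    return max_points - sum(rubric[e] for e in errors)

print(score(student_errors, rubric_a))  # 6.0 -- a borderline pass
print(score(student_errors, rubric_b))  # 4.5 -- a clear fail
```

Same solution, same identified errors, and the grade swings by more than a letter depending purely on the grader’s taste in deductions.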
The other aspects of traditional testing, I feel, broadly constitute a set of conditions that exist nowhere in reality other than in a traditional academic classroom. I don’t know a single professional in any job who is evaluated the same way we traditionally evaluate students. Sure, professions like medicine and healthcare sometimes ask practitioners to sit for licensure exams, but that seems to me to be the exception. I don’t know a single professional scientist, in any discipline, who has to do that. Working scientists get to use the resources at their disposal. Science progresses at its own pace, too. Look at how many years passed between Higgs predicting his boson and its detection at the LHC. Higher education is already biased toward students who learn faster, not those who learn better (faster does not always mean better), and that significantly harms my teaching audience. This seems to fly in the face of all the equity discussions that are now all the rage on social media, yet I haven’t seen anyone point it out.
Many outsiders to higher education like to complain that the ivory tower bears no resemblance to the “real world.” That last term is used so often that I don’t really know what it means anymore, but I’ll forgo that discussion for the moment. When it comes to testing and other forms of assessment, though, I have to agree with them. If our overall goal is for students to learn, what difference does it make whether they can work “this problem” or “that problem” yesterday, today, or next Tuesday, as long as they demonstrate they can do it by, say, the end of the course? Different students figure things out at different paces, and one thing I realized long ago (about two decades ago) is that traditional, status quo higher education doesn’t take this into account. It expects all students to learn at the same biased, unreasonable rate. That just doesn’t happen. This alone may contribute to students perceiving that they’re not cut out for our discipline, which, let’s face it, has a reputation for being difficult. (But is it really more difficult than other disciplines? Have you taken a literature course lately?)
Well, okay then. If a student bombs this test, they’ll have another opportunity on the “final exam,” right? Maybe, and maybe not. Will the student get to demonstrate that exact same problem again? Probably not, mainly because of the possibility of memorizing its solution without understanding it. Fair enough. Will that particular topic even be included on the “final exam” this time? I honestly can’t say. Even more troubling to me is the question of whether or not the student will receive full, and I mean completely full, course credit (e.g. an A) for performing perfectly on this mysterious “final exam” after having failed every other test during the semester. It seems to me they should, because otherwise they are being openly, and unfairly, penalized for having gone through the process of learning, the very thing we all claim we want them to do yet punish them for actually doing. I can’t see this as anything other than abusive hypocrisy. I also can’t understand how this practice squares with the recent calls for diversity, inclusiveness, and equity in higher education that rule social media these days. Perhaps I don’t understand those issues well enough, and perhaps there is something about this hypocrisy that I’ve never been told and am supposed to “just know” after being in this career for three decades. Maybe I’m not in the “right” club to be in the know.
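To see the arithmetic behind that question, consider a minimal sketch in Python. The scores and the 60/40 weighting are hypothetical, invented for illustration rather than drawn from any particular syllabus.

```python
# A student who struggles early but demonstrates complete mastery on the
# comprehensive final. All scores and weights here are hypothetical.

test_scores = [40, 50, 55]   # three semester tests, out of 100
final_exam = 100             # perfect performance on the final

# Traditional weighted average: tests count 60%, the final counts 40%
# (an invented but plausible split).
test_average = sum(test_scores) / len(test_scores)          # about 48.3
traditional_grade = 0.60 * test_average + 0.40 * final_exam
print(traditional_grade)  # 69.0 -- a D on most scales, despite full mastery

# Grading on the most recent evidence of learning: the final is the best
# measure of what the student knows at the end of the course.
mastery_grade = final_exam
print(mastery_grade)      # 100 -- an A, crediting the learning that happened
```

Under the weighted average, the student who learned everything by the end of the course still nearly fails; under a most-recent-evidence scheme, the very same evidence earns an A.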
The process of proctoring tests troubles me too. Nowadays, in these pandemic times, it has grown into a home privacy issue, and I fully stand behind students who see it that way. Here’s another jarring revelation that recently fell into my lap: we can’t stop cheating. It’s that simple. Students who have a reason to cheat will find a way to cheat, and there’s nothing we can do about it within the traditional framework. As long as “getting the right answer” is what is valued (despite what students are told), there will be pressure to do just that, and cheating will persist. I think this is common knowledge among serious teachers. The thing that annoys, even enrages, me the most is the outright refusal of most faculty to change their ways to reduce the pressure to cheat. I think this is mainly for two reasons. One is that it takes actual effort, and at four-year institutions teaching is not the focus (despite the marketing to the contrary). The other is a stubborn appeal to tradition, senseless tradition in my opinion. As academics, we’re supposed to be better than that.
Higher education’s business model contributes to this situation too. Nowadays the goal for most schools is to enroll as many students as possible, with no regard for their preparation or their socioeconomic ability to devote sufficient time to education. In fact, higher education has explicitly marketed a college education as something that can be done in one’s spare time. This is nonsense, and I think we all know it. We quietly accept the propaganda to keep the peace with administrators. Some students simply do not have the time to devote to a college education, and that is perfectly fine, but institutions should not predatorily lure these students into enrolling (and paying tuition) and then blame faculty when these students do not (and indeed cannot) “succeed,” as the buzzword goes. If anything, enrollments should be kept low rather than made ever larger. Class sizes should be decreased rather than increased. Administrators cringe at this reality, because for them the game is all about money, as I have sadly come to realize. Many institutions across the country, particularly community colleges, have become teaching mills, where the goal is to enroll increasing numbers of insufficiently prepared students to “give them an opportunity to succeed” with no regard for anything else. I guess offering many opportunities with a low probability of success is considered better than offering fewer opportunities with a higher probability of success; that’s what I’m told. These inappropriate and unfair business models put pressure on institutions to “make students succeed,” a phrase that has become such a clichĂ© that I no longer know what it means. That pressure is most easily accommodated with traditional grading, which is anything but fair. In fact, it is most unfair to the least prepared students, the ones we’re supposedly the most interested in helping. It just doesn’t make sense to me.
This post is mostly the result of a conversation I have been having with myself for a long time, ever since abandoning traditional grading for standards-based grading, adopting oral interviews for assessment, and generally having the audacity to do things differently. I’ve been told I probably shouldn’t tell students how my methods improve on traditional teaching because it may cause them to think negatively about faculty who still lecture traditionally. I don’t understand this at all, because I think traditional lecturing should be frowned upon, but the goal is to “keep the customers happy” now. I constantly worry about whether or not what I’m doing is academically sound, harmful, too rigid, too loose, or some combination of these. I lie awake at night thinking about it. Students tell me they love what we do and wish every course were taught this way, but when they get frustrated they regress to the “why can’t I just get a grade and be done with it?” mindset, and I begin questioning everything in sight, including whether I should go ahead and retire or stick with it for another year. Obviously I’ve chosen the latter, because I’m still here, but I don’t know how much longer I can stick with it (in this environment).
Higher education seems to be stubbornly clinging to a giant argumentum ad antiquitatem when it comes to what testing is all about. We are supposed to teach our students not to fall victim to logical fallacies, yet here we are. I hate that, because we are collectively supposed to do what’s best for students (that’s what I’m told, anyway), but our practices don’t always reflect that. Sometimes that’s because of the flawed and unethical system higher education has become. But sometimes it’s just, as my father used to say, pure-tee stubbornness. Honestly, until familiarity with discipline-based teaching literature and reformed teaching practice becomes a hiring criterion for teaching positions, which isn’t likely given the ironic, inherent disdain for teaching in the hard sciences, I don’t think any of this can change, much less will change. It’s almost as though we need a new model.
As always, feedback is welcome.