Oral Interviews as Assessment

TL;DR: I have begun using oral interviews in place of traditional written tests and quizzes. There are many advantages for faculty and students, including the elimination of paperwork, a clearer picture of each student’s comprehension, a more relaxed environment, and no opportunity to game for points.

The ongoing pandemic has amplified many questions I have had for years:

  1. Must tests always be in written form?
  2. Must grades always be based on points?
  3. Must students be proctored during assessment?
  4. Is cheating widespread, and if so, why?
  5. If we want students to learn X, what does it matter whether they learn it in the first week of the semester or the last? Why should we penalize them for taking longer?
  6. Does previous practice hinder change and innovation?
  7. Do administrators hinder change and innovation in teaching by insisting on adhering to antiquated, inequitable, socially biased, subjective assessment practices?
  8. Why does trying to innovate induce so much fear?
  9. Nowadays faculty have to answer (or at least respond) to so many academic staff about the “status” of students in various groups. This is all code for “What are their grades like?” These people are completely flummoxed when I tell them I don’t use traditional grades, and they tell me that’s the “only way” they can report student progress. This is a huge problem, for them and for me.
  10. Do students really want better ways of demonstrating understanding, or are they so deeply conditioned by the status quo that they know how to game it and don’t want that to change?
  11. As duly hired qualified faculty, am I not obligated to use practices that are best for students regardless of fallacious logic against doing so?
  12. Am I wasting my time thinking about these things?

That list isn’t complete. In this post, I want to focus on assessment (in the good instructional sense, not in the administrative sense). In discussions with colleagues around the state and country, I am concerned about the number of people who think that assessment must look a certain way: written, points-based problems presented under arbitrary time constraints that induce pressure and anxiety, punish students for not having immediate recall, punish students for learning at different rates, and present an unrealistic picture of future expectations. Any argument that begins with, “In the real world…” no longer carries any weight with me because the “real world” is subjective.

Since the pandemic forced us into remote teaching and learning, proctoring has reached new heights of privacy invasion. I think any attempt to compel students to install third-party software on their computers that allows someone else to take control of their webcam or entire computer is unethical. Students’ rights don’t cease to exist just because they welcome us into their private environment to help them learn. If anything, students’ rights should take priority over all else in these situations.

I don’t claim expertise or superiority in any way, but here is how I have decided to tackle these problems. These guidelines are specific to my calculus-based physics course that uses Matter & Interactions, but they are obviously adaptable to almost any situation.

  1. For each chapter, I select a list of five to ten (nothing special about that quantity) representative questions, problems, and computational problems. Criteria include conceptual content (e.g. choice of system, choice of reference frame, etc.), mathematical content (e.g. use of vector algebra/calculus, symbolic algebra, dimensional analysis, etc.), use of symbols rather than numbers, use of analytical calculus (e.g. textbook calculus), and programming skills appropriate for the course. I also generate some of these questions myself, mainly to reinforce things we discuss in class that may not be mentioned in the textbook.
  2. Students are expected to write up a full solution to each problem.
    1. For each problem, students must provide a step-by-step annotated solution written in LaTeX using Overleaf (a minimal skeleton of such a write-up appears after this list). Of course, students know they must first work things out on paper (actual paper, or digital paper in an app like Notability or something similar…these apps are becoming ubiquitous).
    2. For each problem requiring numerical computation of any kind, students must provide a GlowScript program (a minimal sketch appears after this list), along with a link that takes me to the program on the GlowScript site. Handheld calculators are deprecated. We’ve been teaching computation all along with the use of calculators (did we even realize that?), but we need to move with the times. Obviously, for a purely algebraic problem this part isn’t required.
    3. For each computational problem, the full GlowScript program, with explanatory comments, must also be included in the LaTeX document, again with a link that takes me to the program on the GlowScript site.
    4. Use of Zoom’s interactive whiteboard is encouraged, especially when drawings are appropriate.
  3. Students schedule a twenty minute “interview” with me by email at a mutually convenient time. There is no reason not to do this given students’ work schedules, family obligations, and such. There is no rule that assessment must be only at certain times, so why do it that way? I happily accept evening appointments. Students are shocked when I tell them not to schedule these interviews over weekends because that is their time. They really appreciate this, especially since we are officially on a four day instructional and work week. Yes, I make occasional exceptions when students really want to do it. The point here is to give students, not us, control over when they demonstrate what they have learned. Framing this as an “interview” is significant, because any mention of “testing” induces anxiety. I want them to be relaxed, confident, and engaged and I want them to feel safe. I want this to be a two-way discussion, not a one-way abusive experience.
  4. During the interview, I can choose any problem(s) from the list for that chapter (I normally pick three at random) and ask students to talk me through their solutions. I ask to see the complete PDF document created from LaTeX and their GlowScript code, and I ask them to click on the appropriate link in their document to take me to their program on the GlowScript site and run it.
  5. I reserve the right to ask students any questions about their solution strategy (e.g. “Why did you choose this particular system?”), notation (must be consistent with the book’s notation), terminology (again, must be consistent with the book’s), or any other aspect of the material that I think they should need, or want, to think about. The object is not to trap them, but rather to help them see what is important and to encourage them to think beyond the textbook.
  6. In turn, students are encouraged to bring up any questions that arose while they were working on their solutions so we can discuss them. At this point, the interview becomes a true learning experience.
  7. Any errors in LaTeX or GlowScript are noted and corrected on the spot. This is an opportunity to see whether students know how to fix errors in their code, and it is just as much an integral part of the assessment as the pure physics content. The expectation is that students learn to use the tools they have. There is nothing unreasonable about that.
  8. If any deficiencies are noted (e.g. the student has not provided solutions in LaTeX, a GlowScript program wasn’t included, a conceptual deficiency is apparent through verbal communication of the solution, etc.), or if a student just says something like, “I’m just not comfortable with this material yet,” the student simply schedules another interview appointment. There are no penalties, and this is extremely important. The process of learning must never be punitive. Students must feel encouraged and motivated to demonstrate progress in understanding and in presenting that progress.
  9. If the interview is deemed successful, that is communicated immediately, the assessment records are updated, and the interview ends.
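
To make item 2 above concrete, here is a minimal skeleton of the kind of LaTeX write-up I have in mind. It is only an illustration of the structure, not a required template; the problem title, section headings, and URL are placeholders.

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{hyperref}

\begin{document}

\section*{Example write-up: two-dimensional projectile motion (placeholder title)}

\subsection*{Step 1: Choose a system and surroundings}
% Annotate the physical reasoning behind each step, not just the algebra.
System: the ball. Surroundings: the Earth (gravitational interaction); air resistance is neglected.

\subsection*{Step 2: Apply the momentum principle}
\[
  \Delta\vec{p} = \vec{F}_{\mathrm{net}}\,\Delta t
  \qquad\Longrightarrow\qquad
  \vec{p}_f = \vec{p}_i + \vec{F}_{\mathrm{net}}\,\Delta t
\]

\subsection*{Step 3: Computational model}
% Placeholder link; the actual URL points to the student's own program.
Full GlowScript program with explanatory comments: \url{https://www.glowscript.org/...}

\end{document}
```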

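And here is a minimal sketch of the kind of GlowScript program described in items 2 and 3 above, using the momentum-principle update from Matter & Interactions for a simple projectile. The numerical values and variable names are arbitrary placeholders I chose for illustration, not part of any assigned problem.

```python
GlowScript 3.0 VPython
# Minimal illustrative sketch: projectile motion via the momentum principle.
# All numerical values are arbitrary placeholders.

ball = sphere(pos=vector(0, 0, 0), radius=0.05, make_trail=True)
ball.m = 0.1                          # mass (kg)
ball.p = ball.m * vector(3, 4, 0)     # initial momentum (kg m/s)
g = vector(0, -9.8, 0)                # gravitational field (N/kg)

t = 0
dt = 0.001
while ball.pos.y >= 0:                # run until the ball returns to launch height
    rate(1000)
    Fnet = ball.m * g                 # net force: gravity only
    ball.p = ball.p + Fnet * dt       # momentum principle: delta p = Fnet * dt
    ball.pos = ball.pos + (ball.p / ball.m) * dt   # position update
    t = t + dt

print("Time of flight:", t, "s")
print("Range:", ball.pos.x, "m")
```

During an interview I can ask, for example, why the position update uses ball.p / ball.m, or what would change if air resistance were included; being able to answer and modify the code on the spot is part of demonstrating proficiency.
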
There is nothing special about the number of steps in the description above. I will now present a list of questions and responses that address everything I can think of about this process.

Do students get partial credit for anything? No. The interview as a whole is either acceptable or not. If anything is deemed deficient or missing, another interview is warranted and scheduled at a mutually convenient time.

Is twenty minutes enough time for students to demonstrate good understanding? Absolutely.

Can students ask questions during an interview? Absolutely! This may be the first time they’ve felt comfortable enough to ask a question they would otherwise not feel comfortable asking during class or in the presence of other students. That they do is a sign they are comfortable.

Do students have to get all correct answers to demonstrate proficiency? No. Making mistakes is almost always discouraged and penalized in traditional assessment, but it should be celebrated; if we aren’t willing to celebrate it, we need to stop telling students that being wrong is valuable. If I see a mistake in a solution, I will bring it up by asking, “Do you see a problem right there in step X?” If the student recognizes the error, can correctly reason through a fix, and can update the GlowScript code and LaTeX code to reflect the correction, that is proficiency!

How do you deal with cheating? Cheating can take many forms, but in physics the main form is copying textbook solutions from a certain website (which I won’t mention here). The first step in dealing with this problem is requiring students to use the same methods, notation, and terminology that the book uses and that we use in our class discussions. If I see solutions using different notations, etc., the interview ends and I ask them to rewrite everything using our consistent notation. Another step I take, even when I don’t suspect cheating, is asking the same question in as many different ways as I can think of to see whether students give consistent responses. For example, in another mechanics class I’m teaching this semester (alg/trig-based), I may ask, “What is acceleration?” and then ask, “Is this particle’s velocity changing?” and then ask, “Is this particle accelerating?” Not surprisingly, one student managed to give completely inconsistent responses to these questions. As I see it, the way to deal with cheating is to make the assessment opportunity uncheatable (Is that a word?) and ungameable (Is that a word?).

Do students like this kind of flexibility? All I can say is they have told me they do. Students who like to work at their own pace particularly say they appreciate it. It gives them ownership, and the responsibility that comes with that ownership. That’s what we want, right?

Isn’t this just slack on your part? I have wrestled with this, but my final answer is no. Arnold Arons, one of the founders and architects of physics education research (PER), was very clear about the use of Socratic questioning to draw out what our students do and do not understand. They will tell us all of this, provided we ask the right questions and listen carefully. This is a perfect opportunity for instructors to hone their Socratic questioning skills. It’s far more enlightening than any written test can ever hope to be. I assert that this is the way assessment should be carried out even without the presence of forced remote teaching and learning due to a global pandemic. Yes, there are logistical problems that are prohibitive with large, herd-style classes, but this speaks more to the business model of higher education in America than to what is best for students. Students are not cattle and shouldn’t be treated as such. Here’s where administrators need to see the writing on the wall and make the necessary changes. Faculty are way out front here. Correction…innovative faculty are way out front here. Too many are fearful and lag behind. Fortunately, they won’t be around forever. That sounds harsh, and it is.

What kind of grade do students get for a successful interview? There are no grades. There are no points. The entire interview either demonstrates proficiency (rewarded with a PR in a spreadsheet that the student and I can both see in the LMS) or it does not, in which case another interview is scheduled. If you want to make the “Well, in the real world…” argument, consider Ranger training in the Army. The only visible reward for successfully completing that training is a rather small badge to display on one’s uniform. I dare you to make the “Is that all you get?” argument to a Ranger. Go ahead…I’ll watch. Look, traditional grades are antiquated, socially biased and unjust, and entirely subjective (Is an A from Harvard treated the same as an A from your institution?). They need to go away. This is a step in that direction.

Do you use a formal step-by-step rubric? No, mainly because I want students, not me, to take control of their learning and how they demonstrate it. Students are not formulaic entities, and their learning and our evaluation of it shouldn’t be formulaic either. I do, however, establish guidelines as explained in this post and students are well aware of them. They know what is expected of them. This is a situation where less is more (I hate that cliché); broader, not narrower, guidelines are best.

Do students like this approach? Yes, at least they say they do. They say it makes them feel much more at ease and less stressed than traditional tests. They say it lets them demonstrate their strengths while acknowledging things that need improvement. They ask me why other instructors here don’t do this. They tell me they want more of it. They say it’s different from rote problem solving and they like that. Trust me, they are genuinely effusive and enthusiastic.

Does every student succeed on the first try? No. One student took four interviews to demonstrate understanding of two-dimensional projectile motion. In the time between interviews, he reported that he was motivated to read and reread the textbook (Knight) and to consult other sources (e.g. YouTube videos). Traditional tests don’t provide this kind of motivation. Other students have taken two attempts to demonstrate proficiency.

Do students take something tangible away from this experience? Yes. At the end of the course, they will have an organized library of problem solutions and GlowScript programs they can use in future courses.

Do you worry about grade inflation? In one sense, no, because I don’t use traditional grades, so there is nothing to inflate. In another sense, yes, because final course grades must be recorded as traditional grades (this state is so antiquated), and yes, some people give me strange looks when most students pass with a high traditional grade. However, I stand by the assertion that my standards are quite high and that students who don’t demonstrate proficiency by the semester’s end don’t pass. I really don’t care what anyone thinks about what I do (a liberating attitude for me). The evidence is in the students who transfer to four-year institutions and end up teaching their fellow students (and sometimes their professors) to use LaTeX or GlowScript. Professors who already expect students to use tools like this are happy. To my knowledge, no students have been harmed by this approach, and all who have wanted to pursue degrees (including PhDs) in STEM disciplines have successfully done so (I have a folder full of unsolicited emails as evidence). Bring it on. And while we’re talking about grade inflation, I think it’s both unethical and abusive to set goals students can’t achieve. That means every student should have the opportunity to earn an A, and if they earn it they should get it. The key is to make the grade reflect the learning, not the institution’s name or how much the student paid to attend the institution. Learning must be fairly and equitably achievable.

Do you know of anyone else doing something like this? If you had asked me this before this past weekend, I would have said no. But at the fall NCSAAPT meeting, I met Dr. Sarah Formica, Professor of Physics at the University of North Georgia. (Trivia: UNG, previously called North Georgia College, was the site of my very first SACS-AAPT meeting back in the 1990s!) She has already been doing this! Her procedure and mine are essentially identical. She expressed all the same frustrations with the status quo. Learning is learning, and it’s liberating to finally realize that it doesn’t have to look or feel a certain prescribed way just because someone in an office somewhere says so. As duly hired faculty, we, not administrators, get to make these calls. We in the community college environment are (too) frequently told that we should be more like our four-year college counterparts, so now I have at least one excellent example of someone else doing this and getting great results.

Do you anticipate continuing with this approach when face to face classes resume after the pandemic ends? Yes. A good thing is a good thing, so why not? Zoom will allow scheduling flexibility over traditional office hours. Again, why not?

What might you change in the future? I may let each student choose at least one problem to present that they are particularly interested in for any reason. What better way to let them show what they’ve learned! That’s the point after all, isn’t it?

Were you at all uneasy about trying this? Yes! Dwain Desbien finally convinced me to go for it though. I don’t like being perceived as incompetent, slack, unprofessional, or anything like that. As I have repeatedly discovered over the years, uneasiness is a natural mechanism that keeps us from causing harm. As John Lewis said, a little good trouble now and then is a good thing, so I’m totally up for it.

Remember that I claim no expertise in anything; I’m just doing what I feel is best for my students. If you can think of new questions that I can address, let me know and I’ll add them. As always, feedback is welcome.

 
