Oral Interviews as Assessment

TL;DR I have begun using oral interviews as a replacement for traditional written tests and quizzes. There are many advantages for faculty and students, including elimination of paperwork, better insight into students’ comprehension, a more relaxed environment, and no opportunity to game for points.

The ongoing pandemic has amplified many questions I have had for years:

  1. Must tests always be in written form?
  2. Must grades always be based on points?
  3. Must students be proctored during assessment?
  4. Is cheating widespread, and if so, why?
  5. If we want students to learn X, what does it matter whether they learn it in the first week of the semester or the last? Why should we penalize them for taking longer?
  6. Does previous practice hinder change and innovation?
  7. Do administrators hinder change and innovation in teaching by insisting on adhering to antiquated, inequitable, socially biased, subjective assessment practices?
  8. Why does trying to innovate induce so much fear?
  9. Nowadays faculty must answer (or at least respond) to many academic staff about the “status” of students in various groups. This is all code for “What are their grades like?” These people are completely flummoxed when I tell them I don’t use traditional grades, and they tell me grades are the “only way” they can report student progress. This is a huge problem, for them and for me.
  10. Do students really want better ways of demonstrating understanding, or are they so deeply conditioned by the status quo that they know how to game it and don’t want that to change?
  11. As duly hired qualified faculty, am I not obligated to use practices that are best for students regardless of fallacious logic against doing so?
  12. Am I wasting my time thinking about these things?

That list isn’t complete. In this post, I want to focus on assessment (in the good instructional sense, not in the administrative sense). In discussions with colleagues around the state and country, I am concerned about the number of people who think that assessment must look a certain way. Specifically, the only way to do it is with written, points-based problems presented under arbitrary time constraints that induce pressure and anxiety, punish students for not having immediate recall, punish students for learning at different rates, and present an unrealistic representation of future expectations. Any argument that begins with, “In the real world…” no longer carries any weight with me because the “real world” is subjective. Since the pandemic forced us into remote teaching and learning, proctoring has reached new heights in privacy invasion. I think any attempt to compel students to install third-party software on their computers that allows someone else to take control of their webcam or entire computer is unethical. Students’ rights don’t cease to exist just because they welcome us into their private environment to help them learn. If anything, students’ rights should take priority over all else in these situations.

I don’t claim expertise or superiority in any way, but here is how I have decided to tackle these problems. These guidelines are specific to my calculus-based physics course, which uses Matter & Interactions, but are obviously adaptable to almost any situation.

  1. For each chapter, I select a list of five to ten (nothing special about that quantity) representative questions, problems, and computational problems. Criteria include conceptual content (e.g. choice of system, choice of reference frame, etc.), mathematical content (e.g. use of vector algebra/calculus, symbolic algebra, dimensional analysis, etc.), use of symbols rather than numbers, use of analytical calculus (e.g. textbook calculus), and programming skills appropriate for the course. I also generate some of these questions myself, mainly to reinforce things we discuss in class that may not be mentioned in the textbook.
  2. Students are expected to write up a full solution to each problem.
    1. For each problem, students must provide a step-by-step annotated solution written in LaTeX using Overleaf. Of course students know they must first work things out on paper (actual paper, or digital paper in an app like Notability or something similar…these apps are becoming ubiquitous).
    2. For each problem requiring numerical computation of any kind, students must provide a GlowScript program. There must be a link that takes me to the program on the GlowScript site. Handheld calculators are deprecated. We’ve been teaching computation all along with the use of calculators (did we even realize that?) but we need to move with the times. Obviously, for a purely algebraic problem this part isn’t required.
    3. For each computational problem, a full GlowScript program, with explanatory comments, is provided in the form of a LaTeX document. There must be a link that takes me to the program on the GlowScript site.
    4. Use of Zoom’s interactive whiteboard is encouraged, especially when drawings are appropriate.
  3. Students schedule a twenty-minute “interview” with me by email at a mutually convenient time. There is no reason not to do this given students’ work schedules, family obligations, and such. There is no rule that assessment must be only at certain times, so why do it that way? I happily accept evening appointments. Students are shocked when I tell them not to schedule these interviews over weekends because that is their time. They really appreciate this, especially since we are officially on a four-day instructional and work week. Yes, I make occasional exceptions when students really want to do it. The point here is to give students, not us, control over when they demonstrate what they have learned. Framing this as an “interview” is significant, because any mention of “testing” induces anxiety. I want them to be relaxed, confident, and engaged, and I want them to feel safe. I want this to be a two-way discussion, not a one-way abusive experience.
  4. During the interview, I can choose any problem(s) from the list (I normally ask for three at random) for that chapter and ask students to talk me through their solutions. I will ask to see their complete PDF document created from LaTeX, their GlowScript code, and I ask them to click on the appropriate link in their document to take me to their program on the GlowScript site and run it.
  5. I reserve the right to ask students any questions about their solution strategy (e.g. “Why did you choose this particular system?”), notation (must be consistent with the book’s notation), terminology (again, must be consistent with the book’s), or any other aspect of the material that I think they should need, or want, to think about. The object is not to trap them, but rather to help them see what is important and to encourage them to think beyond the textbook.
  6. In turn, students are encouraged to bring up any questions that arose while they were working on their solutions so we can discuss them. At this point, the interview becomes a true learning experience.
  7. Any errors in LaTeX or GlowScript are noted and corrected on the spot. This is an opportunity to see whether they know how to fix errors in their code, and it is just as much an integral part of the assessment as the pure physics content. The expectation is that students learn to use the tools they have. There is nothing unreasonable about that.
  8. If any deficiencies are noted (e.g. the student has not provided solutions in LaTeX, a GlowScript program wasn’t included, a conceptual deficiency is apparent through verbal communication of the solution, etc.) or if a student just says something like, “I’m just not comfortable with this material yet,” the student simply schedules another interview appointment. There are no penalties, and this is extremely important. The process of learning must never be punitive. Students must feel encouraged and motivated to demonstrate progress in understanding and in presenting that progress.
  9. If the interview is deemed successful, that is communicated immediately, the assessment records are updated, and the interview ends.
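To give a concrete sense of the computational problems described above, here is a minimal sketch of the iterative momentum-principle update that Matter & Interactions-style programs are built around. It is written in plain Python rather than GlowScript/VPython so it runs anywhere; the mass, launch velocity, and time step are illustrative values I chose for the demo, not numbers from any actual assignment.

```python
# Sketch of the momentum-principle loop: p = p + F_net*dt, r = r + (p/m)*dt.
# All numerical values here are illustrative, not from a real problem set.

m = 0.1          # mass of the ball (kg)
g = 9.8          # gravitational field strength (N/kg)
dt = 0.001       # time step (s)

r = [0.0, 2.0]           # initial position (m): x, y
p = [m * 5.0, m * 5.0]   # initial momentum (kg m/s): 5 m/s in x and in y

t = 0.0
while r[1] > 0:  # step until the ball reaches the ground
    F_net = [0.0, -m * g]                           # only gravity acts
    p = [p[i] + F_net[i] * dt for i in range(2)]    # momentum principle
    r = [r[i] + (p[i] / m) * dt for i in range(2)]  # position update
    t += dt

print(f"landed at x = {r[0]:.2f} m after t = {t:.2f} s")
```

In a real GlowScript submission the same loop would drive a `sphere` object so the motion is animated, and the student would annotate each physical step, as in item 2 above.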

There is nothing special about the number of steps in the description above. I will now present a list of questions and responses that address everything I can think of about this process.

Do students get partial credit for anything? No. The interview as a whole is either acceptable or not. If anything is deemed deficient or missing, another interview is warranted and scheduled at a mutually convenient time.

Is twenty minutes enough time for students to demonstrate good understanding? Absolutely. However, I have changed the interview duration to thirty minutes. It’s a bit easier for scheduling, and gives ten extra minutes when it’s needed.

Can students ask questions during an interview? Absolutely! This may be the first time they’ve felt comfortable enough to ask a question they would otherwise not feel comfortable asking during class or in the presence of other students. That they do is a sign they are comfortable.

Do students have to get all correct answers to demonstrate proficiency? No. Making mistakes is almost always discouraged and penalized, but it should be celebrated; if we aren’t willing to celebrate it, we need to stop telling students that being wrong is valuable. If I see a mistake in a solution, I will bring it up by asking, “Do you see a problem right there in step X?” If the student recognizes the error and can then correctly reason through a fix, and update the GlowScript code and LaTeX code to reflect the correction, that is proficiency!

How do you deal with cheating? Cheating can take many forms, but in physics the main form is copying textbook solutions from a certain website (which I won’t mention here). The first step in dealing with this problem is requiring students to use the same methods, notation, and terminology that the book uses and that we use in our class discussions. If I see solutions using different notations, etc., the interview ends and I ask them to rewrite everything using our consistent notation. Another step I take, even when I don’t suspect cheating, is asking the same question in as many different ways as I can think of to see if students give consistent responses. For example, from another mechanics class I’m teaching this semester (alg/trig-based), I may ask, “What is acceleration?” and then ask, “Is this particle’s velocity changing?” and then ask, “Is this particle accelerating?” Not surprisingly, one student managed to give completely inconsistent responses to these questions. As I see it, the way to deal with cheating is to make the assessment opportunity uncheatable (Is that a word?) and ungameable (Is that a word?).
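The consistency questions above have real teeth: in uniform circular motion, speed is constant yet the velocity vector changes, so the particle is accelerating, and a student answering from memorized formulas rather than understanding will often contradict themselves. A small numerical check makes the point; the radius and angular speed below are made-up values for illustration only.

```python
# Uniform circular motion: constant speed, changing velocity, nonzero acceleration.
# R and omega are illustrative values chosen for this demo.
import math

R = 2.0       # radius (m)
omega = 3.0   # angular speed (rad/s)

def velocity(t):
    # r(t) = (R cos wt, R sin wt), so v(t) = (-R w sin wt, R w cos wt)
    return (-R * omega * math.sin(omega * t), R * omega * math.cos(omega * t))

v0 = velocity(0.0)
v1 = velocity(0.5)

speed0 = math.hypot(*v0)  # both speeds equal R*omega = 6.0 m/s
speed1 = math.hypot(*v1)

a_mag = R * omega**2      # |a| = R*omega^2 = 18.0 m/s^2, even at constant speed
print(f"speeds: {speed0:.3f}, {speed1:.3f}; velocity changed: {v0 != v1}; |a| = {a_mag:.1f}")
```

A student who can articulate why the speeds match while the velocity vectors differ has answered all three questions consistently.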

Does a student have to work numerical problems or can they do something else? In the calc-based M&I class, it’s a mixture of both. They need to be able to demonstrate that they can get numerical results and they need to demonstrate conceptual understanding. I try to ask questions that mix the two modes. In the alg/trig-based class (which I normally don’t teach) I’m giving them the choice and telling them a mixture is preferable.

Do students like this kind of flexibility? All I can say is they have told me they do. Students who like to work at their own pace particularly say they appreciate it. It gives them ownership, and the responsibility that comes with that ownership. That’s what we want, right?

Isn’t this just slack on your part? I have wrestled with this, but my final answer is no. Arnold Arons, one of the founders and architects of physics education research (PER), was very clear about the use of Socratic questioning to tease out what our students do and do not understand. They will tell us all of this, provided we ask the right questions and listen carefully. This is a perfect opportunity for instructors to hone their Socratic questioning skills. It’s far more enlightening than any written test can ever hope to be. I assert that this is the way assessment should be carried out even without the presence of forced remote teaching and learning due to a global pandemic. Yes, there are logistical problems that are prohibitive with large, herd-style classes, but this speaks more to the business model of higher education in America than to what is best for students. Students are not cattle and shouldn’t be treated as such. Here’s where administrators need to see the writing on the wall and make the necessary changes. Faculty are way out front here. Correction…innovative faculty are way out front here. Too many are fearful and lag behind. Fortunately, they won’t be around forever. That sounds harsh, and it is.

What kind of grade do students get for a successful interview? There are no grades. There are no points. The entire interview either demonstrates proficiency (rewarded with a PR in a spreadsheet that the student and I can see in the LMS) or it does not, in which case another interview is scheduled. If you want to make the “Well, in the real world…” argument, consider Ranger training in the Army. The only visible reward for successfully completing that training is a rather small badge to display on one’s uniform. I dare you to make the “Is that all you get?” argument to a Ranger. Go ahead…I’ll watch. Look, traditional grades are antiquated, socially biased and unjust, and entirely subjective (Is an A from Harvard treated the same as an A from your institution?). They need to go away. This is a step in that direction.

Do you use a formal step-by-step rubric? No, mainly because I want students, not me, to take control of their learning and how they demonstrate it. Students are not formulaic entities, and their learning and our evaluation of it shouldn’t be formulaic either. I do, however, establish guidelines as explained in this post, and students are well aware of them. They know what is expected of them. This is a situation where less is more (I hate that cliché); broader, not narrower, guidelines are better. Here’s another reason I don’t use traditional rubrics. Any rubric, no matter how detailed and because of its very nature, is inherently incomplete. No matter how fine-grained it is, it will always be incomplete. I liken it to using a finite summation as an approximation to an integral. The integral is what we’re after. Yes, we can get arbitrarily close with a finite summation, but we’re always lacking some bit of information. Can we all even agree on what is permissible to omit? I don’t think so. My likely unpopular personal opinion is that rubrics may be too restrictive. I don’t think it’s the case that everything must be written down. I never had that expectation as a student. Back to the mathematical analogy: I think an oral discussion is the actual integral. It can reveal so much more than any rubric can hope to reveal. It does necessarily allow for human interpretation, but I also think any “real world” situation does too. Yes, a traditional written test can reveal certain things, but I think we all agree that one can “work problems” with little to no understanding of what they’re doing or why they’re doing it, and we keep saying we want students to demonstrate the latter. So I think being able to work the problem and being able to explain that work is the target combination, and it’s okay to do the written work in advance.

Do students like this approach? Yes, at least they say they do. They say it makes them feel much more at ease and less stressed than traditional tests. They say it lets them demonstrate their strengths while acknowledging things that need improvement. They ask me why other instructors here don’t do this. They tell me they want more of it. They say it’s different from rote problem solving and they like that. Trust me, they are genuinely effusive and enthusiastic.

Does every student succeed on the first try? No. One student took four interviews to demonstrate understanding of two-dimensional projectile motion. In the time between interviews, he reported that he was motivated to read and reread the textbook (Knight) and to consult other sources (e.g. YouTube videos). Traditional tests don’t provide this kind of motivation. Other students have taken two attempts to demonstrate proficiency.

Do students take something tangible away from this experience? Yes. At the end of the course, they will have an organized library of problem solutions and GlowScript programs they can use in future courses.

Do you worry about grade inflation? In one sense, no, because I don’t use traditional grades, so there is nothing to inflate. In another sense, yes, because final course grades must be recorded as traditional grades (this state is so antiquated) and yes, some people give me strange looks when most students pass with a high traditional grade. However, I stand by the assertion that my standards are quite high and that students who don’t demonstrate proficiency by the semester’s end don’t pass. I really don’t care what anyone thinks about what I do (a liberating attitude for me). The evidence is in the students who transfer to four-year institutions and end up teaching their fellow students (and sometimes their professors) to use LaTeX or GlowScript. Professors who already expect students to use tools like this are happy. To my knowledge, no students have been harmed by this approach, and all who have wanted to pursue degrees (including PhDs) in STEM disciplines have successfully done so (I have a folder full of unsolicited emails as evidence). Bring it on. And while we’re talking about grade inflation, I think it’s both unethical and abusive to set goals students can’t achieve. That means every student should have the opportunity to earn an A, and if they earn it they should get it. The key is to make the grade reflect the learning, not the institution’s name or how much the student paid to attend the institution. Learning must be fairly and equitably achievable.

Do you know of anyone else doing something like this? If you had asked me this before this past weekend, I would have said no. But at the fall NCSAAPT meeting, I met Dr. Sarah Formica, Professor of Physics at the University of North Georgia. (Trivia: UNG, previously called North Georgia College, was the site of my very first SACS-AAPT meeting back in the 1990s!) She has already been doing this! Her procedure and mine are essentially identical. She expressed all the same frustrations with the status quo. Learning is learning, and it’s liberating to finally realize that it doesn’t have to look or feel a certain prescribed way just because someone in an office somewhere says so. As duly hired faculty, we, not administrators, get to make these calls. We in the community college environment are (too) frequently told that we should be more like our four-year college counterparts, so now I have at least one excellent example of someone else doing this and getting great results.

Do you anticipate continuing with this approach when face-to-face classes resume after the pandemic ends? Yes. A good thing is a good thing, so why not? Zoom will allow scheduling flexibility over traditional office hours. Again, why not?

What might you change in the future? I may let each student choose at least one problem to present that they are particularly interested in for any reason. What better way to let them show what they’ve learned! That’s the point after all, isn’t it?

Were you at all uneasy about trying this? Yes! Dwain Desbien finally convinced me to go for it though. I don’t like being perceived as incompetent, slack, unprofessional, or anything like that. As I have repeatedly discovered over the years, uneasiness is a natural mechanism that keeps us from causing harm. As John Lewis said, a little good trouble now and then is a good thing, so I’m totally up for it.

Remember that I claim no expertise in anything; I’m just doing what I feel is best for my students. If you can think of new questions that I can address, let me know and I’ll add them. As always, feedback is welcome.


3 thoughts on “Oral Interviews as Assessment”

  1. Thanks Joe for saying this so clearly!
    I too have felt that the only way to prevent driving students away from STEM education is to stop beating them up!
    Since the “transition”, I have taken advantage of discussion post assignments: some where they share their frustrations and successes (what they are most proud of accomplishing so far); another where they get to explore a profile of a minority physicist currently doing PHY. That little bit of anonymity online has given students a way to share that never happened in a seated class. It has been a revelation! And I get to provide them with personal feedback that has gone much deeper than we ever could have “live”.

    The other discussion post I like is for them to present an explanation of a solution to problems I pose. “Dissect a PHY Problem” requires them to annotate the solution as you describe. They can also prepare a video of them explaining the solution out loud. Training students how to ask good questions is still a work in progress, but I feel like I have gained some converts to PHY-it is not an impossible subject nor a humiliating experience.

    Allowing a larger variety of ways for students to demonstrate what they can do is so important. The “real world” is not about test scores.

    You are correct, we cannot leave these new strategies behind when we “go back to normal”. An NPR story about how Sal Khan developed Khan Academy supports the idea that real learning occurs at an individual pace. Self-paced learning with videos has been transformational. https://www.npr.org/2020/09/18/914394221/khan-academy-sal-khan

    I was almost to a burn-out point before the plague hit. Students were never ready to hear what I was teaching until they were ready to hear it. Saying it louder or repeatedly wasn’t working. This really flipped the classroom for me and I won’t be able to return to what students have been brought up on or expect a class to be like.

    I hope to get closer to my goal of eliminating negative reactions when I tell people what I do for a living. Building students’ self-confidence that they are capable people who can really contribute something brings me joy.

    1. Thank you, Denise, for confirming what I’ve been seeing for a long time. Here in the North Carolina Community College System, teaching seems to have become just another factory job where both administrators and students expect the same old things and expect “success” when it’s not warranted. I’ve decided that I just can’t be part of that mindset anymore, at least not professionally. Every prospective innovation must now be weighed against the probability of student complaints, the mysterious, unwritten “guarantee of success”, and the problems with marketing higher education as something to be done in one’s “spare time” when most of my students nowadays have no spare time to begin with. It’s a mess, and I don’t know how much longer I can keep up the charade.

  2. Even though I embrace data-driven decision making when it comes to physics and science, I am constantly challenged when it comes to applying this to student learning outcomes. Measuring student performance is straightforward for the instantaneous measurement at the time of the performance. But measuring student learning is not easy. Now, if we get to the point where we can just scan their heads before instruction and then again after instruction, I could feel more like an objective observer. But that would require a lot of filtering if I am only measuring PHY skills; I don’t want to see everything in their heads.
