The Kind of Feedback I Want and Need from Students

I’ve hesitated to post this but I’ve decided to anyway. This past Friday night at Lucile Miller Observatory’s monthly public night, a student from my spring introductory astronomy course, a student who had previously attended a four-year university, presented me with a page of feedback on the course. For whatever reason, he didn’t give me the feedback during the course (it was a WebAssign assignment), but he said he felt obligated to give it to me now. I’ll let you read it before I comment on it.


So, what’s the big deal? The big deal is that this student totally “got” what I’ve been trying to do in introductory astronomy for nearly a decade. Namely, he “got” that in any introductory (and maybe intermediate and advanced…) science course, there’s a bigger underlying picture that frequently isn’t ever recognized. It’s the entire reason the course exists in the first place, at least according to the lofty course descriptions we see in catalogs and the wordy justifications we see in degree requirements. All too often, we say we want students to learn to think like a scientist, reason like a scientist, and question like a scientist, but we fail to show them just how that’s done. Instead, we beat them over the head with sanitized textbook problems and questions that require little if any reasoning, and this just reinforces the students’ misconception that science is all about sanitized questions with absolute answers.

I decided long ago to move away from that, and radically so, and to make my introductory astronomy courses, especially the first of the two-semester sequence, look nothing like the traditional model. I used to spend lots of time (proportionally, that is) explaining my motives for this move to students during the first week of class, but I got away from that over the past few semesters because I wanted to get right into the course content (at least what I defined as content, not necessarily what the students defined as course content). Either way, most students seem to start out quite enthusiastic about the changes but temporarily reverse course when they see that this is really how the course is going to be. Gone are back-of-the-book answers, because there is no textbook. Gone are simple yes/no questions. Gone are multiple-choice assessments. Gone are traditional letter grades (I moved to standards-based grading last fall).
Now students have to actively engage, speak up when they want to, back up their rhetoric, learn correct terminology, explain things clearly and concisely, and accept that sometimes they must struggle to learn. I have at most two or three students each semester who give me this kind of feedback, feedback about the experience itself, as opposed to feedback about my mannerisms (which I tell them are not up for evaluation to begin with) or their grades. During the Q&A after an invited talk I gave at AAPT in Philadelphia, more than one colleague said I was doing harm by adopting this approach and that I wasn’t exposing my students to real science. When I asked these vocal colleagues what they do to expose students to real science, they said they avoided historical topics and concentrated on numerical data analysis.

So what kind of feedback do I ask for? Years ago, I became very wary of traditional course evaluations from students at my institution, mostly because the evaluations didn’t address teaching. They addressed administrivia (e.g. “The instructor handed out a syllabus on the first class day,” followed by a five-point Likert scale from strongly disagree to strongly agree). Well, either I handed out the syllabus or I did not, so a Likert scale isn’t appropriate, and either way the question addresses nothing about what I was hired to do. Furthermore, our administration eventually adopted a policy of randomly picking classes each semester for these so-called evaluations, and the picture of our faculty’s teaching quality went further into obscurity. So I created my own course evaluation. I have two versions, one that I give around mid-semester and one that I give at the semester’s end. The questions are mostly the same, but I look to see what, if anything, changed. I have several years’ worth of data from these evaluations on WebAssign and someday will analyze it somehow.

Here is a copy of the evaluation for the end of the semester. Take a look at the kind of questions I ask and the type of feedback I expect. I give the same evaluation to my astronomy and physics classes. For those of you who use WebAssign, the mid-semester version is assignment 332236 and the post-semester version is assignment 344379. The questions have morphed somewhat over the years, since about 1999 as I recall, but the overall theme has not changed.

What do you think?



2 Comments

  • That’s really a great and useful letter. It’s cool that s/he took the time to write that.

    For your own personal eval, is it anonymous? I would hope that it is, given the nature of some of the questions.

    • Thanks Andy. Regarding anonymity, WebAssign has no mechanism for truly anonymous surveys, but I’ve asked them to consider it. However, it does have a mechanism for viewing submissions anonymously and in scrambled order, such that while I’m reading, I never know who wrote what I’m reading. The only way for me to tell is either to look at assignments individually or to not use the anonymous scrambling features. I describe this to students and, to my knowledge, it doesn’t bother them. They tend to be sufficiently outspoken that I think they would protest if it did. I also tell them that I don’t look at the end-of-semester evals until well after the semester ends. In fact, I’ve not yet looked at this spring’s results.
