Philosophy of the Cheat-Proof Exam
By writing exams focused less on the lower Bloom's levels and more on analysis, evaluation, and creation, my hope is that I'm getting closer and closer to crafting a truly cheat-proof exam. At the heart of the cheat-proof exam are two core aspects of my course philosophy. First, what practicing professionals in the discipline do is called collaboration, not cheating. In fact, I actively promote collaboration during some exams - my only requirement is that students cite all sources (including personal notes, internet sites, and fellow students) when recording answers to questions. Otherwise, I tell them, copying is cheating. Second, when testing for higher Bloom's levels, it is easier to write cheat-proof exam questions - particularly questions that have unique answers (such as essay-style responses). In a genetics class, for example, I like to start questions with something like, "Access the NCBI database and find the sequence of any gene." Given the vast number of DNA sequences in this database, the probability that two students happen to write an answer involving the same gene is infinitesimal (or smaller), so if this actually occurred, I'd have a strong case for proving that copying had taken place.
The Hitch
Of course, you've already spotted the trade-off. It is relatively easy to write a cheat-proof exam; it takes considerably more effort to grade one. As such, whether you decide to adopt this approach depends almost entirely on your willingness to spend time grading answers to questions that have no single correct answer. My genetics classes usually have about 95 students in them, but I still include cheat-proof questions.
Solutions to the Problem
- Writing exams that incorporate higher Bloom's levels is guaranteed to take more effort to grade. However, a few approaches can make grading more efficient. First, write multiple-choice questions that still require calculation, interpretation, or analysis to arrive at the correct answer. Multiple-choice doesn't have to be limited to factual recall.
- I should also disclose that I adjusted my grading scheme to account for the increased difficulty of exams designed to incorporate all levels of Bloom's taxonomy. There are six Bloom's levels, and I try to distribute point values evenly between them (I combine "evaluate" and "create" at the top of the tree). Thus, I align letter grades with Bloom's levels. Students who score 0-20% of possible points earn an F - they tend only to be able to perform factual recall. Scores of 20-40% earn a D; 40-60% a C; 60-80% a B; and 80-100% an A. To earn an A, then, students have to capture the uppermost 20% of points, which means they have to successfully answer the questions that require creating knowledge or critically evaluating information. In other words, I use Bloom's taxonomy to define my expectation of what students have to achieve to earn each letter grade. (A sketch of this score-to-grade mapping follows this list.)
- Use group exams. These let me write much more difficult questions, because groups can distribute the workload or at least discuss options for solving a problem before selecting one. This has been (as expected) much more efficient for grading: with groups of four students working on an exam, I grade one-fourth the number of responses.
- Leverage the tablet for collecting drawn responses. I don't know if it is true or just my perception, but grading written responses seems to take longer than grading graphic (drawn) responses. If a picture is worth a thousand words, and if it is faster to assess a student-created picture than to read a thousand of their words (guaranteed!), then grading a visual answer is much more efficient. In the sciences, one of the easiest ways to incorporate such a question is to ask students to draw a plot (or diagram of results) of what they would expect in the situation that: ________.
- Now to the title of the post! Last term (the first time I attempted the "open-note, open-internet" tablet-based exam), I only gave students limited feedback. The way I collect student exams from their tablets is that they submit them (via Google Classroom) as PDF files to me. This has so many benefits, including my ability to efficiently store student records and to easily transport them from office to home for grading (because we all know that grading often happens at home in the wee hours of the morning, right?). However, I have yet to find the best way to give students feedback on their exams…
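To make the Bloom's-aligned grading scheme above concrete, here is a minimal sketch of the score-to-grade mapping in Python. The function name and the decision to let boundary scores land in the higher band are my own assumptions; the post itself leaves the band edges ambiguous.

```python
def letter_grade(percent: float) -> str:
    """Map an exam percentage to a letter grade using five 20% bands,
    one per Bloom's level ("evaluate" and "create" share the top band)."""
    bands = [(80, "A"), (60, "B"), (40, "C"), (20, "D")]
    for cutoff, grade in bands:
        if percent >= cutoff:
            return grade
    return "F"  # 0-20%: factual recall only

print(letter_grade(85.0))  # A
print(letter_grade(40.0))  # C (boundary scores assumed to land in the higher band)
```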
Digital Student Feedback on PDF Exams
What I currently do is add "sticky note" text fields to each student's PDF, and then send those annotated PDFs back. This is a very time-intensive process, and the only information I put in those notes is the score earned on any question where a student received less than 100% of the available points. That's all the feedback students received last term. Because I video-record and upload video exam keys, and because it is more useful for students' learning to do the work of finding out where they missed points on a question, I deliberately do not provide extensive feedback.
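The post doesn't name the annotation tool, but this workflow can be scripted. Here is a minimal sketch using the pypdf library, with placeholder file names and note text; the rect coordinates are in PDF points measured from the bottom-left corner of the page.

```python
# Sketch: add a "sticky note" score annotation to a student's exam PDF.
from pypdf import PdfReader, PdfWriter
from pypdf.annotations import Text

reader = PdfReader("student_exam.pdf")  # submitted exam (placeholder name)
writer = PdfWriter()
writer.append(reader)                   # copy all pages into the writer

# A collapsed sticky note reporting the score on one question.
note = Text(
    text="Question 3: 7/10 points",
    rect=(50, 700, 70, 720),  # where the note icon sits on the page
    open=False,
)
writer.add_annotation(page_number=0, annotation=note)

with open("student_exam_graded.pdf", "wb") as fh:
    writer.write(fh)
```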
However, this semester, I'm making a slight change: I'm not even providing this much feedback. I'm finally, after reading about them for years, implementing "exam wrappers." The core idea here is that you provide a small assignment that requires students to consciously reflect on their performance on an exam. Often this takes the form of asking students, at the end of an exam (or just after they learn how they scored), what they think they should do differently to prepare for the next exam. Today, this is what I told my students about how they get feedback on their exam performance:
- I want to motivate students to reflect on where they didn't meet my expectations
- I post exam keys (both a static PDF as well as the video key, in which I narrate and write/draw out answers to questions)
- Students have a digital copy of the exam they submitted
- Students only know their total score on the exam
- I show the entire class the letter grade distribution for the exam
- I also show data on the percent of students earning all of the points available for each question
- I expect students to compare their answers to the answer key and assess which question(s) they think they lost the most points on
- Each student has the opportunity to write/draw a response to me in which they choose a single question on the exam (the one they think cost them the most points) and describe how/why they did not earn all of the points. I said that, if the explanation is accurate, the student can earn back up to half of the missed points on that question (the arithmetic is sketched just after this list).
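To make the half-credit recovery concrete, here is a minimal sketch; the function name and the 10-point example are my own illustration, not from the original post.

```python
def wrapper_recovery(points_possible: float, points_earned: float) -> float:
    """Maximum points recoverable on one question through an accurate
    exam-wrapper explanation: half of the points that were missed."""
    missed = points_possible - points_earned
    return missed / 2

# Example: a 10-point question scored 4/10 leaves 6 points missed,
# so an accurate reflection earns back up to 3, for a ceiling of 7/10.
print(wrapper_recovery(10, 4))  # 3.0
```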
The Bottom Line
This is a new approach for me, so I'll post updates here on whether students take advantage of this opportunity, and on whether it winds up helping them master the material. As I said in class today, my main concern is that students understand critical concepts (and can demonstrate that understanding) by the final exam. My philosophy is that, for questions where most of the class doesn't perform well, I'll bring a related question back on the final exam. I want to foster student growth over the term, and I tell the students this. So they should know, at this point in the term, which topics will show up on the final exam. And the students who are willing to put in the work necessary to master those concepts are, I think, the ones who have truly earned their A grades.
In sum, although you might not agree with this teaching/grading philosophy, I hope you will at least agree that there is value in three aspects of my design for cheat-proof exams:
- Students should be assessed in authentic situations (which, for many disciplines, is going to involve accessing online materials and/or collaborating with others)
- Digital workflows for distributing and collecting exams can improve efficiency
- Our goal as educators is to make information relevant to students and to help motivate them to learn the material by the end of the course
Although my tablet exam approach is certainly a work in progress, it is meant to achieve these three goals. Time will tell!