Wednesday, October 14, 2015

Cheat-Proofing Exams III: Post-exam analysis

Now with two open-internet, tablet-based exams under our belts in Genetics class this term (each combining an individual exam and a group exam), it is time to briefly address concerns about holding exams where students have access to all data, everywhere. How do the students fare, and what do they actually do?

Let's look at my most recent grade distribution (individual and group exam scores combined):

[Figure: grade distribution of combined individual and group exam scores]

My students seem to be doing outstandingly well (but too well?), and we could (but won't here) enter into the discussion of whether an ideal grade distribution (like the bell curve) exists and should be sought.

There are three points to keep in mind about the student outcomes thus far:

First, I should point out that my letter grade schema differs from most:

100-80% = A
80-60% = B
60-40% = C
40-20% = D
20-0% = F

If I had used a more traditional 10% letter grade division (as I have in every previous class), I'd have a much more bell-shaped curve.
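
To make that comparison concrete, here is a minimal Python sketch (a hypothetical helper, not part of my actual grading workflow) that maps a percent score to a letter using equal-width bands; band=20 reproduces my schema above, while band=10 gives the traditional division:

    def letter_grade(score, band=20):
        """Map a percent score to a letter using equal-width bands.

        band=20 reproduces my schema (80-100 = A, 60-80 = B, ...);
        band=10 gives the traditional division (90-100 = A, 80-90 = B, ...).
        """
        for i, letter in enumerate(["A", "B", "C", "D"]):
            if score >= 100 - band * (i + 1):
                return letter
        return "F"

    print(letter_grade(72))           # "B" under my 20-point bands
    print(letter_grade(72, band=10))  # "C" under traditional 10-point bands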

Second: of course the instructor can craft the exams to be as difficult as s/he wishes. After both exams so far this term, the vast majority of students have reported that they accessed electronic resources (notes, the internet) less or much less than they had expected, mostly because they didn't feel they had time during the exam. From this perspective, the likely conclusions are that access to electronic resources:
1) might only benefit the top students, who already excel at test-taking and perhaps the course material, by giving them alone the ability to double-check answers they're unsure of, or
2) doesn't impact grades at all, because few (if any) students use the available resources.

In either case, I feel like I'm winning: either I'm reinforcing the habit of double-checking one's work (a good practice for anybody) and improving metacognition (students aren't re-checking work they're reasonably sure they know the answer to), or my tests are long and/or hard enough that students generally realize there is no benefit to spending time looking up notes.

I can also report that, from my own observations during the tests, I rarely saw students using their tablets to access resources. When I did, most of the students were looking back at lecture slides and notes.

Third: because I'm a scientist and need something to measure, I looked at whether there is a statistical relationship between students' test grades and the degree to which they self-reported accessing notes during the exam (as expected vs. less than expected). If "cheating" were an issue, I might predict that students who used digital resources as much as they expected would score higher than those who felt they had no time to use those resources. However, a t-test produces p = 0.95; in other words, there is no detectable difference in grades between the group that used digital resources as expected and the group that used them less than anticipated. My conclusion? Written thoughtfully (specifically: incorporating high-level Bloom's taxonomy questions), open-internet exams don't necessarily lead to cheating.
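
For anyone who wants to run the same comparison on their own gradebook, here is a minimal sketch of an unpaired two-sample t-test using SciPy; the score lists below are placeholders, not my actual data:

    # Compare exam scores between students who used digital resources
    # "as expected" and those who used them "less than expected".
    from scipy import stats

    used_as_expected = [88, 92, 75, 81, 95]             # placeholder scores
    used_less_than_expected = [90, 84, 79, 93, 87, 76]  # placeholder scores

    t_stat, p_value = stats.ttest_ind(used_as_expected, used_less_than_expected)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
    # A large p-value (like the p = 0.95 above) means the test finds no
    # evidence of a difference in mean scores between the two groups.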

In sum, my current take on developing cheat-proof exams is:

  1. Give students the opportunity to use the resources they are used to using (textbook, notes, the internet, whatever) - otherwise, the exam situation is totally unlike what they'll face anywhere else (in grad school, as an employee, as a citizen…)
  2. Incorporate higher-level Bloom's questions that inherently can't be cheated on (or, if cheating did occur, it would be obvious to the instructor - I'm thinking of plagiarism on written-response answers here)
  3. Make the exams deliberately long, and grade consistently but harshly.
  4. Embrace collaborative group work. I feel that I can ask much more difficult questions of a group than I would of an individual student, and I have been pleasantly surprised at the results.
As an example of this fourth point: on my most recent exam, the group portion required each group to select two of three questions to answer. One of the questions was something we had never practiced in class: it required the group to use PubMed to access the abstract of a published journal article and analyze some data found in it. To my great surprise, most of the groups chose the novel question over either of the other two (which involved question formats we had practiced more directly in class), and most (if not all) groups arrived at the correct interpretations.

If you need more convincing about the merits of group exams, check out this video from one of my exams, comparing the class during the individual portion and the group portion:

[Video: the class during the individual exam portion vs. the group exam portion]

Given what I've read about the quality of learning under peer instruction, I am comfortable concluding that students are much more engaged (and probably learning more!) during the group exam than during the individual exam. A far better use of class time!
Finally, I'll reiterate an important point about digital exams: the workflow is more efficient in some regards and less so in others. By distributing the PDF exam through Google Classroom, each student gets their own copy of the PDF file, with the student's name automatically included in the filename. All of the PDFs are returned to me in a single folder on Google Drive, meaning I don't have to download and rename e-mail attachments from 75 students. Most importantly to me, digital exams mean I always keep a copy of each student's exam, even after I return a scored copy to each student. This can be useful in all sorts of ways, including programmatic assessment, formative assessment for the instructor, and detecting patterns that might reveal cheating (assuming one is worried about such things).
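
As a sketch of how that single returned-exams folder can be processed in bulk (the folder path and filename pattern here are assumptions; Classroom's actual naming may differ):

    from pathlib import Path

    # Hypothetical path to the locally synced Google Drive folder.
    returns_dir = Path("Google Drive/Classroom/Genetics/Exam 2")

    exams_by_student = {}
    for pdf in sorted(returns_dir.glob("*.pdf")):
        # Assumes "<Student Name> - <assignment>.pdf"; adjust the split
        # to match however your copies are actually named.
        student = pdf.stem.split(" - ")[0]
        exams_by_student[student] = pdf

    print(f"Collected {len(exams_by_student)} exams")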

Now, if I could only figure out how to more efficiently (and still privately) return PDF exams with my annotations (scores, notes) to students without writing 75 individual e-mails…
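
One scripted approach I may eventually try is a simple mail merge; here is a hypothetical sketch (the server, credentials, addresses, and paths are all placeholders) that e-mails each student their own annotated PDF:

    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    # Placeholder roster mapping student names to e-mail addresses.
    roster = {"Ada Lovelace": "alovelace@example.edu"}

    with smtplib.SMTP_SSL("smtp.example.edu") as server:
        server.login("instructor@example.edu", "app-password")  # placeholders
        for pdf in Path("graded").glob("*.pdf"):
            student = pdf.stem.split(" - ")[0]  # name portion of the filename
            msg = EmailMessage()
            msg["Subject"] = "Your graded Genetics exam"
            msg["From"] = "instructor@example.edu"
            msg["To"] = roster[student]
            msg.set_content("Your annotated exam is attached.")
            msg.add_attachment(pdf.read_bytes(), maintype="application",
                               subtype="pdf", filename=pdf.name)
            server.send_message(msg)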
