Wednesday, March 8, 2017

Decibel Analysis for Research in Teaching (DART)

Something exciting happened this morning: I learned about a hot-off-the-presses, easy-to-use, free, web-based tool that addresses a long-standing deficit widely lamented among my STEM educator, assessment, and ed-tech colleagues.

The title, the first thing I read (h/t Mary-Pat Stein, @cellstein on Twitter, from CSU Fullerton), gave me that hopeful heartbeat-skip:

"Classroom sound can be used to classify teaching practices in college science courses."

Owens et al. PNAS (2017) Online advance [www.pnas.org/cgi/doi/10.1073/pnas.1618693114]

Here's the gist of the abstract: using audio recordings of full class sessions, Owens et al. (a LOT of et al.; the research comes from Kimberly Tanner's group at San Francisco State University) reasoned that an algorithm could classify what is happening in the classroom at each point in time.

The program is called DART: Decibel Analysis for Research in Teaching. You can find it here, register to use it for free, and be dragging and dropping your audio or video files for analysis in three minutes or less! The web tool extracts the audio and analyzes an hour's worth of it in a couple of minutes (at least it did for me this morning).

Opportunities


As the Owens et al. manuscript suggests, this opens the door to institution-wide, automated analysis of teaching practices without needing person-time in the classroom or hours spent watching (and coding) course videos. This is a game-changer. Read the manuscript for more potential benefits.

I was so excited about DART this morning because I pride myself on incorporating active learning in my courses at Fresno State. Plus (the BIG plus), I have years of lecture capture recordings that I could be analyzing RIGHT NOW! So, before getting ready for work this morning, I threw a few of my .mp4 file exports from Explain Everything at DART.

Thus, a key benefit is that those of us with stockpiles of audio can get straight to analysis. Today.

Further, with audio recording devices being dead cheap (ranging from dedicated digital audio recorders to cell phones, laptops, tablets…), everybody can (and should!) start analyzing their teaching style using this technique. Today. Except…

Obstacles


There is one really critical…I hesitate to say "shortcoming," perhaps "caveat"…with DART (and Owens et al. acknowledge this). It only codes audio into three different types (and mainly two, at that): 1) a single voice speaking, and 2) multiple voices speaking at once. As always, the devil is in the details, and how you interpret this information is up to you! The gist is that multiple voices probably indicate active learning (in one sense): "students talking to each other in groups," while a single voice is possibly "lecture." The third type is 3) nobody talking (e.g., students might be working on reflective writing), although the authors note that DART rarely codes audio this way, perhaps because silence is rarely encountered in courses.
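To make the idea concrete, here is a toy sketch of how sound-level analysis could bin classroom audio into those three categories. To be clear, this is not the published DART algorithm; the window length and decibel thresholds below are values I made up purely for illustration.

    # Toy sketch of sound-level classification of classroom audio.
    # This is NOT the published DART algorithm; thresholds and window size
    # are invented for illustration only.
    import numpy as np
    from scipy.io import wavfile

    SILENCE_DB = -45.0   # below this level: "no voice" (made-up threshold)
    MULTI_DB = -20.0     # above this level: "multiple voices" (made-up threshold)
    WINDOW_SEC = 2.0     # analyze the recording in 2-second windows

    def classify_windows(wav_path):
        """Label each window of a WAV file as no voice / single voice / multiple voices."""
        rate, samples = wavfile.read(wav_path)
        if samples.ndim > 1:                      # mix stereo down to mono
            samples = samples.mean(axis=1)
        samples = samples.astype(float)
        peak = np.abs(samples).max()
        if peak > 0:
            samples /= peak                       # normalize to [-1, 1]

        window = int(rate * WINDOW_SEC)
        labels = []
        for start in range(0, len(samples) - window + 1, window):
            chunk = samples[start:start + window]
            rms = np.sqrt(np.mean(chunk ** 2))    # loudness of this window
            db = 20 * np.log10(rms + 1e-12)       # convert to decibels (relative)
            if db < SILENCE_DB:
                labels.append("no voice")
            elif db > MULTI_DB:
                labels.append("multiple voices")  # crude proxy: louder, denser sound
            else:
                labels.append("single voice")
        return labels

DART itself is presumably doing something more careful than three hard-coded thresholds, but the basic intuition is the same: coarse sound-level features can separate quiet stretches from one-voice and many-voice stretches of class, and not much more.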

However, we all know that there is not a simple dichotomy between "lecture" and "active learning." Thus, just because your classroom is noisy does not mean you're using proven pedagogical practices to improve student learning! Not shocking, right?

What I found in my initial analyses of my own lectures did not actually surprise me. First, evidence suggests that faculty are not great at accurately self-reporting how they spend class time (see the development of COPUS and RTOP, which substantiate the need for trained third-party observers to code classroom activities). I am no different. From my last week of upper-division genetics courses, DART suggested that 100% of class time was single-voice. Wait. I swear I had some active learning taking place in those classes!

I'm concerned, and I'm not. Yes, I probably do overestimate the amount of time I spend facilitating small-group work in class. However, this does not mean that I don't teach using active learning (as I understand the phrase). I do a lot of question-asking and -answering, and formative assessment using polling software (e.g. Socrative). To some, this is active learning, but it isn't physically (or vocally) active. Thus, DART doesn't give me any information that helps me distinguish when I'm "lecturing" (single voice: me) vs. when a student is asking a question (single voice: student, indicating engagement) vs. when I am answering that question.

I immediately realized, especially after looking over the DART reports for some of my classes where I know I incorporated small-group activities and discussion, that I have made my lecture capture workflow too efficient to be useful with DART. Whenever we do "audible active learning," like small-group discussions, I routinely turn off the lecture capture recording. The reason is simple: that way I don't have to spend time after class trimming out those non-useful parts of the video before uploading it for my students to access.

So, it turns out I can't have my cake and eat it, too.

The future


How does DART immediately impact me and what I do in the classroom? Today, I performed a little experiment, suggested on the DART website, regarding how to obtain quality audio for analysis. As usual, I carried my iPad around the classroom with me during class, recording audio with screen capture. I also planted my phone on a podium at the front of the classroom and recorded all fifty minutes of audio from that position.

I just finished analyzing both recordings. As I knew would happen, when I switched apps from Explain Everything (my app for presentation + lecture capture) to Google Sheets, which we used for our in-class activity today, Explain Everything stopped recording. Thus, once again, tablet-based screen capture isn't the optimal approach for analysis of teaching using DART. However, the phone-recorded audio worked as promised. Below are the graphics from the DART report:

[DART report graphics]
The first thirty minutes of class were spent mostly in lecture mode. I was doing most of the talking:

  • lecture
  • solving practice problems
  • answering students' questions

See for yourself here: my lecture capture video from today.

And about 32 minutes into the recording, the students were working in Google Sheets to calculate chi-square values (multiple voices).
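(As an aside, for anyone curious about the activity itself, here is a minimal sketch of the kind of chi-square goodness-of-fit calculation the students set up in Google Sheets, written out in Python. The observed counts are hypothetical numbers for a dihybrid cross, invented purely for illustration.)

    # Minimal sketch of a chi-square goodness-of-fit test like the one the
    # students built in Google Sheets. The observed counts are hypothetical,
    # testing fit to a 9:3:3:1 dihybrid-cross expectation.
    from scipy.stats import chisquare

    observed = [556, 184, 193, 61]                       # hypothetical F2 phenotype counts
    ratio = [9, 3, 3, 1]
    total = sum(observed)
    expected = [total * r / sum(ratio) for r in ratio]   # expected counts under 9:3:3:1

    chi2, p = chisquare(observed, f_exp=expected)        # df = 4 - 1 = 3
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")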

So, DART works! If I continue using DART, then I'll keep using my phone (or, more likely, another tablet computer) to record audio. In fact, the most likely solution is that I'll set up a tablet to record live audio + video, so that my entire class is captured, regardless of what I'm doing on my instructor tablet.

To close, my advice is to use DART with a specific purpose, and the caveats above, in mind. It has limited usefulness beyond discriminating a noisy classroom from one where a single voice dominates, and how those data should be interpreted requires more evidence, in my opinion. Although it is a tool that could be used for institutional-scale research, I will advocate its use simply as formative assessment for individual instructors. At least now we have a solid tool for analysis of lecture capture, addressing the deficit I mentioned at the top of this post: how to efficiently get a broad sense of instructor classroom activities.

It was certainly eye-opening to look at some of my own DART analyses, but whether I will change my teaching style is unclear. I like the diversity of activities normally performed in my classes, regardless of whether the majority are of the "audible active learning" type. There is clear value in incorporating audible active learning (e.g. think-pair-share and small-group exercises, where peer discussion and teaching occur), and at the very least, the publication of DART has made me stop and reflect on what the right balance is for me and for my courses. How about for you? I hope you'll give DART a try and see what you discover!
