Wednesday, March 8, 2017

Decibel Analysis for Research in Teaching (DART)

Something exciting happened this morning - I learned about a hot-off-the-presses, easy-to-use, and free, web-based tool that addresses a long-standing deficit widely lamented among my STEM educator, assessment, and ed-tech colleagues.

The title, the first thing I read (h/t Mary-Pat Stein, @cellstein on Twitter, from CSU Fullerton), gave me that hopeful heartbeat-skip:

"Classroom sound can be used to classify teaching practices in college science courses."

Owens et al. PNAS (2017) Online advance [www.pnas.org/cgi/doi/10.1073/pnas.1618693114]

Here's the gist of the abstract: using audio recordings of full class sessions, Owens et al. (a LOT of et al.; the research comes from Kimberly Tanner's group at San Francisco State U.) show that an algorithm can classify what is happening in the classroom at each point in time.

The program is called DART: Decibel Analysis for Research in Teaching. You can find it here, register to use it for free, and be drag-and-dropping your audio or video files for analysis in three minutes or less! The web tool extracts the audio and analyzes an hour's worth in a couple of minutes (it did for me this morning, anyway).

Opportunities


As Owens' manuscript suggests, this opens the door to institution-wide, automated analysis of teaching practices without putting observers in classrooms or having someone watch (and code) course videos. This is a game-changer. Read the manuscript for more potential benefits.

I was so excited about DART this morning because I pride myself on incorporating active learning in my courses at Fresno State. Plus (the BIG plus), I have years of lecture capture recordings that I could be analyzing RIGHT NOW! So, before getting ready for work this morning, I threw a few of my .mp4 file exports from Explain Everything at DART.

Thus, a key benefit is that those of us with stockpiles of audio can get straight to analysis. Today.

Further, because audio recording devices are dead cheap (anything from dedicated digital audio recorders to cell phones, laptops, tablets…), everybody can (and should!) start analyzing their teaching style with this technique. Today. Except…

Obstacles


There is one really critical…I hesitate to say "shortcoming," so perhaps "caveat"…with DART (and Owens et al. acknowledge this). It only codes audio into three different types (and mainly two, at that): 1) a single voice speaking, and 2) multiple voices speaking at once. As always, the devil is in the details, and how you interpret this information is up to you! The gist is that multiple voices probably indicate active learning (in one sense): "students talking to each other in groups," while a single voice is possibly "lecture." The third type is 3) nobody talking (e.g. students working on reflective writing), although the authors note that DART rarely codes audio this way, perhaps because silence is rarely encountered in courses.
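
To make the loudness-based idea concrete, here is a minimal, purely illustrative Python sketch of threshold classification on audio windows. This is emphatically not DART's published algorithm (see Owens et al. for that): the sampling rate, window length, dB cutoffs, and synthetic audio are assumptions invented for illustration, and raw loudness alone cannot reliably separate one speaker from many.

```python
# Toy sketch (NOT DART's published method): label audio windows by loudness alone.
# All thresholds and parameters here are invented for illustration.
import numpy as np

SAMPLE_RATE = 16_000   # assumed sampling rate (Hz)
WINDOW_SEC = 15        # classify the recording in 15-second chunks
SILENCE_DB = -45       # below this: "no voice" (made-up threshold)
SINGLE_DB = -25        # between SILENCE_DB and this: "single voice" (made-up)

def classify_windows(samples):
    """Label each window as 'no voice', 'single voice', or 'multiple voices'."""
    window = SAMPLE_RATE * WINDOW_SEC
    labels = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12   # avoid log(0)
        db = 20 * np.log10(rms)                      # relative loudness in dB
        if db < SILENCE_DB:
            labels.append("no voice")
        elif db < SINGLE_DB:
            labels.append("single voice")
        else:
            labels.append("multiple voices")
    return labels

# Synthetic example: one quiet minute, one moderate minute, one loud minute.
rng = np.random.default_rng(0)
minute = 60 * SAMPLE_RATE
audio = np.concatenate([
    0.001 * rng.standard_normal(minute),   # near-silence
    0.02 * rng.standard_normal(minute),    # roughly "one speaker" loudness
    0.2 * rng.standard_normal(minute),     # roughly "many overlapping voices"
])
print(classify_windows(audio))
```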

However, we all know that there is not a simple dichotomy between "lecture" and "active learning." Thus, just because your classroom is noisy does not mean you're using proven pedagogical practices to improve student learning! Not shocking, right?

What I found in my initial analyses of my own lectures did not actually surprise me. First, evidence suggests that faculty are not great at accurately self-reporting how they spend class time (see the development of COPUS and RTOP for evidence of the need for trained third-party observers to code classroom activities). I am no different. From my last week of upper-division genetics classes, DART suggested that 100% of class time was single-voice. Wait. I swear I had some active learning taking place in those classes!

I'm concerned, and I'm not. Yes, I probably exaggerate the amount of time I spend facilitating small-group work in class. However, this does not mean that I don't teach using active learning (as I understand the phrase). I do a lot of question-asking and -answering, and formative assessment using polling software (e.g. Socrative). To some, this is active learning, but it isn't physically (or vocally) active. Thus, DART doesn't give me any information that helps me distinguish when I'm "lecturing" (single voice: me) vs. when a student is asking a question (single voice: student, indicating engagement) vs. when I'm answering that question.

I immediately realized, especially after looking over the DART reports for some classes where I know I incorporated small-group activities and discussion, that I have made my lecture capture workflow too efficient to be useful with DART. Whenever we have "audible active learning," like small-group discussions, I have routinely turned off the lecture capture recording. The reason is simple: then I don't have to spend time after class trimming out those less useful stretches before uploading the video for my students.

So, it turns out I can't have my cake and eat it, too.

The future


How does DART immediately impact me and what I do in the classroom? Today, I performed a little experiment suggested on the DART website about how to obtain quality audio for analysis. As usual, I carried my iPad around the classroom with me during class, recording audio with screen capture. I also planted my phone on a podium at the front of the classroom and recorded all fifty minutes of audio from that position.

I just finished analyzing both recordings. As I knew would happen, when I switched from Explain Everything (my app for presentation + lecture capture) to Google Sheets, which we used for our in-class activity today, Explain Everything stopped recording. Thus, once again, tablet-based screen capture isn't the optimal approach for analyzing teaching with DART. However, the phone-recorded audio worked as promised. Below are the graphics from the DART report:

[DART report graphics]

The first thirty minutes of class were spent mostly in lecture mode. I was doing most of the talking:

  • lecture
  • solving practice problems
  • answering students' questions

See for yourself here: my lecture capture video from today.

And about 32 minutes into the recording, the students were working in Google Sheets to calculate chi-square values (multiple voices).
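
For anyone curious about the activity itself, the spreadsheet work boils down to the goodness-of-fit formula χ² = Σ (observed − expected)² / expected. Here is a tiny Python sketch of that same calculation; the counts and the 9:3:3:1 expectation are made up for illustration, since the class's actual data set isn't shown in this post.

```python
# A quick Python equivalent of the Google Sheets chi-square exercise.
# The observed counts and the 9:3:3:1 expectation are hypothetical.
observed = [556, 184, 193, 61]                 # hypothetical phenotype counts
total = sum(observed)
expected = [total * r for r in (9/16, 3/16, 3/16, 1/16)]

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"chi-square = {chi_square:.3f} ({len(observed) - 1} degrees of freedom)")
```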

So, DART works! If I continue using DART, I'll keep using my phone (or, more likely, another tablet) to record audio. In fact, the most likely solution is that I'll set up a tablet to record live audio + video, so that my entire class is captured regardless of what I'm doing on my instructor tablet.

To close, my advice is to use DART with a specific purpose, and with the caveats above, in mind. It has limited usefulness beyond discriminating a noisy classroom from one where a single voice dominates, and how those data should be interpreted requires more evidence, in my opinion. Although it could be used for institutional-scale research, I advocate its use simply as formative assessment for individual instructors. At least now we have a solid tool for analyzing lecture capture, addressing the concern I hinted at near the top of this post: how to efficiently get a broad sense of an instructor's classroom activities.

It was certainly eye-opening to look at some of my own DART analyses, but whether I will change my teaching style is unclear. I like the diversity of activities normally performed in my classes, regardless of whether the majority are of the "audible active learning" type. There is clear value in incorporating audible active learning (e.g. think-pair-share and small-group exercises, where peer discussion and teaching occur), and at the very least, the publication of DART has made me stop and reflect on what the right balance is for me and my courses. How about for you? I hope you'll give DART a try and see what you discover!

Thursday, March 2, 2017

Videoconferencing with students

Two challenges I've faced as a faculty member have been


  1. How to provide students equal access to me outside of class
  2. How to ensure that students also have equal access to information other students obtain from me outside of class

I've written extensively about how lecture capture can help all students (those who attended a class session and those who didn't) by providing a resource for catching up with missed work and for reviewing course material.

In the last half-year, I have learned some incredibly useful things that I'm using to address both problems.

First, I learned that Fresno State, like many other CSU campuses (and campuses elsewhere), has institutional support for Zoom. Zoom, like Skype and similar platforms, is a way to videoconference using mobile devices and computers - anything with a microphone and/or camera and the ability to launch the Zoom app. I've used Zoom on an iPhone, iPad, and MacBook, for example - but it isn't limited to Apple products.

Zoom is easy to use and has some very useful built-in features, like:

  • Recording Zoom meetings (allows "office-hour capture")
  • A shared, collaborative whiteboard that all meeting participants can edit at the same time


This semester, I've adopted a dual approach to using Zoom with my students. This has let me at least partially address my two initial challenges:

  1. I let students join my in-person office hours via Zoom, in case they can't physically make it to my office during my scheduled office hours. I also give students the option of scheduling Zoom meetings with me at other (non-office-hour) times in some special situations.
  2. I record the office hour Zoom session and post it online (e.g. at YouTube) for other students to benefit from.


Here are a few quick best practices for using Zoom during office hours

With students gathered in my office and online, Zoom lets all of us interact with the same digital whiteboard. If somebody asks how to analyze a particular pedigree, for example, I can draw an example on my tablet, and the rest of the students (in person and online) can all interact with my pedigree sketch simultaneously, each seeing the others' additions. Thus, I encourage all of the students who physically attend office hours also to bring a mobile device and sign into the videoconference, mainly so that they can collaborate on that shared whiteboard. An important detail here is to make sure that each student uses the button in the Zoom app to mute their device microphone (otherwise, feedback abounds!).

As the initiator ("Host") of the Zoom meeting, I sign in from my laptop and keep the laptop camera aimed across my office, so that students who aren't physically there get one "big picture" view of who else is in the room while the laptop records the meeting.

Because I find it a little awkward to use some of the whiteboard annotation tools (e.g. the pen tool) with my laptop trackpad, I prefer also to log into my own Zoom meeting from my tablet - then I use the tablet for whiteboard annotations while my laptop records the meeting. This is another benefit of using a "real" computer to start the Zoom meeting: when you're done recording, the video is exported to a local file (as opposed to "the cloud") that you can edit, if you want, and later upload to YouTube or another hosting site.

It was relatively straightforward to write the above, but here's a supplementary video that I hope will give you a better idea of what it's like to use Zoom and inspire you to use it (especially if you have institutional access to Zoom!).