Nifty instructional technology

I’ve been taking my daughter on a tour of U.S. universities, because next autumn she’ll be applying to them. Some of these universities let you sit in on classes: among the ones we visited, Stanford, Princeton, MIT, and Harvard do. Much of what we saw is what you’d expect: the historians are excellent storytellers, and the statisticians are always struggling to pass on an appreciation of their complex subject to people who would rather just use it as a tool.

But there was one nifty gadget at MIT (where else?). The obvious bits are the little 12-button “remote controls” at all the tables. There’s one per student, plus a few extras. The professor uses them to run mini-surveys to see how many people are following the lecture. He pops up a slide with a multiple-choice question, a timer ticks down for 30 seconds, and then up pops the percentage of people who voted for each answer.
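The tallying step is easy to picture. Here is a minimal sketch in Python of what such a system might do after the 30-second window closes; the function name, the four-choice layout, and the sample vote counts are all my own assumptions, not details of the actual MIT system:

```python
from collections import Counter

def tally_votes(votes, choices="ABCD"):
    """Turn a pile of anonymous button presses into percentages.

    `votes` is a list of choice letters, one per responding clicker;
    students who never press a button simply don't appear in the list,
    so the percentages are of respondents, not of the whole class.
    """
    counts = Counter(votes)
    total = sum(counts.values())
    return {c: round(100 * counts[c] / total) for c in choices}

# Hypothetical example: 20 clickers respond during the window.
votes = ["B"] * 7 + ["A"] * 5 + ["C"] * 5 + ["D"] * 3
print(tally_votes(votes))  # {'A': 25, 'B': 35, 'C': 25, 'D': 15}
```

Because only aggregate counts ever leave the tally, there is nothing in the output that could identify a student.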

To the system, students are anonymous: this is not part of the grading process.  It is there to provide feedback for the professor; it is part of the learning process.  (People often confuse grading and learning, perhaps because both happen in schools, but they have very different goals and are often in conflict.)

The system was impressive: the first time, 35% of the students got it right (chance = 25%), and the professor groaned and said, “I’m not going to ask how many of you did the reading,” so he did a whiteboard example on the topic. (The guy had a very good heart-felt groan, and he did a good job of adapting the class on the fly to the survey results.) And the example worked (or perhaps the students got their brains in gear), because the later mini-surveys got about 85% correct (these were different questions, but on the same general topic).

This system sounds like good educational technology. You can know within a minute whether everyone is confused or not. We had four questions during a 90-minute class. The class were comfortable with the system; they’d have quick little discussions with their neighbours, and 80% or 90% would answer. It worked extremely smoothly.

Knowing whether people are confused or not should be very useful. Even in small classes where you can read body language and ask questions, it can be hard to tell. In a lecture, it is harder. It is easy to imagine that it could improve teaching by 10%, 20%, maybe more, just by reducing the amount of time that people spend in a confused state. (Though you’ll never eliminate all confusion. In this class, we actually had two heart-felt groans, but the second one happened after the professor did the survey problem wrong, and before he sorted himself out. Oh well.)