I listened to the audio from a lecture at the IA Summit 2011 called “Unmoderated, Remote Usability Testing: Good or Evil?” It was an excellent lecture by Kyle Soucy, and I recommend giving it a listen if the subject matter interests you. Even better, you can find it as a podcast on iTunes.
My main takeaway? Do a pilot test. Specifically, when conducting remote, unmoderated usability tests… do a pilot test every time. When moderating a usability test in person, or even moderating it remotely, you can correct the course of a study if the wording of a task is tricky or the participant goes off track. You can adjust as you go.
But as I watched several recordings of the remote usability study I conducted last week… I spotted a huge error, and I couldn’t correct it at all. I could only sit there and cringe, again and again, as each participant responded in the same way, looking for information that wasn’t there, and giving me data that was meaningless and unhelpful. I wasted their time. I wasted my time.
Of course, before releasing the test I ran through it myself, several times, but that’s not the same as having someone else look at it. You can spot your own bad grammar, but you can’t always spot your own confusing wording.
Next time, I’m doing a pilot test.
Note: This post was written during week 3 of Usability II.