This week we had to select a quantitative metric and defend it for use in a usability study. Since I had more time to give the assignment this week, I picked something I’ve never quite understood: error rate.
(Going forward, I’m going to italicize the quantitative metrics for clarity.)
In doing this, I learned that I didn’t want to use error rate at all! Error rate seemed to combine recording website errors with time on task. Although this study was theoretical, choosing error rate felt too complicated. Given the choice, I’d rather stick to time on task. I think it would yield results just as helpful, without all the work of recording and analyzing errors.
It makes me wonder why anyone uses error rate at all.
Anyway, using this article as a guide, I suggested measuring errors alone: their incidence, not their frequency over a given time. This method wasn’t explicitly mentioned in the assignment instructions, but we were allowed to strike out on our own if we wanted.
I highly recommend that article, by the way, if you don’t understand what UXD is talking about when it talks about errors. I only had a vague idea going in, and it cleared a lot up for me.
I would cite the section on the “four causes of errors,” but Medium isn’t great for formatting large blocks of quoted text without it looking like you wrote the thing yourself. The #3 point, on User Interface Problems, is this:
Errors caused by the interface are the ones we’re most interested in as we can usually do something about these. If users continue to click on a heading that’s not clickable (mistake) or look for a product in the wrong part of the navigation then there’s probably something about the design that we can improve.
I liked his metric on “Opportunity for errors,” and that’s what I ultimately recommended.
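To make the idea concrete for myself, here’s a minimal sketch (my own, not from the article) of how the “opportunity for errors” metric works: instead of reporting a raw error count, you divide the errors observed by the total number of chances participants had to make one. All the numbers below are hypothetical.

```python
def error_rate(total_errors: int, opportunities_per_task: int, participants: int) -> float:
    """Errors observed divided by the total opportunities to err.

    Each participant faces `opportunities_per_task` distinct places
    where an error could occur, so the denominator is
    opportunities_per_task * participants.
    """
    total_opportunities = opportunities_per_task * participants
    return total_errors / total_opportunities

# Hypothetical study: 10 participants, a task with 4 spots where an
# error could occur, and 6 errors observed across everyone.
rate = error_rate(total_errors=6, opportunities_per_task=4, participants=10)
print(f"{rate:.1%}")  # 6 errors out of 40 opportunities -> 15.0%
```

The nice property of this framing is that a complex task with many error opportunities isn’t automatically penalized relative to a simple one, since the denominator grows with the task’s complexity.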
Note: This blog was written during Usability week 4.