On July 18th, the Triangle UXPA hosted an all-day workshop by Carol Barnum at the offices of Railinc in Cary. Entitled UX Tune-Up, the workshop offered a deep dive on two of the most important user experience tools, heuristic evaluation and usability testing. Barnum guessed that most attendees would be self-taught UX practitioners--something she confirmed at the outset with an informal survey--who would benefit from focusing on these essential tools.
Barnum founded and directs the Usability Center at Southern Polytechnic State University in Atlanta, Georgia, where she teaches usability and user experience practice, and consults for clients such as Delta Airlines and Cox Communications.
I’ve embedded the slides from the workshop below, but here’s a brief overview of what was covered:
Heuristic Evaluation (and Expert Review)

Barnum described the classic method of heuristic evaluation, in which 3-5 usability experts “walk in the user’s shoes” through a website or product, guided by a heuristic. A heuristic, or guide, is basically a list of principles to refer to in your evaluation; the most popular usability heuristic is Jakob Nielsen’s 10 Heuristics for User Interface Design. Barnum had attendees work in groups to do a heuristic evaluation of an example site, giving us a greater sense of the process.
Here are a few takeaways from the morning that stood out to me:
- Barnum emphasized the need to have a user profile (or persona) and a scenario from which perspective to perform the evaluation.
- Heuristic evaluation, when done less formally, e.g. with fewer evaluators or without a set heuristic, is usually referred to as “expert review.”
- Consider using other heuristics, or making your own, depending on the project. Barnum also recommended Whitney Quesenbery’s 5 E’s.
- When presenting findings from an evaluation or expert review, rank them by severity and tie each to the heuristic or principles it violates.
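As a quick illustration of that last point (my own sketch, not something from the workshop), findings can be captured as simple records with a severity score and the heuristic each one violates, then presented worst-first. The issues and severity scale here are hypothetical examples; the 0-4 severity scale is Nielsen's.

```python
# Toy sketch: ranking heuristic-evaluation findings by severity and
# tying each to the heuristic it violates. Severity uses Nielsen's
# 0-4 scale (0 = not a problem, 4 = usability catastrophe).
findings = [
    {"issue": "No feedback after form submit",
     "heuristic": "Visibility of system status", "severity": 3},
    {"issue": "Jargon in error messages",
     "heuristic": "Match between system and the real world", "severity": 2},
    {"issue": "No way to cancel checkout",
     "heuristic": "User control and freedom", "severity": 4},
]

# Present the worst problems first, each tagged with its heuristic.
for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    print(f'[{f["severity"]}] {f["issue"]} -> violates: {f["heuristic"]}')
```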
For the afternoon, Barnum shifted the workshop’s focus to usability testing. She described her work in the Usability Center at SPSU, going through her methods, sharing some examples of findings, and discussing the wide variety of approaches to testing. Barnum expressed support for small studies with 4-6 users, and pointed to some great resources on running them.
Barnum fielded many questions on the various stages of the usability process, from screening and recruiting, to the test protocol, to reporting findings. I’d encourage you to go through her slides for more detail on the process.
Overall, this was a great workshop--very focused on getting attendees a solid understanding of two essential tools. Barnum left us with a mantra for usability testing: test early, small, and often.
She also shared one other tool, which I hadn’t known about before, that she uses in conjunction with usability testing: Microsoft’s Product Reaction Cards. These are 118 cards, each with one adjective, that the tester spreads out in random order on a table; the user then chooses the few cards that s/he feels best describe the product being tested. Barnum said the cards are “like magic” in that users very often gravitate toward the same words.
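To make the procedure concrete, here’s a toy simulation of the card step (my own sketch, not Barnum’s tooling): shuffle the deck, let each participant pick a few cards, and tally which words recur across sessions. The word list is a small hypothetical stand-in for the real 118-card set, and the random picks stand in for real participants’ choices.

```python
import random
from collections import Counter

# Hypothetical subset of the 118 Product Reaction Card adjectives.
cards = ["Usable", "Confusing", "Fast", "Frustrating", "Trustworthy", "Cluttered"]

def run_session(picks=3, rng=random):
    """Lay the cards out in random order and take a few, as a stand-in
    for one participant's selections."""
    deck = cards[:]
    rng.shuffle(deck)
    return deck[:picks]

# Tally word choices across five simulated sessions; in a real study,
# recurring words are the signal Barnum described as "like magic".
tally = Counter(word for _ in range(5) for word in run_session())
print(tally.most_common())
```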
For those looking to go a little deeper on these methods, Carol has a book on the topic: Usability Testing Essentials: Ready, Set...Test!
Thanks to Carol for sharing her expertise, and thanks to Railinc for hosting the event!