Fun, Delight, and Desirability: Testing for Intangibles
Ever tried to slip questions about aesthetics or subjective preferences into a round of usability testing? Although it’s against best practice, it’s not as uncommon as you’d think.
Companies want (and sometimes need) user feedback to validate visual design decisions, but there’s no standard way of getting that particular information.
Sometimes, companies resort to repurposing well-known research methods like usability testing and focus groups to get users’ approval of visual design. Often, their approach is as simple as asking participants which design they prefer and why.
But if you’ve ever tried it, you know that getting a useful answer isn’t as easy as asking the question. In fact, asking the question often results in contradictory findings, confusing recommendations… and ultimately, lots and lots of noise. Many users don’t know why a design makes them feel a certain way, and those who do often struggle to articulate it. The few who can provide reasons tend to offer feedback too subjective to base a business decision on.
A recent article at UXmatters.com addresses the practice of asking usability participants which visual design option they prefer. The author shares my belief that such feedback creates more noise than it does clarity — but he proposes an interesting alternative and offers a case study on how the method played out at his company.
- In the creative brief, identify the three to five descriptors that the creative design should ideally evoke.
- After information design is complete, prepare multiple creative concepts: web page comps with the same layout and content but differing aesthetics.
- Arrange one set of test participants for each visual concept you intend to test.
- Develop a set of adjectives that participants might use to describe the site. Be sure these adjectives are words your participants will understand, are relevant to your research, and include a mix of terms people might consider positive (around 60%) and negative (around 40%). Microsoft used nearly 120 descriptors in their testing.
- Ask each participant to view a comp, then select the three to five adjectives they feel best describe it.
- Ask them why they selected each of the words they did.
- Track the responses.
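As a rough sketch of the “track the responses” step, here’s how you might tally adjective selections per concept and check them against the descriptors targeted in the creative brief. All concept names, adjectives, and selections below are hypothetical, purely for illustration:

```python
from collections import Counter

def tally(picks, targets):
    """Count adjective selections and the share that match target descriptors."""
    counts = Counter(word for participant in picks for word in participant)
    total = sum(counts.values())
    hit_rate = sum(c for w, c in counts.items() if w in targets) / total
    return counts, hit_rate

# Hypothetical data: descriptors from an imagined creative brief.
targets = {"clean", "trustworthy", "fresh"}

# One list of chosen adjectives per participant, keyed by visual concept.
responses = {
    "concept_a": [["clean", "professional", "trustworthy"],
                  ["clean", "boring", "professional"],
                  ["trustworthy", "clean", "dated"]],
    "concept_b": [["busy", "creative", "fresh"],
                  ["creative", "fresh", "confusing"],
                  ["busy", "confusing", "creative"]],
}

for concept, picks in responses.items():
    counts, rate = tally(picks, targets)
    print(concept, counts.most_common(3), f"on-target: {rate:.0%}")
```

Even with a simple tally like this, the key takeaways surface quickly: you can see at a glance which adjectives dominate each concept and how often participants reach for the words you were aiming for.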
Although the posts I found are not explicit about the ideal number of participants, the idea is that it doesn’t take long for key takeaways to bubble up. Soon, you should understand whether your design succeeds at evoking the descriptors you targeted.
I’ve often faced this research dilemma but have never found a satisfying solution, so I’m eager to try this method. Have you ever tried a test technique similar to this one?