July 8, 2015
1 min read

Avoiding bias in the oh-so-human world of user testing

"Dear Optimal WorkshopMy question is about biasing users with the wording of questions. It seems that my co-workers and I spend too much time debating the wording of task items in usability tests or questions on surveys. Do you have any 'best practices' for wordings that evoke unbiased feedback from users?" — Dominic

Dear Dominic, Oh I feel your pain! I once sat through a two-hour meeting that was dominated by a discussion on the merits of question marks! It's funny how wanting to do right by users and clients can tangle us up like fine chains in an old jewellery box. In my mind, we risk provoking bias when any aspect of our research (from question wording to test environment) influences participants away from an authentic response. So there are important things to consider outside of the wording of questions as well. I'll share my favorite tips, and then follow up with a must-read resource or two.

Balance your open and closed questions

The right balance of open and closed questions is essential to obtaining unbiased feedback from your users. Ask closed questions only when you want a very specific answer like 'How old are you?' or 'Are you employed?' and ask open questions when you want to gain an understanding of what they think or feel. For example, don’t ask the participant 'Would you be pleased with that?' (closed question). Instead, ask 'How do you feel about that?' or even better 'How do you think that might work?' The same advice goes for surveys, and be sure to give participants enough space to respond properly — fifty characters isn’t going to cut it.

Avoid using words that are linked to an emotion

The above questions lead me to my next point — don’t use words like ‘happy’. Don’t ask if they like or dislike something. Planting emotion-based words in a survey or usability test is an invitation for participants to tell you what they think you want to hear. No one wants to be seen as being disagreeable. If you word a question like this, chances are they will end up agreeing with the question itself, not the content or meaning behind it...does that make sense? Emotion-based questions only serve to distract from the purpose of the testing — leave them at home.

Keep it simple and avoid jargon

No one wants to look stupid by not understanding the terms used in the question. If it’s too complicated, your user might just agree or tell you what they think you want to hear to avoid embarrassment. Another issue with jargon is that some terms may have multiple meanings which can trigger a biased reaction depending on the user’s understanding of the term. A friend of mine once participated in user testing where they were asked if what they were seeing made them feel ‘aroused’. From a psychology perspective, that means you’re awake and reacting to stimuli.

From the user's perspective? I’ll let you fill in the blanks on that one. Avoid using long, wordy sentences when asking questions or setting tasks in surveys and usability testing. I’ve seen plenty of instances of overly complicated questions that make the user tune out (trust me, you would too!). And because people don't tend to admit their attention has wandered during a task, you risk getting a response that lacks authenticity — maybe even one that aims to please (just a thought...).

Encourage participants to share their experiences (instead of tying them up in hypotheticals)

Instead of asking your user what they think they would do in a given scenario, ask them to share an example of a time when they actually did do it. Try asking questions along the lines of 'Can you tell me about a time when you…?' or 'How many times in the last 12 months have you...?' Asking them to recall an experience they had allows you to gain factual insights from your survey or usability test, not hypothetical maybes that are prone to bias.

Focus the conversation by asking questions in a logical order

If you ask usability testing or survey questions in an order that doesn’t quite follow a logical flow, the user may think that the order holds some sort of significance, which in turn may change the way they respond. It’s a good idea to ensure that the questions tell a story and follow a logical progression, for example the steps in a process — don’t ask me if I’d be interested in registering for a service if you haven’t introduced the concept yet (you’d be surprised how often this happens!). For further reading on this, be sure to check out this great article from usertesting.com.

More than words — the usability testing experience as a whole

Reducing bias by asking questions the right way is really just one part of the picture. You can also reduce bias by influencing the wider aspects of the user testing process, and ensuring the participant is comfortable and relaxed.

Don’t let the designer facilitate the testing

This isn’t always possible, but it’s a good idea to try to get someone else to facilitate the usability testing on your design (and choose to observe if you like). This will prevent you from bringing your own bias into the room, and participants will be more comfortable being honest when the designer isn't asking the questions. I've seen participants visibly relax when I've told them I'm not the designer of a particular website, when it's apparent they've arrived expecting that to be the case.

Minimize discomfort and give observers a role

The more comfortable your participants are, with both the tester and the observer, the more they can be themselves. There are labs out there with two-way mirrors to hide observers, but in all honesty the police interrogation room isn’t always the greatest look! I prefer to have the observer in the testing room, while being conscious that participants may instinctively be uncomfortable with being observed. I’ve seen observer guidelines that insist observers (in the room) stay completely silent the entire time, but I think that can be pretty creepy for participants! Here's what works best (in my humble opinion).

The facilitator leads the testing session, of course, but the observer is able to pipe up occasionally, mostly for clarification purposes, and certainly join in the welcoming, 'How's the weather?' chit chat before the session begins. In fact, when I observe usability testing, I like to be the one who collects the participant from the foyer. I’m the first person they see and it’s my job to make them feel welcome and comfortable, so when they find out I'll be observing, they know me already. Anything you can do to make the participant feel at home will increase the authenticity of their responses.

A note to finish

At the end of the day the reality is we’re all susceptible to bias. Despite your best efforts you’re never going to eradicate it completely, but just being aware of and understanding it goes a long way to reducing its impacts. Usability testing is, after all, something we design. I’ll leave you with this quote from Jeff Sauro's must-read article on 9 biases to watch out for in usability testing:

"We do the best we can to simulate a scenario that is as close to what users would actually do .... However, no amount of realism in the tasks, data, software or environment can change the fact that the whole thing is contrived. This doesn't mean it's not worth doing."


Dan Dixon and Stéphan Willemse: HCD is dead, long live HCD

There is a strong backlash against the perceived failures of Human Centred Design (HCD) and its contribution to contemporary macro problems. The connection seems straightforward: HCD and Design Thinking have been adopted by organizations and are increasingly part of product and experience development, especially in big tech. However, the full picture is more complex, and HCD does have some issues.

Dan Dixon, UX and Design Research Director and Stéphan Willemse, Strategy Director/Head of Strategy, both from the Digital Arts Network, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about the evolution and future of HCD.

In their talk, Dan and Stéphan cover the history of HCD, its use today, and its limitations, before presenting a Post HCD future. What could it be, and how should it be different? Dan and Stéphan help us to step outside of ourselves as we meet new problems with new ways of Design Thinking.

Dan Dixon and Stéphan Willemse bios

Dan is a long-term practitioner of human-centred experience design and has a wealth of experience in discovery and qualitative research. He’s worked in academic, agency and client-side roles in both the UK and NZ, covering diverse fields such as digital, product design, creative technology and game design. His history has blended a background in the digital industry with creative technology teaching and user experience research. He has taken pragmatic real-world knowledge into a higher education setting, as well as bringing deeper research skills from academia into commercial design projects. Through higher education, talks and workshops, Dan has been teaching and sharing these skills for the last 16 years.

Stéphan uses creativity, design and strategy to help organizations innovate towards positive, progressive futures. He works across innovation, experience design, emerging technologies, cultural intelligence and futures projects with clients including Starbucks, ANZ, Countdown, TradeMe and the public sector. He holds degrees in PPE, Development Studies, Education and an Executive MBA. However, he doesn’t like wearing a suit and his idea of the perfect board meeting is at a quiet surf break. He thinks ideas are powerful and that his young twins ask the best questions about the world we live in.

Contact Details:

Email: dan.dixon@digitalartsnetwork.com

Find Dan on LinkedIn  

HCD IS DEAD, LONG LIVE HCD 👑

Dan and Stéphan take us through the evolving landscape of Human Centred Design (HCD) and Design Thinking. Can HCD effectively respond to the challenges of the modern era, and can we get ahead of the unintended consequences of our design? They examine the inputs and processes of design, not just the output, to scrutinize the very essence of design practice.

A brief history of HCD

In the 1950s and 1960s, designers began exploring the application of scientific processes to design, aiming to transform it into a systematic problem-solving approach. Later in the 1960s, design thinkers in Scandinavia initiated the shift towards cooperative and participative design practices. Collaboration and engagement with diverse stakeholders became integral to design processes. Then, the 1970s and 1980s marked a shift in perspective, viewing design as a fundamentally distinct way of approaching problems. 

Moving into the late 1980s and 1990s, design thinking expanded to include user-centered design, and the idea of humans and technology becoming intertwined. Then the 2000s witnessed a surge in design thinking, where human-centered design started to make its mark.

Limitations of the “design process”

Dan and Stéphan discuss the “design squiggle”, a concept that portrays the messy and iterative nature of design, starting chaotically and gradually converging toward a solution. For 20 years, beginning in the early 90s, this was a popular way to explain how the design process feels. However, in the past 10 years or so, efforts to teach and pass down design processes have become common practice. Enter concepts like the “double diamond” and “pattern problem”, which seek to be repeatable and process-driven. These neat processes, however, demand rigid adherence to specific design methods, which can ultimately stifle innovation.

Issues with HCD and its evolution

The critique of such rigid design processes, which developed alongside HCD, highlights the need to acknowledge that humans are just one element in an intricate network of actors. By putting ourselves at the center of our design processes and efforts, we already limit our design. Design is just as much about the ecosystem surrounding any given problem as it is about the user. A limitation of HCD is that we humans are not actually at the center of anything except our own minds. So, how can we address this limitation?

Post-anthropocentric design starts to acknowledge that we are far less rational than we believe ourselves to be. It captures the idea that there are no clear divisions between ‘being human’ and everything else. This concept has become important as we adopt more and more technology into our lives, and we’re getting more enmeshed in it. 

Post-human design extends this further by removing ourselves from the center of design and empathizing with “things”, not just humans. This concept embraces the complexity of our world and emphasizes how we need to think about the problem just as much as we think about the solution. In other words, post-human design encourages us to “live” in our design problem(s) and consider multiple interventions.

Finally, Dan and Stéphan discuss the concept of Planetary design, which stresses that everything we create, and everything we do, has the possibility to impact everything else in the world. In fact, our designs do impact everything else, and we need to try and be aware of all possibilities.

Integrating new ways of thinking about design

To think beyond HCD and to foster innovation in design, we can begin by embracing emerging design practices and philosophies such as "life-centered design", "society-centered design", and "humanity-centered design." These emerging practices have toolsets that are readily available online and can be seamlessly integrated into your design approach, helping us to break away from traditional, often linear, methodologies. Or, taking a more proactive stance, we can craft our own unique design tools and frameworks.

Why it matters 🎯

To illustrate how design processes can evolve to meet current and future challenges of our time, Dan and Stéphan present their concept of “Post human-centered design” (Post HCD). At its heart, it seeks to take what's great about HCD and build upon it, all while understanding its issues/limitations.

Dan and Stéphan put forward, as a starting point, some challenges for designers to consider as we move our practice to its next phase.

Suggested Post HCD principles:

  • Human to context: Moving from a human-centred to a context-centred or context-sensitive point of view.
  • Design process to design behaviour: Not being beholden to design processes like the “double diamond”. Instead of thinking about designing for problems, we should design for behaviours instead.
  • Problem-solutions to interventions: Thinking more broadly about interventions in the problem space, rather than solutions to the problems.
  • Linear to dynamic: Understanding ‘networks’ and complex systems.
  • Repeated to reflexive: Challenging status quo processes and evolving with the challenges that we’re trying to solve.

The talk wraps up by encouraging designers to incorporate some of this thinking into everyday practice. Some key takeaways are: 

  • Expand your web of context: Don’t just think about things having a center, think about networks.
  • Have empathy for “things”: Consider how you might then have empathy for all of those different things within that network, not just the human elements of the network.
  • Design practice is exploration and design exploration is our practice: Ensure that we're exploring both our practice as well as the design problem.
  • Make it different every time: Every time we design, try to make it different, don't just try and repeat the same loop over and over again.
