May 30, 2019
6 min

Getting started with usability testing

Optimal Workshop

Summary: Usability testing is crucial to product success, and there’s really no excuse not to do it. Here’s a quick guide to prepare you for your first usability test.

Back in 1994, Jeff Bezos founded an online bookstore called Amazon, and its website launched to little fanfare. Selling only books, it seemed like yet another internet company destined to fade away like so many others in the dot-com bubble. But Amazon followed a very different path. The website grew rapidly in scale and value, and within five years it was a leader in online shopping. As any internet user now knows, Amazon does a whole lot more than sell books.

It wasn’t clever advertising that led to the massive success of Amazon (although that certainly helped), but a drive to place usability at the core of the company’s focus. According to Forbes, Amazon CEO Jeff Bezos invested 100 times more in the customer experience than advertising during Amazon’s first year of operation, meaning everything from how customers make their way through the website to how they get support from Amazon staff. Reinforcing the value of the customer experience, Nielsen Norman Group found that organizations that spend just 10% of their budgets on usability will see, on average, a 135% increase in their desired metrics.

You can’t afford to skip over usability testing. It’s essential to give products and services the best chance of succeeding – and here’s how you get started.

Unlike some other more exploratory user research methods, usability testing is very much focused on testing a prototype. It’s true that you can test a finished product, but it’s best practice to test prototypes as you develop them, iterate and improve, and then test again. This regular cadence of test/develop/feedback means the users’ point of view is constantly fed back to the people who need to understand it.

We won't dive into tools and how to actually design a prototype, but the key thing is that the prototype should be informed by an understanding of the customer and how they're going to be using the product/service (e.g., generative research). Again, the best way to kick off the testing loop is to design something relatively rough and conceptual, and then test and improve before developing detailed designs.

This leads nicely to our next point: setting clear goals and metrics.

What is it that you want to find out about how people use your prototype? Setting clear goals for your testing will, in turn, help you to define some clear metrics. While sometimes it’s immediately obvious what you need to test, in other cases (especially if you’re new to the idea of usability testing) it can be a bit more of a struggle. This is where it’s best to engage with those people who have a stake in the outcome of your research.

You can also turn to your existing users. Look at how they currently use your product and use this to decide on key tasks for them to perform. Keep in mind that you want to test how a person completes a particular task – you’re not testing a particular feature. Your focus should be on how the feature supports the thing that a person is trying to do.

Now for the part that often stumps new user researchers: figuring out metrics. It’s an understandably confusing process at first. There are a number of metrics that you can use to measure usability.

  • Time on task: The amount of time participants spend on a task before successfully completing it.
  • Successful task completion: The percentage of tasks that participants successfully complete.
  • Critical errors: Errors that prevent participants from completing tasks.
  • Non-critical errors: Errors that do not affect a participant’s ability to complete a task.
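The first two metrics are simple to compute once you've logged your sessions. Here's a rough sketch in Python, purely to illustrate the arithmetic (the session records and field names are made up):

```python
from statistics import mean

# Hypothetical session log: one record per participant attempt at a task.
sessions = [
    {"participant": "P1", "seconds": 95,  "completed": True,  "critical_errors": 0},
    {"participant": "P2", "seconds": 140, "completed": True,  "critical_errors": 0},
    {"participant": "P3", "seconds": 210, "completed": False, "critical_errors": 1},
]

completed = [s for s in sessions if s["completed"]]

# Time on task: average time among successful completions only.
time_on_task = mean(s["seconds"] for s in completed)

# Successful task completion: percentage of all attempts that succeeded.
completion_rate = 100 * len(completed) / len(sessions)

print(f"Time on task: {time_on_task:.0f}s")
print(f"Completion rate: {completion_rate:.0f}%")
```

Note that time on task is usually averaged over successful completions only, since an abandoned attempt doesn't tell you how long the task takes.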

Here are some examples of goals (and metrics) for usability testing:

  • Goal: Does our online shopping website’s checkout process make it easy for customers to add additional products?
  • Metric: Successful task completion. What percentage of users were able to complete this process?
  • Goal: Is it easy for customers to contact support staff through our mobile app?
  • Metric: Time on task. How long did it take participants to complete this task successfully?

It’s best to nail down your goals and metrics before you start working on the plan for how you’ll run your usability tests. You can also use usability scales to help with this. This article from UX Collective provides a great rundown on some of the most popular metrics you can use in your testing.

With an understanding of what you want to learn, it’s time to start putting together your testing plan. This is basically a list of the tasks you’ll give to participants in order to evaluate your prototype, as well as the script that you’ll follow to ensure you’re covering everything you need to during each testing session. Yes, even experienced user researchers bring along a script!

For your tasks, take a look at your prototype with a specific eye towards the areas that you’d like to test. If you want to test a sign-up flow, for example, figure out the ideal path that someone should take to successfully complete this process and break it down step by step. Why is this important? Well, if you have an idea of the steps needed to complete a task, you can evaluate the prototype by seeing how users make their way through them.

Here’s an example of what that sign-up flow could look like for a website, step by step:

  • Click ‘Register’ on the homepage, taken to the sign-up page
  • Add personal details
  • Click ‘Next’ at the bottom of the page, taken to email opt-in page
  • Click to sign up for different emails
  • Click ‘Next’ at the bottom of the page, taken to account page

Chances are, there will be a number of ways to complete your task, and it’s important to account for each of them. Don’t assume that just because you’ve designed a fancy navigation, users will head there first when tasked with finding a specific page – they may head straight down to the footer. This means your list should account for these alternative steps.
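One way to keep track of these alternatives is to write each valid route through the task as an ordered list of steps, then check a participant's observed behavior against that list. A minimal sketch, where the route names and steps are entirely illustrative:

```python
# Hypothetical: every planned route through the sign-up task, in order.
expected_routes = [
    ["home:register", "details", "email-opt-in", "account"],    # via homepage button
    ["footer:register", "details", "email-opt-in", "account"],  # via footer link
]

def matches_route(observed, routes):
    """True if the participant's observed steps follow one of the planned routes."""
    return any(observed == route for route in routes)

# A participant who headed straight to the footer still counts as a success.
result = matches_route(
    ["footer:register", "details", "email-opt-in", "account"], expected_routes
)
print(result)
```

Any observed sequence that matches no planned route is worth a closer look during analysis: it's either a path you didn't anticipate or a usability problem.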

It's always a good idea to run a pilot session with someone beforehand to make sure the tasks make sense and the script flows, and to time the session so it fits what you've planned.

Once you’ve come up with your tasks, it’s time to move onto the script. As we mentioned above, even experienced user researchers like to have a script on hand during usability tests. Here’s an example script that you can tweak and refine for your own purposes. Over time, you’ll probably find that you develop a template that works really well for you.

Hey [Name]. First of all, I just want to thank you for coming along to this study. Before we get started, I’d like to give you a quick overview of how everything is going to work – please let me know if you have any questions!
The format for this session will be quite simple. I’ve got a series of tasks I’d like you to work through, and I’ll ask a few questions as we go. Before each task, I’ll give you some context around why you’d even be doing this and your goal.
I just want you to know that we’re not testing you – we’re testing the [website/app]. There are no wrong answers. Everything you do is valuable learning for us.
Lastly, I’d like you to think aloud as much as possible as you work through the tasks. For example, if you’re looking for something, say “I’m looking for X thing”. This will help me to better evaluate how well the [website/app] is working.
Ready to get started? Any other questions? Let’s go!

Note: If you plan to record the session, this is also the place to ask for recording consent.

If you’d like an alternative template, Steve Krug has one on his website here.

Take some time to get your script to a point where you feel comfortable going through it, and then turn your attention to the tasks you’ll be running during your session.

To start off, give a high-level explanation/story that your participant can use to get into the right frame of mind, and then add any detail about why they might be trying to complete that particular task.

Lastly, there’s the final wrap-up/debrief. Allocate some time at the end of your session to ask for any final comments and to thank the participant for coming along.

Having a solid script and a list of tasks is really just one part of the usability testing equation. It’s equally important to take good notes during sessions and then document these for future reference – and analysis.

Unfortunately, it’s often a time-consuming process, so it helps to use the right tool. A qualitative notetaking tool like Reframer can capture your insights all in one place and make it easier to analyze your data and spot themes. Tagging observations as you go also lets you filter them by tag later during analysis, making it easier to identify pain points and moments of delight.
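Reframer handles tagging and filtering in its interface, but the underlying idea is straightforward. Purely to illustrate it, here's a rough sketch in Python (the observations and tag names are invented):

```python
from collections import Counter

# Hypothetical observations captured during sessions, each with free-form tags.
observations = [
    {"note": "Couldn't find the Register button",   "tags": ["navigation", "pain-point"]},
    {"note": "Loved the confirmation animation",    "tags": ["delight"]},
    {"note": "Scrolled past the footer link twice", "tags": ["navigation", "pain-point"]},
]

# Filter observations by a tag of interest...
pain_points = [o["note"] for o in observations if "pain-point" in o["tags"]]

# ...and count tag frequency across all observations to surface themes.
themes = Counter(tag for o in observations for tag in o["tags"])

print(pain_points)
print(themes.most_common(2))
```

The tags that come up again and again across participants are your themes; the notes behind them tell you what to fix.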

You can learn more about Reframer and the other tools on the Optimal Workshop platform here, or try them out for yourself.


Once you pull out the themes from your research, you can feed this back into the design and iterate from there. Ideally, this is a continuous loop – and it’s one of the best ways to ensure you’ve got a regular stream of customer data feeding into your product development.

There’s no need to be afraid of the usability testing process. It’s a key link in the chain of developing usable, successful products. Sure, those first few sessions can be nerve-wracking, but there’s a good chance your participants will be more anxious than you are. Plus, you’ll be bringing a script, so everything you need to say is mapped out for you.

If you ever need a reminder of just how valuable usability testing can be, you only need to take one look at Amazon.
