February 27, 2023

The Role of Usability Metrics in User-Centered Design

The term ‘usability’ captures how usable, useful, enjoyable, and intuitive users find a website or app. By its very nature, usability is somewhat subjective. But what we’re really looking for when we talk about usability is how well a website can be used to achieve a specific task or goal. Using this definition, we can analyze usability metrics (standard units of measurement) to understand how well a user experience design is performing.

Usability metrics provide helpful insights before and after any digital product is launched. They help us form a deeper understanding of how to design with the user front of mind. This user-centered design approach is considered best practice for building effective information architecture and user experiences that help websites, apps, and software meet and exceed users' needs.

In this article, we’ll highlight key usability metrics, how to measure and understand them, and how you can apply them to improve user experience.

Understanding Usability Metrics

Usability metrics aim to understand three core elements of usability, namely: effectiveness, efficiency, and satisfaction. A variety of research techniques offer designers an avenue for quantifying usability. Quantifying usability is key because we want to measure and understand it objectively, rather than making assumptions.

Types of Usability Metrics

There are a few key metrics that we can measure directly if we’re looking to quantify effectiveness, efficiency, and satisfaction. Here are four common examples (a short calculation sketch follows the list):

  • Success rate: Also known as ‘completion rate’, success rate is the percentage of users who successfully complete a given task.
  • Time-based efficiency: Also known as ‘time on task’, time-based efficiency measures how much time a user needs to complete a certain task.
  • Number of errors: Sounds like what it is! It measures the average number of errors a user makes when performing a given task.
  • Post-task satisfaction: Measures a user's general impression or satisfaction after completing (or not completing) a given task.
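
As promised, here’s a minimal sketch of how these four metrics might be computed from raw test observations. The data structure, field names, and numbers are all hypothetical, for illustration only; a real study would compute these per task and look at the spread of results, not just the averages.

```python
# Minimal sketch: computing four common usability metrics from
# per-participant observations for a single task. All data and
# field names are hypothetical.
from statistics import mean

# One record per participant:
# completed the task? | seconds taken | errors made | 1-5 satisfaction rating
sessions = [
    {"completed": True,  "seconds": 74,  "errors": 0, "satisfaction": 5},
    {"completed": True,  "seconds": 102, "errors": 2, "satisfaction": 4},
    {"completed": False, "seconds": 180, "errors": 5, "satisfaction": 2},
    {"completed": True,  "seconds": 88,  "errors": 1, "satisfaction": 4},
]

# Success rate: percentage of participants who completed the task.
success_rate = 100 * sum(s["completed"] for s in sessions) / len(sessions)

# Time-based efficiency: average time on task (here, successful attempts only).
time_on_task = mean(s["seconds"] for s in sessions if s["completed"])

# Number of errors: average errors per participant for this task.
avg_errors = mean(s["errors"] for s in sessions)

# Post-task satisfaction: average rating from the follow-up questionnaire.
satisfaction = mean(s["satisfaction"] for s in sessions)

print(f"Success rate:      {success_rate:.0f}%")  # 75%
print(f"Time on task:      {time_on_task:.0f}s")  # 88s
print(f"Errors per user:   {avg_errors:.1f}")     # 2.0
print(f"Satisfaction (/5): {satisfaction:.1f}")   # 3.8
```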

How to Collect Usability Metrics

Usability metrics are outputs from research techniques deployed when conducting usability testing. Usability testing in web design, for example, involves assessing how users interact with the website by observing (and listening to) them as they complete defined tasks, such as purchasing a product or signing up for a newsletter.

Conducting usability testing and collecting usability metrics usually involves:

  • Defining the set of tasks you want to test
  • Recruiting test participants
  • Observing participants (remotely or in person)
  • Recording detailed observations
  • Following up with a satisfaction survey or questionnaire

Tools such as Reframer are helpful for conducting usability tests remotely, and they enable multiple team members to collaborate live. This is extremely handy when trying to record and organize those insightful observations! Using paper prototypes is an inexpensive way to test usability early in the design process.

The Importance of Usability Metrics in User-Centered Design

User-centered design challenges designers to put user needs first. This means in order to deploy user-centered design, you need to understand your user. This is where usability testing and metrics add value to website and app performance; they provide direct, objective insight into user behavior, needs, and frustrations. If your user isn’t getting what they want or expect, they’ll simply leave and look elsewhere.

Usability metrics identify which parts of your design aren’t hitting the mark. Knowing where users have trouble completing certain actions, or where they regularly make errors, is vital when implementing user-centered design. In short, user-centered design relies on data-driven user insight.

But why harp on about usability metrics and user-centered design? Because at the heart of most successful businesses is a well-solved user problem. Take Spotify, for example, which solved the problem of unreliable, pirated digital music files. People liked access to free digital music, but they had to battle viruses and fake files to get it. With Spotify, for a small monthly fee, or the cost of listening to a few ads, users have the best of both worlds. The same principle applies to user experience: identify recurring problems, then solve them.

Best Practices for Using Usability Metrics

Usability metrics should be analyzed by design teams of every size. However, there are some things to bear in mind when using usability metrics to inform design decisions:

  • Defining success: Usability metrics are only valuable if they are measured against clearly defined benchmarks. Many tasks and processes are unique to each business, so use appropriate comparisons and targets, usually in the form of an ‘optimized’ user (a user with high efficiency).
  • Real user metrics: Be sure to test with participants that represent your final user base. For example, there’s little point in testing your team, who will likely be intimately aware of your business structure, terminology, and internal workflows.
  • Test early: Usability testing and the resulting usability metrics have the most impact early in the design process. This usually means testing an early prototype or even a paper prototype. Early testing helps you avoid significant, unforeseen problems that could be difficult to unwind in your information architecture.
  • Regular testing: Usability metrics can change over time as user behavior and familiarity with digital products evolve. You should also test and analyze the usability of new feature releases on your website or app.

Remember, data analysis is only as good as the data itself. Give yourself the best chance of designing exceptional user experiences by collecting, researching, and analyzing meaningful and accurate usability metrics.

Conclusion

Usability metrics are a guiding light when it comes to user experience. As the old saying goes, “you can’t manage what you can’t measure”. By including usability metrics in your design process, you invite direct user feedback into your product. This is ideal because we want to leave any assumptions or guesswork about user experience at the door.

User-centered design inherently relies on constant user research. Usability metrics such as success rate, time-based efficiency, number of errors, and post-task satisfaction will highlight potential shortcomings in your design. They identify where improvements can be made, and they lay down a benchmark for checking whether any resulting updates addressed the issues.

Ready to start collecting and analyzing usability metrics? Check out our guide to planning and running effective usability tests to get a head start!


My journey running a design sprint

Recently, everyone in the design industry has been talking about design sprints. So, naturally, the team at Optimal Workshop wanted to see what all the fuss was about. I picked up a copy of The Sprint Book and suggested to the team that we try out the technique.

In order to keep momentum, we identified a current problem and decided to run the sprint only two weeks later. The short notice was a bit of a challenge, but in the end we made it work. Here’s a rundown of how things went, what worked, what didn’t, and lessons learned.

A sprint is an intensive, focused period of time in which a product or feature is designed and tested, with the goal of knowing whether or not the team should keep investing in the development of the idea. By the end of the sprint, the idea should be either validated or invalidated. In turn, this saves time and resources further down the track, because the team can pivot early if the idea doesn’t float.

If you’re following The Sprint Book you might have a structured 5-day plan that looks like this:

  • Day 1 - Understand: Discover the business opportunity, the audience, the competition, the value proposition and define metrics of success.
  • Day 2 - Diverge: Explore, develop and iterate creative ways of solving the problem, regardless of feasibility.
  • Day 3 - Converge: Identify ideas that fit the next product cycle and explore them in further detail through storyboarding.
  • Day 4 - Prototype: Design and prepare prototype(s) that can be tested with people.
  • Day 5 - Test: User testing with the product's primary target audience.
With a Design Sprint, a product doesn't need to go through a full cycle to learn about the opportunities and gather feedback.

When you’re running a design sprint, it’s important to have the right people in the room. It’s all about focus and working fast; you need the right people around you to do this without hitting blocks along the way. Team, stakeholder and expert buy-in is key — this is not a task just for a design team! After getting buy-in and picking the people who should be involved (developers, designers, product owner, customer success rep, marketing rep, user researcher), these were my next steps:

Pre-sprint

  1. Read the book
  2. Panic
  3. Send out invites
  4. Write the agenda
  5. Book a meeting room
  6. Organize food and coffee
  7. Get supplies (Post-its, paper, Sharpies, laptops, chargers, cameras)

Some fresh smoothies for the sprinters, made by our juice technician

The sprint

Due to scheduling issues we had to split the sprint over the end of the week and the weekend. Sprint guidelines suggest holding it from Monday to Friday, which is a nice block of time, but we had to do Thursday to Thursday, with the weekend off in between, and that actually worked really well. We are all self-confessed introverts and, to be honest, the thought of spending five solid days workshopping was daunting. About two days in we were exhausted, so we went away for the weekend and came back on Monday feeling sociable and recharged, ready to examine the work we’d done in the first two days with fresh eyes.

Design sprint activities

During our sprint we completed a range of different activities, but here’s a list of some that worked well for us. You can find out more about how to run most of these over at The Sprint Book website, or check out some great resources at Design Sprint Kit.

Lightning talks

We kicked off our sprint by having each person give a quick 5-minute talk on one of the topics in the list below. This gave us all an overview of the whole project, and since we each had to present, each of us became the expert in that area and engaged with the topic (rather than just listening to one person deliver all the information).

Our lightning talk topics included:

  • Product history - where we’ve come from, so the whole group has an understanding of who we are and why we’ve made the things we’ve made.
  • Vision and business goals - (from the product owner or CEO) a look ahead not just of the tools we provide but where we want the business to go in the future.
  • User feedback - what have users been saying so far about the idea we’ve chosen for our sprint. This information is collected by our User Research and Customer Success teams.
  • Technical review - an overview of our tech and anything we should be aware of (or a look at possible available tech). This is a good chance to get an engineering lead in to share technical opportunities.
  • Comparative research - what else is out there, how have other teams or products addressed this problem space?

Empathy exercise

I asked the sprinters to participate in an exercise so that we could gain empathy for those who are using our tools. The task was to pretend we were one of our customers who had to present a dendrogram to some of our team members who are not involved in product development or user research. In this frame of mind, we had to talk through how we might start to draw conclusions from the data presented to the stakeholders. We all gained more empathy for what it’s like to be a researcher trying to use the graphs in our tools to gain insights.

How Might We

In the beginning, it’s important to be open to all ideas. One way we did this was to phrase questions in the format: “How might we…” At this stage (day two) we weren’t trying to come up with solutions — we were trying to work out what problems there were to solve. ‘We’ is a reminder that this is a team effort, and ‘might’ reminds us that it’s just one suggestion that may or may not work (and that’s OK). These questions then get voted on and moved into a workshop for generating ideas (see Crazy 8s). Read more detailed instructions on how to run a ‘How might we’ session on the Design Sprint Kit website.

Crazy 8s

This activity is a super quick-fire idea generation technique. The gist of it is that each person gets a piece of paper folded into eight sections and has 8 minutes to come up with eight ideas (really rough sketches). When time is up, it’s all pens down and the rest of the team gets to review each other's ideas. In our sprint, we gave each person Post-it notes and paper, and set the timer for 8 minutes. At the end of the activity, we put all the sketches on a wall (this is where the art gallery exercise comes in).

Mila, our data scientist, sketching intensely during Crazy 8s

A close-up of some sketches from the team

Art gallery/Silent critique

The art gallery is the place where all the sketches go. We give everyone dot stickers so they can vote and pull out key ideas from each sketch. This is done silently, as the ideas should be understood without needing explanation from the person who made them. At the end of it you’ve got a kind of heat map, and you can see the ideas that stand out the most. After this first round of voting, the authors of the sketches get to talk through their ideas, then another round of voting begins.

Mila putting some sticky dots on some sketches

Bowie, our head of security/office dog, even took part in the sprint...kind of.

Usability testing and validation

The key part of a design sprint is validation. For one of our sprints we had two parts of our concept that needed validating. To test one part we conducted simple user tests with other members of Optimal Workshop (the feature was an internal tool). For the second part we needed to validate whether we had the data to continue with this project, so we had our data scientist run some numbers and predictions for us.

Our remote worker Rebecca dialed in to watch one of our user tests live
"I'm pretty bloody happy" — actual feedback

Challenges and outcomes

One of our key team members, Rebecca, was working remotely during the sprint. To make things easier for her, we set up two cameras: one pointed at the whiteboard, the other focused on the rest of the sprint team sitting at the table. Next to them, we set up a monitor so we could see Rebecca.

Engaging in workshop activities is a lot harder when working remotely. Rebecca got around this by completing the activities herself and taking photos to send to us.

For more information, read this great Medium post about running design sprints remotely.

Lessons

  • Lightning talks are a great way to have each person contribute up front and feel invested in the process.
  • Sprints are energy intensive. Make sure you’re in a good space with plenty of fresh air, comfortable chairs, and a breakout area. We like to split the five days up so that we get a weekend break.
  • Give people plenty of notice to clear their schedules. Asking busy people to take five days from their schedule might not go down too well. Make sure they know why you’d like them there and what they should expect from the week. Send them an outline of the agenda. Ideally, have a chat in person and get them excited to be part of it.
  • Invite the right people. It’s important to get the right mix of people from different parts of the company involved in your sprint. The role they play in day-to-day work doesn’t matter too much; we’re all mainly using pens and paper, and the more types of brains in the room, the better. Looking back, what we really needed on our team was a customer support team member. They have experience and knowledge about our customers that we don’t have.
  • Choose the right sprint problem. The project we chose for our first sprint wasn’t really suited to a design sprint. We went in with a well-defined problem and a suggested solution from the team, instead of a project that needed fresh ideas. This made activities like ‘How Might We’ feel redundant. The challenge we decided to tackle ended up being more of a data prototype (spreadsheets!). We used the week to validate assumptions about how we can better use data and how we can write a script to automate some internal processes. We got the prototype working and tested, but due to the nature of the project we will have to run the experiment in the background for a few months before any building happens.

Overall, this design sprint was a great team bonding experience and we felt pleased with what we achieved in such a short amount of time. Naturally, here at Optimal Workshop, we're experimenters at heart and we will keep exploring new ways to work across teams and find a good middle ground.


A comprehensive look at usability testing

Usability testing has an important role in UX and if you’re new to it, this article gives you a solid introduction to it with practical tips, a checklist for success and a look at our remote testing tool, Treejack.

Concepts of usability testing 👩🏾‍💻

Usability testing is the process of evaluating a product or service with users prior to implementation. The goal of usability testing is to identify any usability issues before the product or service is released into the world for use. Usability testing is a research activity that results in both quantitative and qualitative insights and can be used to gauge user satisfaction. A typical usability testing session is moderated and involves a participant, a facilitator and an observer. The facilitator leads the session and the observer takes notes while the participant completes the task-based scenario.

While this is common, usability testing is scalable and the possible approaches are endless, giving you the flexibility to work with the resources you have available—sometimes one person performs the roles of both facilitator and observer! Location also varies: you might conduct your testing in a lab environment, or you might talk to users in a specific setting. It’s also worth noting that not all usability testing sessions are moderated—more on this later. Usability testing usually occurs multiple times during the design process and can be conducted anytime you have a design you would like to test.

User research activities like focus groups, for example, are conducted early in the design process to explore and gain understanding before ideas are generated. Usability testing is about testing those ideas as early and as often as possible. From a fully functioning digital prototype to a simple hand-drawn wireframe on paper, nothing is too unrefined or too rough to be tested.

Developing a usability test plan 🛠️

Before you start a round of usability testing, you need to develop a usability test plan. The usability test plan will keep you organised and is an opportunity to define roles and set clear expectations upfront. The first step in developing this is to hold a meeting with your team and stakeholders to discuss what you are going to do and how you plan to achieve it. Following this meeting, a document outlining the usability test plan as it was discussed is created and shared with the group for review. Any changes suggested by the group are then added to the final document for approval from the relevant stakeholders.

What to include in your usability test plan:

  • The goal, scope and intent of the usability testing
  • Constraints impacting upon testing
  • Details on what will be tested, e.g. wireframes
  • Schedule and testing location
  • Associated costs, e.g. participant recruitment
  • Facilitator and observer details for each session
  • Session details
  • Participant recruitment approach
  • Equipment
  • Details of any documentation to be produced, e.g. a report

Usability testing questions 🤔

Once you have developed your test plan, you need to create a list of questions and task-based scenarios for the testing session. These form the structure of your testing and provide a framework of consistency across all testing sessions in the study. The questions serve as a warm-up to ease the participant into the session, and can also provide insights about the user that you may not have had before. These questions can be a combination of open and closed questions, and are especially useful if you are also developing personas, for example. Some examples of what you might ask include:

  • Tell me about a recent experience you had with this product/service
  • Do you currently use this product/service?
  • Do you own a tablet device?

The purpose of the task-based scenarios is to simulate a real-life experience as closely as possible. They provide a contextual setting for the participant to frame their approach, and they need to be realistic—your participant needs an actionable starting point to work from. A good starting point for developing task-based scenarios is to look at a use case. It is also important to avoid language that provides clues to the solution or leads your participant, as this can produce inaccurate results. An example of a task-based scenario would be: “You’re planning a Christmas vacation to New Zealand for your family of two adults and four children. Find the lowest priced airfares for your trip.”

Usability testing software: Tree testing 🌲🌳🌿

Treejack is a remote information architecture (IA) validation tool that shows you exactly where users are getting lost in your content. Knowing this will enable you to design a structure for your website that makes sense to users before moving on to the user interface (UI) design. Treejack works like a card sort in reverse: imagine you have just completed a card sort with your users to determine your IA, and you are now working backwards to test that thinking against real-world task-based scenarios. Treejack does this using a text-based version of your IA that is free from distracting visual aids such as navigation and colour, allowing you to determine whether your structure is usable from the ground up. A Treejack study is structured around task-based scenarios and comes with the option to include pre- and post-study questionnaires.

Usability testing with Treejack

As a remote testing tool, Treejack is unmoderated and provides the opportunity to reach a much larger audience, because all you have to do is share a link to the study with your participants to gain insights. You also have the option of handing the task of targeted participant recruitment over to Optimal Workshop. Once launched and shared with participants, Treejack takes care of itself, recording the results as they come in and giving you the freedom to multitask while you wait for the testing to finish.

The results produced by Treejack are not only detailed and comprehensive but are also quite beautiful. The story of your participants’ journey through your testing activity is told through pietrees. A pietree is a detailed pathway map that shows where your participants went at each fork in the road and their destinations. They allow you to pinpoint exactly where the issues lie and are a powerful way to communicate the results to your team and stakeholders.

Treejack presents your results using pietrees

Treejack also provides insights into where your participants landed their first click and records detailed information on pathways followed by each individual participant.

Treejack records full details of the paths followed by every participant

Usability testing checklist 📋

The following checklist will help ensure your usability testing process runs smoothly:

  • Meet with team and stakeholders
  • Determine goals, scope and intent of usability testing
  • Decide how many sessions will be conducted
  • Create usability testing schedule
  • Select facilitators and observers for each session if applicable
  • Develop and complete a usability test plan
  • Determine test questions and scenarios
  • Recruit participants for testing
  • Gather equipment required for testing if applicable
  • Book testing location if applicable
  • Keep a list of useful contact details close by in case you need to contact anyone during testing
  • Complete a dry run of a testing session with a team member to ensure everything works before actual testing begins
  • Organise debrief meetings with observers after each testing session
  • Set aside time to analyse the findings
  • Document and present findings to team and relevant stakeholders


Chris Green: Jobs To Be Done methodology and its role in driving customer choice

Innovation is at the core of revenue growth: finding new ways to create and capture value. Most innovations fail not because they don’t work (organizations are very good at building products and services with features and benefits), but because they don’t create value on dimensions that drive customer choice. If you don’t understand the causal drivers of customer choice, then you’re largely shooting in the dark and at risk of creating something that customers won’t choose over the alternative market solutions.

Chris Green, Head of CX and Innovation at Purple Shirt, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about the Jobs to be Done (JTBD) methodology and uncovering the causal drivers of customer choice in innovation.

In his talk, Chris walks us through the JTBD methodology, how to use it, and how it will change the way you think about markets and competition.

Background on Chris Green

Chris has a long and deep background in strategy and innovation. Chris cut his strategy teeth in the UK before moving to New Zealand in 2000 where he led various strategy teams for organisations like Vodafone, Vector, and TelstraClear. He moved to Australia in 2011 where he started to develop his expertise in the emerging field of innovation. He sharpened his innovation knowledge and skills by studying under Professor Clayton Christensen (the godfather of modern innovation theory) at Harvard University and went on to lead one of Australia's leading innovation consultancies where he helped organizations run innovation projects and build innovation capability.

Chris returned to New Zealand at the end of 2021 to lead the innovation practice of Purple Shirt, a UX design consultancy with offices in Auckland and Christchurch. In his spare time, you'll find Chris out on the water learning about foiling boats and boards.

Contact Details:

Email: chris@purpleshirt.co.nz

LinkedIn: https://www.linkedin.com/in/chris-green-kiwi/

Jobs To Be Done methodology and its role in driving customer choice

In this talk, Chris is specifically speaking about UX research in the context of building new products and services, not optimizing existing ones. He answers a critical question - how can we improve our odds of success when we launch something new to market?

Performing UX research for products and services that already exist is very different from researching totally new ones. Why? Generally, it’s because customers of existing products are good at recommending improvements to things they already know and use. They are good at this because they have user experience to draw from. The famous Henry Ford quote illustrates this well: “If I’d asked our customers what they wanted, they would have told me faster horses.”

Just because customers are giving researchers helpful and constructive feedback on a product or service doesn’t mean you should implement those improvements. In a user-focused discipline this can sound counterintuitive, but when it comes to new products and services, UX researchers should be careful about relying on user feedback absolutely.

Chris argues that customer feedback can sometimes lead us in the wrong direction. Assuming that a customer will choose our product if we simply implement their feedback is problematic. Chris stresses the difference between implementing changes that drive improvement and implementing changes that drive customer choice. They aren’t the same thing. Many businesses continually release new features, but rarely do these features drive customer choice. Yes, a new feature may make the product better than before, but does it make it so much better that customers choose your product over others?

Causal drivers of choice 🤔

When researching new products, the most critical thing to understand is this: what causes someone to choose one product over another? If you don’t know the answer, you’re guessing about your product design from the very beginning.

Traditionally, market research (typically driven by marketing departments) has been poor at finding causation. Market research tends to find correlations between customer attributes and customer behavior (e.g. people in a certain age bracket buy a certain product), but these correlations are quite shallow and do little to inform true drivers of choice. A lack of causal studies can be explained by the fact that they are difficult to conduct. They need to uncover deeper, root causes of human behavior, rather than high-level trends to be truly useful.

So, how can we find the causal drivers of choice? And why does it matter?

Why it matters 🔥

The best method for uncovering the causal drivers of choice was invented by Professor Clayton Christensen, whom Chris describes as the guru of modern innovation theory. Christensen invented Disruption Theory and the Jobs to be Done (JTBD) methodology. His fundamental insight was this: researchers shouldn’t be worried about the customer themselves; instead, they should be interested in what the customer is trying to achieve.

Christensen’s JTBD methodology is about understanding the various things that people need to complete in certain contexts. He argues that we, as consumers and customers, all look to “hire” products and services from businesses to get things done. We make a decision to buy, hire, or lease products or services into our lives in order to make progress on something we’re trying to achieve. 

These jobs to be done can be split broadly into three categories (which aren’t mutually exclusive):

  • Functional: Tasks that I want to complete
  • Emotional: How I want to feel
  • Social: How I want to be seen

Value creation opportunities arise when the currently available solutions (products/services in the market) are not getting the jobs done well. This “gap” essentially represents struggles and challenges that get in the way of progress. The gap is our opportunity to build something new that helps people get their jobs done better.

Chris uses Dropbox as a good example of an innovative company filling the gap and addressing a need for people. People found themselves “hiring” different solutions or workarounds to access their files anywhere (e.g. by emailing themselves and using USBs). Dropbox created a solution that addressed this by allowing people to store their files online and access them anywhere. This solution got the job done better by being more convenient, secure, and reliable.

The strategic relevance of “jobs” 🙌💼🎯

Using the JTBD methodology helps to change how you see the competitive landscape, thereby providing an opportunity to see growth where none might have seemed possible. 

Chris uses Snickers and Milky Way chocolate bars as examples of similar products that on the surface seem to compete against each other. Both sit in the same category, are bought in the same aisle, and have similar ingredients. However, looked at through a “jobs” lens, they address two slightly different jobs. A Snickers is bought when you need fuel and is more a replacement for a sandwich, apple, or Red Bull (i.e. a product “hired” to prepare for the future or get an energy hit). A Milky Way, on the other hand, is bought to feel better and eat emotionally, and is more a replacement for ice cream or wine (i.e. a product “hired” to cope with the past).

Chris’s talk helps us to think more strategically about our design journey. To develop truly new and innovative products and services, don’t just take your users' feedback at face value. Look beyond what they’re telling you and try to see the jobs that they’re really trying to accomplish. 
