February 27, 2023

The Role of Usability Metrics in User-Centered Design

The term ‘usability’ captures how usable, useful, enjoyable, and intuitive users perceive a website or app to be. By its very nature, usability is somewhat subjective. But what we’re really looking for when we talk about usability is how well a website can be used to achieve a specific task or goal. Using this definition, we can analyze usability metrics (standard units of measurement) to understand how well a user experience design is performing.

Usability metrics provide helpful insights before and after any digital product is launched. They help us form a deeper understanding of how we can design with the user front of mind. This user-centered design approach is considered best practice for building effective information architecture and user experiences that help websites, apps, and software meet and exceed users' needs.

In this article, we’ll highlight key usability metrics, how to measure and understand them, and how you can apply them to improve user experience.

Understanding Usability Metrics

Usability metrics aim to understand three core elements of usability, namely: effectiveness, efficiency, and satisfaction. A variety of research techniques offer designers an avenue for quantifying usability. Quantifying usability is key because we want to measure and understand it objectively, rather than making assumptions.

Types of Usability Metrics

There are a few key metrics that we can measure directly if we’re looking to quantify effectiveness, efficiency, and satisfaction. Here are four common examples:

  • Success rate: Also known as ‘completion rate’, success rate is the percentage of users who were able to successfully complete a given task.
  • Time-based efficiency: Also known as ‘time on task’, time-based efficiency measures how much time a user needs to complete a certain task.
  • Number of errors: Exactly what it sounds like! It measures the average number of errors a user makes when performing a given task.
  • Post-task satisfaction: Measures a user's general impression or satisfaction after completing (or not completing) a given task.
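As an illustration, the four metrics above can be computed directly from raw test-session records. This is a minimal sketch, assuming hypothetical session data; the field names (success, seconds, errors, satisfaction) and the 1–5 satisfaction scale are invented for the example, not part of any standard tool's output.

```python
# Hypothetical usability test sessions: one record per participant for a single task.
# Field names are illustrative only.
sessions = [
    {"success": True,  "seconds": 42.0, "errors": 0, "satisfaction": 5},
    {"success": True,  "seconds": 65.5, "errors": 2, "satisfaction": 3},
    {"success": False, "seconds": 90.0, "errors": 4, "satisfaction": 2},
    {"success": True,  "seconds": 51.0, "errors": 1, "satisfaction": 4},
]

n = len(sessions)

# Success rate: percentage of participants who completed the task.
success_rate = 100.0 * sum(s["success"] for s in sessions) / n

# Time-based efficiency: average time on task, here over successful attempts only.
successful = [s for s in sessions if s["success"]]
avg_time_on_task = sum(s["seconds"] for s in successful) / len(successful)

# Number of errors: average errors per participant for this task.
avg_errors = sum(s["errors"] for s in sessions) / n

# Post-task satisfaction: mean rating on the (assumed) 1-5 scale.
avg_satisfaction = sum(s["satisfaction"] for s in sessions) / n

print(f"Success rate:     {success_rate:.0f}%")      # 75%
print(f"Avg time on task: {avg_time_on_task:.1f}s")  # 52.8s
print(f"Avg errors:       {avg_errors:.2f}")         # 1.75
print(f"Avg satisfaction: {avg_satisfaction:.2f}/5") # 3.50/5
```

Whether time on task should average over all attempts or only successful ones is a judgment call; averaging over successes (as here) avoids abandoned attempts skewing the figure, but you may prefer to report both.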

How to Collect Usability Metrics


Usability metrics are outputs from research techniques deployed when conducting usability testing. Usability testing in web design, for example, involves assessing how a user interacts with the website by observing (and listening to) users completing defined tasks, such as purchasing a product or signing up for newsletters.

Conducting usability testing and collecting usability metrics usually involves:

  • Defining a set of tasks that you want to test
  • Recruiting test participants
  • Observing participants (remotely or in person)
  • Recording detailed observations
  • Following up with a satisfaction survey or questionnaire

Tools such as Reframer are helpful for conducting usability tests remotely, and they enable live collaboration between multiple team members. This is extremely handy when trying to record and organize those insightful observations! Using paper prototypes is an inexpensive way to test usability early in the design process.

The Importance of Usability Metrics in User-Centered Design

User-centered design challenges designers to put user needs first. This means in order to deploy user-centered design, you need to understand your user. This is where usability testing and metrics add value to website and app performance; they provide direct, objective insight into user behavior, needs, and frustrations. If your user isn’t getting what they want or expect, they’ll simply leave and look elsewhere.

Usability metrics identify which parts of your design aren’t hitting the mark. Recognizing where users might be having trouble completing certain actions, or where users are regularly making errors, are vital insights when implementing user-centered design. In short, user-centered design relies on data-driven user insight.

But why harp on about usability metrics and user-centered design? Because at the heart of most successful businesses is a well-solved user problem. Take Spotify, for example, which solved the problem of dodgy, pirated digital files being so unreliable. People liked having access to free digital music, but they had to battle viruses and fake files to get it. With Spotify, for a small monthly fee, or the cost of listening to a few ads, users have the best of both worlds. The same principle applies to user experience - identify recurring problems, then solve them.

Best Practices for Using Usability Metrics

Usability metrics should be analyzed by design teams of every size. However, there are some things to bear in mind when using usability metrics to inform design decisions:

  • Defining success: Usability metrics are only valuable if they are being measured against clearly defined benchmarks. Many tasks and processes are unique to each business, so use appropriate comparisons and targets; usually in the form of an ‘optimized’ user (a user with high efficiency).
  • Real user metrics: Be sure to test with participants that represent your final user base. For example, there’s little point in testing your team, who will likely be intimately aware of your business structure, terminology, and internal workflows.
  • Test early: Usability testing and the resulting usability metrics have the most impact early in the design process. This usually means testing an early prototype or even a paper prototype. Early testing helps you avoid significant, unforeseen challenges that could be difficult to unwind later in your information architecture.
  • Regular testing: Usability metrics can change over time as user behavior and familiarity with digital products evolve. You should also test and analyze the usability of new feature releases on your website or app.
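One way to operationalize the "defining success" point above is to compare each measured metric against an explicit benchmark before acting on it. Here is a minimal sketch; the benchmark values and metric names are invented for illustration, and real targets should come from your own baseline measurements or an ‘optimized’ user run.

```python
# Illustrative benchmarks only - replace with targets from your own baselines.
benchmarks = {
    "success_rate": 90.0,    # percent, higher is better
    "time_on_task": 60.0,    # seconds, lower is better
    "errors_per_user": 1.0,  # lower is better
    "satisfaction": 4.0,     # 1-5 scale, higher is better
}

# Hypothetical results from a round of usability testing.
measured = {
    "success_rate": 75.0,
    "time_on_task": 52.8,
    "errors_per_user": 1.75,
    "satisfaction": 3.5,
}

# For time and errors, lower is better; for the other metrics, higher is better.
lower_is_better = {"time_on_task", "errors_per_user"}

def meets_benchmark(metric: str, value: float) -> bool:
    """Return True if the measured value hits the target for this metric."""
    target = benchmarks[metric]
    return value <= target if metric in lower_is_better else value >= target

for metric, value in measured.items():
    status = "OK" if meets_benchmark(metric, value) else "NEEDS WORK"
    print(f"{metric:16s} {value:6.2f}  target {benchmarks[metric]:6.2f}  {status}")
```

Re-running the same comparison after each design iteration gives you the before-and-after benchmark check described in the conclusion below: the targets stay fixed while the measured values (ideally) move toward them.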

Remember, data analysis is only as good as the data itself. Give yourself the best chance of designing exceptional user experiences by collecting, researching, and analyzing meaningful and accurate usability metrics.

Conclusion

Usability metrics are a guiding light when it comes to user experience. As the old saying goes, “you can’t manage what you can’t measure”. By including usability metrics in your design process, you invite direct user feedback into your product. This is ideal because we want to leave any assumptions or guesswork about user experience at the door.

User-centered design inherently relies on constant user research. Usability metrics such as success rate, time-based efficiency, number of errors, and post-task satisfaction will highlight potential shortcomings in your design. They identify where improvements can be made, and they lay down a benchmark for checking whether any resulting updates actually addressed the issues.

Ready to start collecting and analyzing usability metrics? Check out our guide to planning and running effective usability tests to get a head start!

Author: Optimal Workshop

Related articles


Chris Green: Jobs To Be Done methodology and its role in driving customer choice

Innovation is at the core of revenue growth - finding new ways to create and capture value. The reason most innovations fail is not because they don’t work (organizations are very good at building products and services with features and benefits), they fail because they don’t create value on dimensions that drive customer choice. If you don’t understand the causal drivers of customer choice, then you’re largely shooting in the dark and at risk of creating something that customers don’t choose above the alternative market solutions.

Chris Green, Head of CX and Innovation at Purple Shirt, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about the Jobs to be Done (JTBD) methodology and uncovering the causal drivers of customer choice in innovation.

In his talk, Chris talks us through JTBD methodology, how to use it, and how it will change the way you think about markets and competition.

Background on Chris Green

Chris has a long and deep background in strategy and innovation. Chris cut his strategy teeth in the UK before moving to New Zealand in 2000 where he led various strategy teams for organisations like Vodafone, Vector, and TelstraClear. He moved to Australia in 2011 where he started to develop his expertise in the emerging field of innovation. He sharpened his innovation knowledge and skills by studying under Professor Clayton Christensen (the godfather of modern innovation theory) at Harvard University and went on to lead one of Australia's leading innovation consultancies where he helped organizations run innovation projects and build innovation capability.

Chris returned to New Zealand at the end of 2021 to lead the innovation practice of Purple Shirt, a UX design consultancy with offices in Auckland and Christchurch. In his spare time, you'll find Chris out on the water learning about foiling boats and boards.

Contact Details:

Email: chris@purpleshirt.co.nz

LinkedIn: https://www.linkedin.com/in/chris-green-kiwi/

Jobs To Be Done methodology and its role in driving customer choice

In this talk, Chris is specifically speaking about UX research in the context of building new products and services, not optimizing existing ones. He answers a critical question - how can we improve our odds of success when we launch something new to market?

Performing UX research for products and services that already exist is very different from totally new ones. Why? Generally, it’s because customers of existing products are good at recommending improvements for things that they already know and use. They are good at this because they have user experience to draw from. The famous Henry Ford quote illustrates this well; “If I’d asked our customers what they wanted, they would have told me faster horses.”

Just because customers are giving researchers helpful and constructive feedback on a product or service doesn’t mean you should implement those improvements. In a user-focused discipline this can sound counterintuitive, but when it comes to new products and services, UX researchers should be wary of relying on user feedback absolutely.

Chris argues that customer feedback can sometimes lead us in the wrong direction. Assuming that a customer will choose our product if we simply implement their feedback is problematic. Chris stresses the difference between implementing changes that drive improvement versus implementing changes that drive customer choice. They aren’t the same thing. Many businesses continually release new features, but rarely do these new features drive or improve consumer choice. Yes, a new feature may make the product better than before, but does it make it so much better that it makes customers choose your product over others? 

Causal drivers of choice 🤔

When researching new products, the most critical thing to understand is this: what causes someone to choose one product over another? If you don’t know the answer, you’re guessing about your product design from the very beginning.

Traditionally, market research (typically driven by marketing departments) has been poor at finding causation. Market research tends to find correlations between customer attributes and customer behavior (e.g. people in a certain age bracket buy a certain product), but these correlations are quite shallow and do little to reveal the true drivers of choice. The lack of causal studies can be explained by the fact that they are difficult to conduct: to be truly useful, they need to uncover the deeper, root causes of human behavior rather than high-level trends.

So, how can we find the causal drivers of choice? And why does it matter?

Why it matters 🔥

The best method for uncovering the causal drivers of choice was invented by Professor Clayton Christensen, whom Chris describes as the guru of modern innovation theory. He invented Disruption Theory and the Jobs to be Done (JTBD) methodology. His fundamental insight was this: researchers shouldn’t be focused on the customer; instead, they should be interested in what the customer is trying to achieve.

Christensen’s JTBD methodology is about understanding the various things that people need to complete in certain contexts. He argues that we, as consumers and customers, all look to “hire” products and services from businesses to get things done. We make a decision to buy, hire, or lease products or services into our lives in order to make progress on something we’re trying to achieve. 

These jobs to be done can be split broadly into three categories (which aren’t mutually exclusive):

  • Functional: Tasks that I want to complete
  • Emotional: How I want to feel
  • Social: How I want to be seen

Value creation opportunities arise when the currently available solutions (products/services in the market) are not getting the jobs done well. This “gap” essentially represents struggles and challenges that get in the way of progress. The gap is our opportunity to build something new that helps people get their jobs done better.

Chris uses Dropbox as a good example of an innovative company filling the gap and addressing a need for people. People found themselves “hiring” different solutions or workarounds to access their files anywhere (e.g. by emailing themselves and using USBs). Dropbox created a solution that addressed this by allowing people to store their files online and access them anywhere. This solution got the job done better by being more convenient, secure, and reliable.

The strategic relevance of “jobs” 🙌💼🎯

Using the JTBD methodology helps to change how you see the competitive landscape, thereby providing an opportunity to see growth where none might have seemed possible. 

Chris uses Snickers and MilkyWay chocolate bars as examples of similar products that on the surface seem to compete against each other. Both seem to sit in the same category, are bought in the same aisle, and have similar ingredients. However, looking at them through a “jobs” lens, they address two slightly different jobs. A Snickers is bought when you need fuel and is more a replacement for a sandwich, apple, or Red Bull (i.e. it is a product “hired” to prepare for the future/get an energy hit). A MilkyWay on the other hand is bought to make people feel better, eat emotionally, and is more of a replacement for ice cream or wine (i.e. a product “hired” to cope with the past).

Chris’s talk helps us to think more strategically about our design journey. To develop truly new and innovative products and services, don’t just take your users' feedback at face value. Look beyond what they’re telling you and try to see the jobs that they’re really trying to accomplish. 


Why information architecture is important for designers

Inside any beautifully crafted and designed digital product, there must be a fully functional, considered information architecture.

Just as information architecture shouldn’t be developed in a vacuum, neither should the design and look of digital products. In fact, a large proportion of a digital designer's work is devoted to helping users locate the content they need and guiding them toward the content that product owners want them to find.

This might mean incorporating visual markers to make certain content distinct from the rest, or creating layers that showcase the diverse content within a product.

If you do not have quality content, it is impossible to design a quality digital product. It all comes back to creating a user experience that makes sense and is designed to make task completion simple. And this relates back to designing the product with the content planned for it in mind.

8 Principles of information architecture, according to Dan Brown 🏗️

As a designer, the more you know about information architecture, the better the products you design will meet your users' requirements and deliver what they need. If you work with an information architect, even better. If you’re still learning about information architecture, the 8 principles according to Dan Brown are a great place to begin.

If you haven’t come across Dan Brown yet, you’ve more than likely come across his 8 principles. Dan Brown is one of the UX world's most prolific experts, with a career that spans most areas of UX design. He’s written three books on the subject and has experience across a multitude of high-profile projects, helping large organizations make the most of their user experience.

  1. The principle of objects: Content should be treated as a living, breathing thing. It has lifecycles, behaviors, and attributes.
  2. The principle of choices: Less is more. Keep the number of choices to a minimum.
  3. The principle of disclosure: Show a preview of information that will help users understand what kind of information is hidden if they dig deeper.
  4. The principle of examples: Show examples of content when describing the content of the categories.
  5. The principle of front doors: Assume that at least 50% of users will use a different entry point than the home page.
  6. The principle of multiple classifications: Offer users several different classification schemes to browse the site’s content.
  7. The principle of focused navigation: Keep navigation simple and never mix different things.
  8. The principle of growth: Assume that the content on the website will grow. Make sure the website is scalable.

It’s highly likely that you’ve already used some, or all, of these IA principles in your designs. Don’t be shy about mastering them, or at the very least becoming familiar with them. They can only help you become a better user experience designer.

Wrap up 🌯

Mastering these 8 principles from IA expert Dan Brown will see you mastering the complex task of information architecture. Understanding IA is key to creating digital designs with a content structure that is functional, logical, and just what your users need to navigate your product. Design without good IA doesn’t work well, just as a content structure without a well-designed interface won’t engage users.


