The Role of Usability Metrics in User-Centered Design
The term ‘usability’ captures how usable, useful, enjoyable, and intuitive users perceive a website or app to be. By its very nature, usability is somewhat subjective. But what we’re really asking when we talk about usability is how well a website can be used to achieve a specific task or goal. Using this definition, we can analyze usability metrics (standard units of measurement) to understand how well a user experience design is performing.
Usability metrics provide helpful insights before and after any digital product is launched. They help us form a deeper understanding of how we can design with the user front of mind. This user-centered design approach is considered best practice for building effective information architecture and user experiences that help websites, apps, and software meet and exceed users' needs.
In this article, we’ll highlight key usability metrics, how to measure and understand them, and how you can apply them to improve user experience.
Understanding Usability Metrics
Usability metrics aim to understand three core elements of usability, namely: effectiveness, efficiency, and satisfaction. A variety of research techniques offer designers an avenue for quantifying usability. Quantifying usability is key because we want to measure and understand it objectively, rather than making assumptions.
Types of Usability Metrics
There are a few key metrics that we can measure directly if we’re looking to quantify effectiveness, efficiency, and satisfaction. Here are four common examples:
Success rate: Also known as ‘completion rate’, success rate is the percentage of users who successfully complete a given task.
Time-based efficiency: Also known as ‘time on task’, time-based efficiency measures how much time a user needs to complete a certain task.
Number of errors: Sounds like what it is! It measures the average number of errors a user makes while performing a given task.
Post-task satisfaction: Measures a user's general impression or satisfaction after completing (or not completing) a given task.
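To make these concrete, here’s a minimal sketch (with entirely hypothetical participant data) of how the four metrics above might be computed from a set of recorded test sessions:

```python
# Hypothetical usability-test records: one entry per participant for a single task.
sessions = [
    {"user": "p1", "completed": True,  "seconds": 42.0, "errors": 0, "satisfaction": 5},
    {"user": "p2", "completed": True,  "seconds": 65.0, "errors": 2, "satisfaction": 4},
    {"user": "p3", "completed": False, "seconds": 90.0, "errors": 3, "satisfaction": 2},
    {"user": "p4", "completed": True,  "seconds": 38.0, "errors": 1, "satisfaction": 4},
]

n = len(sessions)

# Success rate: percentage of users who completed the task.
success_rate = 100 * sum(s["completed"] for s in sessions) / n

# Time-based efficiency: mean time on task across all attempts.
time_on_task = sum(s["seconds"] for s in sessions) / n

# Number of errors: average errors per user for this task.
avg_errors = sum(s["errors"] for s in sessions) / n

# Post-task satisfaction: mean rating on a 1-5 scale.
satisfaction = sum(s["satisfaction"] for s in sessions) / n

print(f"Success rate: {success_rate:.0f}%")        # 75%
print(f"Avg. time on task: {time_on_task:.1f}s")   # 58.8s
print(f"Avg. errors: {avg_errors:.2f}")            # 1.50
print(f"Avg. satisfaction: {satisfaction:.2f}/5")  # 3.75
```

Depending on how you define your benchmarks, you might also report the spread of these numbers or exclude failed attempts from time on task; the averages above are just the simplest summary.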
How to Collect Usability Metrics
Usability metrics are outputs from research techniques deployed when conducting usability testing. Usability testing in web design, for example, involves assessing how a user interacts with the website by observing (and listening to) users completing defined tasks, such as purchasing a product or signing up for newsletters.
Conducting usability testing and collecting usability metrics usually involves:
Defining a set of tasks that you want to test
Recruitment of test participants
Observing participants (remotely or in-person)
Recording detailed observations
Follow-up satisfaction survey or questionnaire
Tools such as Reframer are helpful for conducting usability tests remotely, and they enable multiple team members to collaborate live, which is extremely handy when trying to record and organize those insightful observations! Using paper prototypes is an inexpensive way to test usability early in the design process.
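If you’re not using a dedicated tool, the observation-recording step can be as simple as a tagged log. Here’s a small, hypothetical sketch of one way to structure it (this is not Reframer’s API, just a homegrown illustration):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Observation:
    task: str
    note: str
    tags: List[str]
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class Session:
    participant: str
    observations: List[Observation] = field(default_factory=list)

    def record(self, task: str, note: str, *tags: str) -> None:
        self.observations.append(Observation(task, note, list(tags)))

# During a moderated test, observers log what they see, tagged for later analysis.
session = Session(participant="P1")
session.record("purchase-product", "Hesitated at the shipping form", "confusion")
session.record("purchase-product", "Clicked the logo expecting 'home'", "navigation", "error")

# Afterwards, filter by tag to pull out all error observations from the session.
errors = [o for o in session.observations if "error" in o.tags]
print(len(errors))  # 1
```

Tagging observations consistently is what lets you roll raw notes up into metrics like error counts later on.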
The Importance of Usability Metrics in User-Centered Design
User-centered design challenges designers to put user needs first. This means in order to deploy user-centered design, you need to understand your user. This is where usability testing and metrics add value to website and app performance; they provide direct, objective insight into user behavior, needs, and frustrations. If your user isn’t getting what they want or expect, they’ll simply leave and look elsewhere.
Usability metrics identify which parts of your design aren’t hitting the mark. Recognizing where users might be having trouble completing certain actions, or where users are regularly making errors, are vital insights when implementing user-centered design. In short, user-centered design relies on data-driven user insight.
But why harp on about usability metrics and user-centered design? Because at the heart of most successful businesses is a well-solved user problem. Take Spotify, for example, which solved the problem of dodgy, pirated digital files being so unreliable. People liked access to free digital music, but they had to battle viruses and fake files to get it. With Spotify, for a small monthly fee, or the cost of listening to a few ads, users get the best of both worlds. The same principle applies to user experience: identify recurring problems, then solve them.
Best Practices for Using Usability Metrics
Usability metrics should be analyzed by design teams of every size. However, there are some things to bear in mind when using usability metrics to inform design decisions:
Defining success: Usability metrics are only valuable if they are measured against clearly defined benchmarks. Many tasks and processes are unique to each business, so use appropriate comparisons and targets, usually in the form of an ‘optimized’ user (a user with high efficiency).
Real user metrics: Be sure to test with participants that represent your final user base. For example, there’s little point in testing your team, who will likely be intimately aware of your business structure, terminology, and internal workflows.
Test early: Usability testing and the resulting usability metrics provide the most impact early in the design process. This usually means testing an early prototype or even a paper prototype. Early testing helps you avoid significant, unforeseen challenges that could be difficult to reverse in your information architecture.
Regular testing: Usability metrics can change over time as user behavior and familiarity with digital products evolve. You should also test and analyze the usability of new feature releases on your website or app.
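As a sketch of the ‘defining success’ point above, here’s one hypothetical way to compare measured metrics against benchmark targets. All numbers are invented; the key detail is that for some metrics higher is better and for others lower is better:

```python
# Hypothetical benchmarks, e.g. derived from an 'optimized' user's run.
benchmarks = {
    "success_rate": 90.0,   # percent; higher is better
    "time_on_task": 45.0,   # seconds; lower is better
    "avg_errors": 1.0,      # per user; lower is better
}

# Hypothetical results from a round of usability testing.
measured = {"success_rate": 75.0, "time_on_task": 58.8, "avg_errors": 1.5}

# Direction of 'better' for each metric.
higher_is_better = {"success_rate": True, "time_on_task": False, "avg_errors": False}

def meets_benchmark(metric: str) -> bool:
    """True if the measured value is at least as good as the target."""
    if higher_is_better[metric]:
        return measured[metric] >= benchmarks[metric]
    return measured[metric] <= benchmarks[metric]

for metric in benchmarks:
    status = "PASS" if meets_benchmark(metric) else "NEEDS WORK"
    print(f"{metric}: {measured[metric]} vs target {benchmarks[metric]} -> {status}")
```

A report like this makes it obvious which parts of the design to revisit, and re-running it after each release tells you whether your changes actually moved the numbers.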
Remember, data analysis is only as good as the data itself. Give yourself the best chance of designing exceptional user experiences by collecting, researching, and analyzing meaningful and accurate usability metrics.
Conclusion
Usability metrics are a guiding light when it comes to user experience. As the old saying goes, “you can’t manage what you can’t measure”. By including usability metrics in your design process, you invite direct user feedback into your product. This is ideal because we want to leave any assumptions or guesswork about user experience at the door.
User-centered design inherently relies on constant user research. Usability metrics such as success rate, time-based efficiency, number of errors, and post-task satisfaction will highlight potential shortcomings in your design. Subsequently, they identify where improvements can be made, and they lay down a benchmark to check whether any resulting updates addressed the issues.
Ready to start collecting and analyzing usability metrics? Check out our guide to planning and running effective usability tests to get a head start!
Ever wondered what the relationship is between information architecture (IA) and UX? Simply put, IA is the foundation of UX. We outline why.
What is Information Architecture? 🛠️
According to Abby Covert, a leader in the field of information architecture, IA is ‘the way we arrange the parts to make sense of the whole.’ This can relate to a website, a retail store or an app. And you could even consider the way a library is sorted to be information architecture. For the purposes of this article, we will focus on digital products (apps or websites).
Well-organized information architecture is fundamentally important to the success of your product. As a designer, knowing what content you are delivering, and how, is fundamental to creating a UX that performs: working with the needs of the organization while meeting the requirements of users in a meaningful and delightful way. Organizing and structuring information so that navigating, searching, and understanding your product is seamless is ultimately what UX design is all about. Arranging the parts to make sense of the whole, you could say.
While design is about creating visual pointers for users to find their way, information architecture can be broken down into 3 main areas to consider when building a great user experience:
Navigation: How people make their way through information
Labels: How information is named and represented
Search: How people will look for information (keywords, categories)
When put like this, it does seem pretty straightforward. Maybe even simple? But these tasks need to be straightforward for your users. Putting thought, time, and research into your design and build up front increases your chances of delivering an intuitive product. In fact, at any point in your product’s life cycle, it’s worth testing and reviewing these 3 areas.
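As a toy illustration of how labels and search work together, here’s a hypothetical sketch of a tiny keyword search over labeled pages (all paths, labels, and keywords are invented):

```python
# Hypothetical pages illustrating the three IA areas:
# navigation (paths), labels (names), and search (keywords).
pages = [
    {"path": "/products/pricing", "label": "Pricing",  "keywords": {"cost", "plans", "pricing"}},
    {"path": "/resources/guides", "label": "Guides",   "keywords": {"how-to", "tutorials", "guides"}},
    {"path": "/about",            "label": "About Us", "keywords": {"company", "team", "about"}},
]

def search(query: str) -> list:
    """Return paths of pages whose label or keywords match the query."""
    q = query.lower()
    return [
        p["path"]
        for p in pages
        if q in p["label"].lower() or q in p["keywords"]
    ]

print(search("pricing"))  # ['/products/pricing']
print(search("team"))     # ['/about']
```

The point of the sketch: if your labels and keywords don’t match the words users actually type, search returns nothing, no matter how good the navigation is. That’s why labeling research (like card sorting) matters.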
Key things to consider to build an effective IA for UX 🏗
Developing a well-thought-out and researched information architecture for your product could be considered a foundation step to creating a great UX product. To help you on your way, here are 6 key things to consider when building effective information architecture for a great user experience.
Define the goals of your organization: Before starting your IA plan, uncover the purpose of your product and how it will align with the goals of your stakeholders.
Figure out your user’s goals: Who do you want to use your product? Create scenarios, talk with probable users, and find out what they’ll use your product for and how they’ll use it.
Study your competitors: Take note of websites, apps, and other digital products that are similar to yours, and look at their information architecture from a UX point of view. How does the design work with the IA? Is it simple to navigate? Easy to find what to do next? Look at how key information is designed and displayed.
Draw a site map: Once the IA is planned and the content is ready, it’s time to figure out how users are going to access all of your information. Spend time planning navigation that is simple enough to help users browse your product easily.
Cross-browser testing: Your information architecture’s behavior may vary from one browser to another, so it’s worth doing some cross-browser compatibility testing. It would be very disappointing to work so hard to get the best UX for your product, only to be let down by browser variances.
Usability testing: End users are the perfect people to let you know how your product is performing. Set up a testing/staging environment and test with external users. Observing participants as they move through your product uninterrupted, and listening to their opinions, can shed light on the successes (and failures) in a very insightful way.
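The site map step above can be sketched in code, too. Here’s a hypothetical site map as a nested structure, with a quick check on navigation depth (a rough proxy for the ‘not too complex’ test):

```python
# A hypothetical site map as a nested dict: each key is a page label,
# each value is the set of pages reachable beneath it.
site_map = {
    "Home": {
        "Products": {"Pricing": {}, "Features": {}},
        "Resources": {"Blog": {}, "Guides": {"Usability Testing": {}}},
        "About": {},
    }
}

def max_depth(tree: dict) -> int:
    """Depth of the deepest page in the hierarchy."""
    if not tree:
        return 0
    return 1 + max(max_depth(children) for children in tree.values())

# A common rule of thumb is to keep key content within a few clicks of home.
depth = max_depth(site_map)
print(depth)  # 4  (Home > Resources > Guides > Usability Testing)
```

Sketching the hierarchy like this, even on paper, makes it easy to spot sections that are buried too deep before any screens are designed.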
Wrap Up 🌯
Information architecture is the foundation of designing a great product that meets (or even exceeds) your users’ needs, wants, and desires. By balancing an organization’s needs with insight into what users actually want, you’re well equipped to design an information architecture that helps build a product that delivers a positive user experience. Research, test, research, and test again should be the mantra throughout the development, design, and implementation of your product and beyond.
Recently, everyone in the design industry has been talking about design sprints. So, naturally, the team at Optimal Workshop wanted to see what all the fuss was about. I picked up a copy of The Sprint Book and suggested to the team that we try out the technique.
In order to keep momentum, we identified a current problem and decided to run the sprint only two weeks later. The short notice was a bit of a challenge, but in the end we made it work. Here’s a run down of how things went, what worked, what didn’t, and lessons learned.
A sprint is an intensive, focused period of time in which a product or feature is designed and tested, with the goal of knowing whether or not the team should keep investing in the development of the idea. The idea needs to be either validated or invalidated by the end of the sprint. In turn, this saves time and resources further down the track by enabling the team to pivot early if the idea doesn’t float.
If you’re following The Sprint Book, you might have a structured 5-day plan that looks like this:
Day 1 - Understand: Discover the business opportunity, the audience, the competition, the value proposition and define metrics of success.
Day 2 - Diverge: Explore, develop and iterate creative ways of solving the problem, regardless of feasibility.
Day 3 - Converge: Identify ideas that fit the next product cycle and explore them in further detail through storyboarding.
Day 4 - Prototype: Design and prepare prototype(s) that can be tested with people.
Day 5 - Test: User testing with the product's primary target audience.
With a Design Sprint, a product doesn't need to go full cycle to learn about the opportunities and gather feedback.
When you’re running a design sprint, it’s important that you have the right people in the room. It’s all about focus and working fast; you need the right people around in order to do this and avoid any blocks down the path. Team, stakeholder, and expert buy-in is key — this is not a task just for a design team!
After getting buy-in and picking out the people who should be involved (developers, designers, product owner, customer success rep, marketing rep, user researcher), these were my next steps:
Pre-sprint
Read the book
Panic
Send out invites
Write the agenda
Book a meeting room
Organize food and coffee
Get supplies (Post-its, paper, Sharpies, laptops, chargers, cameras)
Some fresh smoothies for the sprinters made by our juice technician
The sprint
Due to scheduling issues we had to split the sprint over the end of the week and the weekend. Sprint guidelines suggest holding it over Monday to Friday, which is a nice block of time, but we had to do Thursday to Thursday, with the weekend off in between — and that actually worked really well. We are all self-confessed introverts and, to be honest, the thought of spending five solid days workshopping was daunting. About two days in, we were exhausted. We went away for the weekend and came back on Monday feeling sociable and recharged, ready to examine the work we’d done in the first two days with fresh eyes.
Design sprint activities
During our sprint we completed a range of different activities, but here’s a list of some that worked well for us. You can find out more about how to run most of these over at The Sprint Book website, or check out some great resources at Design Sprint Kit.
Lightning talks
We kicked off our sprint by having each person give a quick five-minute talk on one of the topics in the list below. This gave us all an overview of the whole project, and since we each had to present, we each became the expert in our area and engaged with the topic (rather than just listening to one person deliver all the information).
Our lightning talk topics included:
Product history - where have we come from so the whole group has an understanding of who we are and why we’ve made the things we’ve made.
Vision and business goals - (from the product owner or CEO) a look ahead not just of the tools we provide but where we want the business to go in the future.
User feedback - what have users been saying so far about the idea we’ve chosen for our sprint. This information is collected by our User Research and Customer Success teams.
Technical review - an overview of our tech and anything we should be aware of (or a look at possible available tech). This is a good chance to get an engineering lead in to share technical opportunities.
Comparative research - what else is out there, how have other teams or products addressed this problem space?
Empathy exercise
I asked the sprinters to participate in an exercise so that we could gain empathy for those who are using our tools. The task was to pretend we were one of our customers who had to present a dendrogram to some of our team members who are not involved in product development or user research. In this frame of mind, we had to talk through how we might start to draw conclusions from the data presented to the stakeholders. We all gained more empathy for what it’s like to be a researcher trying to use the graphs in our tools to gain insights.
How Might We
In the beginning, it’s important to be open to all ideas. One way we did this was to phrase questions in the format: “How might we…” At this stage (day two) we weren’t trying to come up with solutions — we were trying to work out what problems there were to solve. ‘We’ is a reminder that this is a team effort, and ‘might’ reminds us that it’s just one suggestion that may or may not work (and that’s OK). These questions then get voted on and moved into a workshop for generating ideas (see Crazy 8s). You can read more detailed instructions on how to run a ‘How might we’ session on the Design Sprint Kit website.
Crazy 8s
This activity is a super quick-fire idea generation technique. The gist of it is that each person gets a piece of paper folded into eight sections and has 8 minutes to come up with eight ideas (really rough sketches). When time is up, it’s pens down and the rest of the team gets to review each other's ideas. In our sprint, we gave each person Post-it notes and paper, and set the timer for 8 minutes. At the end of the activity, we put all the sketches on a wall (this is where the art gallery exercise comes in).
Mila our data scientist sketching intensely during Crazy 8s
A close up of some sketches from the team
Art gallery/Silent critique
The art gallery is the place where all the sketches go. We give everyone dot stickers so they can vote and pull out key ideas from each sketch. This is done silently, as the ideas should be understood without needing explanation from the person who made them. At the end of it you’ve got a kind of heat map, and you can see the ideas that stand out the most. After this first round of voting, the authors of the sketches get to talk through their ideas, then another round of voting begins.
Mila putting some sticky dots on some sketches
Bowie, our head of security, even took part in the sprint...kind of
Usability testing and validation
The key part of a design sprint is validation. For one of our sprints we had two parts of our concept that needed validating. To test one part we conducted simple user tests with other members of Optimal Workshop (the feature was an internal tool). For the second part we needed to validate whether we had the data to continue with this project, so we had our data scientist run some numbers and predictions for us.
Our remote worker Rebecca dialed in to watch one of our user tests live
Actual feedback
Challenges and outcomes
One of our key team members, Rebecca, was working remotely during the sprint. To make things easier for her, we set up 2 cameras: one pointed to the whiteboard, the other was focused on the rest of the sprint team sitting at the table. Next to that, we set up a monitor so we could see Rebecca.
Engaging in workshop activities is a lot harder when working remotely. Rebecca got around this by completing the activities on her own and taking photos to send to us.
Lightning talks are a great way to have each person contribute up front and feel invested in the process.
Sprints are energy intensive. Make sure you’re in a good space with plenty of fresh air, comfortable chairs, and a breakout area. We like to split the five days up so that we get a weekend break.
Give people plenty of notice to clear their schedules. Asking busy people to take five days from their schedule might not go down too well. Make sure they know why you’d like them there and what they should expect from the week. Send them an outline of the agenda. Ideally, have a chat in person and get them excited to be part of it.
Invite the right people. It’s important that you get the right kind of people from different parts of the company involved in your sprint. The role they play in day-to-day work doesn’t matter too much for this. We’re all mainly using pens and paper and the more types of brains in the room the better. Looking back, what we really needed on our team was a customer support team member. They have the experience and knowledge about our customers that we don’t have.
Choose the right sprint problem. The project we chose for our first sprint wasn’t really suited to a design sprint. We went in with a well-defined problem and a suggested solution from the team, instead of a project that needed fresh ideas. This made activities like ‘How Might We’ seem redundant. The challenge we decided to tackle ended up being more of a data prototype (spreadsheets!). We used the week to validate assumptions about how we can better use data and how we can write a script to automate some internal processes. We got the prototype working and tested, but due to the nature of the project we will have to run this experiment in the background for a few months before any building happens.
Overall, this design sprint was a great team bonding experience and we felt pleased with what we achieved in such a short amount of time. Naturally, here at Optimal Workshop, we're experimenters at heart and we will keep exploring new ways to work across teams and find a good middle ground.