There’s no doubt usability is a key element of all great user experiences, but how do we apply and test usability principles for a website? This article looks at usability principles in web design, how to test them, practical tips for success and a look at our remote testing tool, Treejack.
A definition of usability for websites 🧐📖
Web usability is defined as the extent to which a website can be used to achieve a specific task or goal by a user. It refers to the quality of the user experience and can be broken down into five key usability principles:
Ease of use: How easy is the website to use? How easily are users able to complete their goals and tasks? How much effort is required from the user?
Learnability: How easily are users able to complete their goals and tasks the first time they use the website?
Efficiency: How quickly can users perform tasks while using your website?
User satisfaction: How satisfied are users with the experience the website provides? Is the experience a pleasant one?
Impact of errors: Are users making errors when using the website and if so, how serious are the consequences of those errors? Is the design forgiving enough to make it easy for errors to be corrected?
Why is web usability important? 👀
Aside from the obvious desire to improve the experience for the people who use our websites, web usability is crucial to your website’s survival. If your website is difficult to use, people will simply go somewhere else. In the cases where users do not have the option to go somewhere else, for example government services, poor web usability can lead to serious issues. How do we know if our website is well-designed? We test it with users.
Testing usability: What are the common methods? 🖊️📖✏️📚
There are many ways to evaluate web usability. Here are the most common methods:
Moderated usability testing: Moderated usability testing refers to testing that is conducted in-person with a participant. You might do this in a specialised usability testing lab or perhaps in the user’s contextual environment such as their home or place of business. This method allows you to test just about anything from a low fidelity paper prototype all the way up to an interactive high fidelity prototype that closely resembles the end product.
Moderated remote usability testing: Moderated remote usability testing is very similar to the previous method but with one key difference: the facilitator and the participant/s are not in the same location. The session is still a moderated two-way conversation, just over Skype or via a webinar platform instead of in person. This method is particularly useful if you are short on time or unable to travel to where your users are located, e.g. overseas.
Unmoderated remote usability testing: As the name suggests, unmoderated remote usability testing is conducted without a facilitator present. This is usually done online and provides the flexibility for your participants to complete the activity at a time that suits them. There are several remote testing tools available (including our suite of tools) and once a study is launched these tools take care of everything, collating the results for you and surfacing key findings using powerful visual aids.
Guerilla testing: Guerilla testing is a powerful, quick and low-cost way of obtaining user feedback on the usability of your website. Usually conducted in public spaces with heavy foot traffic, guerilla testing gets its name from its ‘in the wild’ nature. It is a scaled-back usability testing method that usually only involves a few minutes for each test but allows you to reach large numbers of people and has very few costs associated with it.
Heuristic evaluation: A heuristic evaluation is conducted by usability experts to assess a website against recognized usability standards and rules of thumb (heuristics). This method evaluates usability without involving the user and works best when done in conjunction with other usability testing methods, e.g. moderated usability testing, to ensure the voice of the user is heard during the design process.
Tree testing: Also known as a reverse card sort, tree testing is used to evaluate the findability of information on a website. This method allows you to work backwards through your information architecture and test that thinking against real world scenarios with users.
First click testing: Research has found that 87% of users who start out on the right path from the very first click will be able to successfully complete their task, while less than half (46%) who start down the wrong path will succeed. First click testing is used to evaluate how well a website is supporting users and also provides insights into design elements that are being noticed and those that are being ignored.
Hallway testing: Hallway testing is a usability testing method used to gain insights from anyone nearby who is unfamiliar with your project. These might be your friends, family or the people who work in another department down the hall from you. Similar to guerilla testing but less ‘wild’. This method works best at picking up issues early in the design process before moving on to testing a more refined product with your intended audience.
Online usability testing tool: Tree testing 🌲🌳🌿
Treejack is a remote usability testing tool that uses tree testing to help you discover exactly where your users are getting lost in the structure of your website. Treejack uses a simplified text-based version of your website structure, removing distractions such as navigation and visual design and allowing you to test the design at its most basic level.
Like any other tree test, it uses task-based scenarios and includes the opportunity to ask participants pre- and post-study questions that can be used to gain further insights. Treejack is a useful tool for testing those five key usability principles mentioned earlier, with powerful inbuilt features that do most of the heavy lifting for you. Treejack records and presents the following for each task:
complete details of the pathways followed by each participant
the time taken to complete each task
first click data
the directness of each result
visibility on when and where participants skipped a task
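To make these metrics concrete, here is a minimal sketch of how success, directness and first-click agreement could be computed from recorded participant paths. The data structures are hypothetical and purely illustrative; they are not Treejack's actual data format or API.

```python
# Hypothetical participant records for one tree-testing task
# (illustrative only; not Treejack's real data format).
correct_path = ["Home", "Products", "Pricing"]

participants = [
    {"path": ["Home", "Products", "Pricing"], "seconds": 9.2, "skipped": False},
    {"path": ["Home", "Support", "Home", "Products", "Pricing"], "seconds": 21.5, "skipped": False},
    {"path": ["Home", "Support"], "seconds": 14.0, "skipped": True},
]

completed = [p for p in participants if not p["skipped"]]

# Success: the participant ended the task at the correct destination.
successes = [p for p in completed if p["path"][-1] == correct_path[-1]]

# Directness: success with no backtracking (no node visited twice).
direct = [p for p in successes if len(set(p["path"])) == len(p["path"])]

# First click: did the participant's first move match the correct path?
first_click_ok = [p for p in participants if p["path"][:2] == correct_path[:2]]

print(f"Success rate:    {len(successes)}/{len(participants)}")
print(f"Directness:      {len(direct)}/{len(participants)}")
print(f"First click hit: {len(first_click_ok)}/{len(participants)}")
```

The second participant above succeeds but is not direct: they detoured into Support and turned back, which is exactly the kind of behavior the pathway data surfaces.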
Participant paths data in our tree testing tool 🛣️
The level of detail recorded on the pathways followed by your participants makes it easy for you to determine the ease of use, learnability, efficiency and impact of errors of your website. The time taken to complete each task and the directness of each result also provide insights in relation to those four principles and user satisfaction can be measured through the results to your pre and post survey questions.
The first click data brings in the added benefits of first click testing, and knowing when and where your participants gave up and moved on can help you identify any issues. Another thing Treejack does well is the way it brings all the data for each task together into one comprehensive overview that tells you everything you need to know at a glance. In addition to this, Treejack also generates comprehensive pathway maps called pietrees.
Each junction in the pathway is a pie chart showing a statistical breakdown of participant activity at that point in the site structure, including how many were on the right track, how many were following an incorrect path, and how many turned around and went back. These beautiful diagrams tell the story of your usability testing and are useful for communicating the results to your stakeholders.
Practical tips for usability testing success 💡
Test early and often: Usability testing isn’t something that only happens at the end of the project. Start your testing as soon as possible and iterate your design based on findings. There are so many different ways to test an idea with users and you have the flexibility to scale it back to suit your needs.
Try testing with paper prototypes: Just like there are many usability testing methods, there are also several ways to present your designs to your participant during testing. Fully functioning high fidelity prototypes are amazing but they’re not always feasible (especially if you followed the previous tip to test early and often). Paper prototypes work well for usability testing because your participant can draw their own ideas on them, and they’re also more likely to feel comfortable providing feedback on work that is less resolved! You could also use paper prototypes to form the basis for collaborative design sessions with your users by showing them your idea and asking them to redesign or design the next page/screen.
Run a benchmarking round of testing: Test the current state of the design to understand how your users feel about it. This is especially useful if you are planning to redesign an existing product or service and will save you time in the problem identification stages.
Bring stakeholders and clients into the testing process: Hearing how a product or service is performing directly from a user can be quite a powerful experience for a stakeholder or client. If you are running your usability testing in a lab with an observation room, invite them to attend as observers and also include them in your post-session debriefs. They’ll gain feedback straight from the source and you’ll gain an extra pair of eyes and ears in the observation room. If you’re not using a lab or doing a different type of testing, try to find ways to include them as observers in some way. Also, don’t forget to remind them that as observers they will need to stay silent for the entire session beyond introducing themselves so as not to influence the participant, unless you’ve allocated time for questions.
Make the most of available resources: Given all the usability testing options out there, there’s really no excuse for not testing a design with users. Whether it’s time, money, human resources or all of the above making it difficult for you, there’s always something you can do. Think creatively about ways to engage users in the process and consider combining elements of different methods or scaling down to something like hallway testing or guerilla testing. It is far better to have a less than perfect testing method than to not test at all.
Never analyse your findings alone: Always analyse your usability testing results as a team or with at least one other person. Making sense of the results can be quite a big task and it is easy to miss or forget key insights. Bring the team together and affinity diagram your observations and notes after each usability testing session to ensure everything is captured. You could also use Reframer to record your observations live during each session, because it does most of the analysis work for you by surfacing common themes and patterns as they emerge. Your whole team can use it too, saving you time.
Engage your stakeholders by presenting your findings in creative ways: No one reads thirty page reports anymore. Help your stakeholders and clients feel engaged and included in the process by delivering the usability testing results in an easily digestible format that has a lasting impact. You might create an A4 size one page summary, or maybe an A0 size wall poster to tell everyone in the office the story of your usability testing or you could create a short video with snippets taken from your usability testing sessions (with participant permission of course) to communicate your findings. Remember you’re also providing an experience for your clients and stakeholders so make sure your results are as usable as what you just tested.
All the way back in 2014, the web passed a pretty significant milestone: 1 billion websites. Of course, fewer than 200 million of these are actually active as of 2019, but there’s an important underlying point. People love to create. If the current digital age that we live in has taught us anything, it’s that it’s never been as easy to get information and ideas out into the world.
Understandably, this ability has been used – and often misused. Overloaded, convoluted websites are par for the course, with a common tactic for website renewal being to simply update them with a new coat of paint while ignoring the swirling pile of outdated and poorly organized content below.
So what are you supposed to do when trying to address this problem on your own website or digital project? Well, there’s a fairly robust technique called top tasks management. Here, we’ll go over exactly what it is and how you can use it.
Getting to grips with top tasks
Ideally, all websites would be given regular, comprehensive reviews. Old content could be revisited and analyzed to see whether it’s still actually serving a purpose. If not, it could be reworked or just removed entirely. Based on research, content creators could add new content to address user needs. Of course, this is just the ideal. The reality is that there’s never really enough time or resources to manage the growing mass of digital content in this way. The solution is to hone in on what your users actually use your website for and tailor the experience accordingly by looking at top tasks.
What are top tasks? They're basically a small set of tasks (typically around 5, but up to 10 is OK too) that are most important to your users. The thinking goes that if you get these core tasks right, your website will be serving the majority of your users and you’ll be more likely to retain them. Ignore top tasks (and any sort of task analysis), and you’ll likely find users leaving your website to find something else that better fits their needs.
The counter to top tasks is tiny tasks. These are everything on a website that’s not all that important for the people actually using it. Commonly, tiny tasks are driven more by the organization’s needs than those of the users. Typically, the more important a task is to a user, the less information there is to support it. On the other hand, the less important a task is to a user, the more information there is. Tiny tasks stem very much from ‘organization first’ thinking, wherein user needs are placed lower on the list of considerations.
According to Gerry McGovern (who penned an excellent write-up of top tasks on A List Apart), the top tasks model says “Focus on what really matters (the top tasks) and defocus on what matters less (the tiny tasks).”
How to identify top tasks
Figuring out your top tasks is an important step in clearing away the fog and identifying what actually matters to your users. We’ll call this stage of the process task discovery, and these are the steps:
Gather: Work with your organization to gather a list of all customer tasks
Refine: Take this list of tasks to a smaller group of stakeholders and work it down into a shortlist
User feedback: Go out to your users and get a representative sample to vote on them
Finalize: Assemble a table of tasks, ordered from the highest number of votes at the top to the lowest at the bottom
We’ll go into detail on the above steps, explaining the best way of handling each one. Keep in mind that this process isn’t something you’ll be able to complete in a week – it’s more likely a 6 to 8-week project, depending on the size of your website, how large your user base is and the receptiveness of your organization to help out.
Step 1: Gather – Figure out the long list of tasks
The first part of the task discovery process is to get out into the wider organization and discover what your users are actually trying to accomplish on your website or by using your products. It’s all about getting into the minds of your users – trying to see the world through their eyes, effectively.
If you’re struggling to think of places where you might find customer tasks, here are some of the best sources:
Analytics: Take a deep dive into the analytics of your website or product to find out how people are using them. For websites, you’ll want to look at pages with high traffic and common downloads or interactions. The same applies to products – although the data you have access to will depend on the analytics systems in place.
Customer support teams: Your own internal support teams can be a great source of user tasks. Support teams commonly spend all day speaking to users, and as a result, are able to build up a cohesive understanding of the types of tasks users commonly attempt.
Sales teams: Similarly, sales teams are another good source of task data. Sales teams typically deal with people before they become your users, but a part of their job is to understand the problems they’re trying to solve and how your website or product can help.
Direct customer feedback: Check for surveys your organization has run in the past to see whether any task data already exists.
Social media: Head to Twitter, Facebook and LinkedIn to see what people are talking about with regards to your industry. What tasks are being mentioned?
It’s important to note that you need to cast a wide net when gathering task data. You can’t just rely on analytics data. Why? Well, downloads and page visits only reflect what you have, but not what your users might actually be searching for.
As for search, Gerry McGovern explains why it doesn’t actually tell the entire story: “When we worked on the BBC intranet, we found they had a feature called “Top Searches” on their homepage. The problem was that once they published the top searches list, these terms no longer needed to be searched for, so in time a new list of top searches emerged! Similarly, top tasks tend to get bookmarked, so they don’t show up as much in search. And the better the navigation, the more likely the site search is to reflect tiny tasks.”
At the end of the initial task-gathering stage you should be left with around 300 to 500 tasks. Of course, this can scale up or down depending on the size of the website or product.
Step 2: Refine – Create your shortlist
Now that you’ve got your long list of tasks, it’s time to trim it back until you’ve got a shortlist of 100 or fewer. Keep in mind that working through your long list of tasks is going to take some time, so plan for this process to take at least 4 weeks (but likely more).
It’s important to involve stakeholders from across the organization during the shortlist process. Bring in people from support, sales, product, marketing and leadership areas of the organization. In addition to helping you to create a more concise and usable list, the shortlist process helps your stakeholders to think about areas of overlap and where they may need to work together.
When working your list down to something more usable, try and consolidate and simplify. Stay away from product names as well as internal organization and industry jargon. With your tasks, you essentially want to focus on the underlying thing that a user is trying to do. If you were focusing on tasks for a bank, opt for “Transactions” instead of “Digital mobile payments”. Similarly, bring together tasks where possible. “Customer support”, “Help and support” and “Support center” can all be merged.
At a very technical level, it also helps to avoid lengthy tasks. Stick to around 7 to 8 words and try and avoid verbs, using them only when there’s really no other option. You’ll find that your task list becomes quite difficult to navigate when tasks begin with “look”, “find” and “get”. Finally, stay away from specific audiences and demographics. You want to keep your tasks universal.
Step 3: User feedback – Get users to vote
With your shortlist created, it’s time to take it to your users. Using a survey tool like Optimal's Surveys, add in each one of your shortlisted tasks and have users rank 5 tasks on a scale from 1 to 5, with 5 being the most important and 1 being the least important.
If you’re thinking that your users will never take the time to work through such a long list, consider that the very length of the list means they’ll seek out the tasks that matter to them and ignore the ones that don’t.
A section of the customer survey in Questions.
Step 4: Finalize – Analyze your results
Now for the task analysis side of the project. What you want at the end of the user survey is a league table of your entire shortlist of tasks. We’re going to use the example from Cisco’s top tasks project, which has been documented over at A List Apart by Gerry McGovern (who actually ran the project). The entire article is worth a read as it covers the process of running a top tasks project for a large organization.
Here’s what a league table of the top 20 tasks looks like from Cisco:
A league table of the top 20 tasks from Cisco’s top tasks project. Credit: Gerry McGovern.
Here’s the breakdown of the vote for Cisco’s tasks:
3 tasks got the first 25 percent of the vote
6 tasks got 25-50 percent of the vote
14 tasks got 50-75 percent of the vote
44 tasks got 75-100 percent of the vote
While the pattern may seem surprising, it’s actually not unusual. As Gerry explains: “We have done this process over 400 times and the same patterns emerge every single time.”
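The mechanics behind a league table like this can be sketched in a few lines: each task's votes are summed into a score, tasks are sorted by score, and a running cumulative percentage shows how concentrated the vote is at the top. The ballots below are entirely hypothetical, and this is an illustration of the general approach rather than the tooling used in the Cisco study.

```python
from collections import Counter

# Hypothetical ballots: each participant picks 5 tasks and ranks them
# 5 (most important) down to 1 (least important). Illustrative data only.
ballots = [
    {"Download software": 5, "Check product specs": 4, "Troubleshoot": 3,
     "Training": 2, "Pricing": 1},
    {"Download software": 5, "Troubleshoot": 4, "Check product specs": 3,
     "Pricing": 2, "Community forum": 1},
    {"Check product specs": 5, "Download software": 4, "Pricing": 3,
     "Troubleshoot": 2, "Training": 1},
]

# Sum each task's rank points across all ballots.
scores = Counter()
for ballot in ballots:
    scores.update(ballot)

# Print the league table with a running cumulative share of the vote.
total = sum(scores.values())
cumulative = 0
print(f"{'Task':<22}{'Score':>6}{'Cum %':>8}")
for task, score in scores.most_common():
    cumulative += score
    print(f"{task:<22}{score:>6}{100 * cumulative / total:>7.1f}%")
```

With real data, the cumulative column is where the characteristic pattern shows up: a handful of tasks at the top quickly account for the first quarter of the vote, while a long tail of tiny tasks shares the rest.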
Final thoughts
Focusing on top tasks management is really a practice that needs to be conducted on a semi-regular basis. The approach benefits organizations in a multitude of ways, bringing different teams and people together to figure out how to best address why your users are coming to your website and what they actually need from you.
As we explained at the beginning of this article, top tasks is really about clearing away the fog and understanding what really matters. Instead of spreading yourself thin and focusing on a host of tiny tasks, hone in on those top tasks that actually matter to your users.
Understanding how to improve your website
The top tasks approach is an effective way of giving you a clear idea of what you should be focusing on when designing or redesigning your website, but this should really just be one aspect of the work you do.
Utilizing a host of other UX research methods can give you a much more comprehensive idea of what’s working and what’s not. With card sorting, for example, you can learn how your users think the content on your website should be arranged. Then, with this data in hand, you can use tree testing to assemble draft structures of your website and test how people navigate their way through it. You can keep iterating on these structures to ensure you’ve created the most user-friendly navigation.
Take a look at our 101 guides to learn more about card sorting and tree testing, as well as the other user research methods you can use to make solid improvements to your website. If you’d rather just start putting methods into practice using user research tools, take our UX platform for a spin for free here.
Usability experts play an essential role in the user interface design process by evaluating the usability of digital products from a very important perspective - the users! Usability experts utilize various techniques such as heuristic evaluation, usability testing, and user research to gather data on how users interact with digital products and services. This data helps to identify design flaws and areas for improvement, leading to the development of user-friendly and efficient products.
Heuristic evaluation is a usability research technique used to evaluate the user interface design of a digital product based on a set of ‘heuristics’ or ‘usability principles’. These heuristics are derived from a set of established principles of user experience design - attributed to the landmark article “Improving a Human-Computer Dialogue” published by web usability pioneers Jakob Nielsen and Rolf Molich in 1990. The principles focus on the experiential aspects of a user interface.
In this article, we’ll discuss what heuristic evaluation is and how usability experts use the principles to create exceptional design. We’ll also discuss how usability testing works hand-in-hand with heuristic evaluation, and how minimalist design and user control impact user experience. So, let’s dive in!
Understanding Heuristic Evaluation
Heuristic evaluation helps usability experts to examine interface design against tried and tested rules of thumb. To conduct a heuristic evaluation, usability experts typically work through the interface of the digital product and identify any issues or areas for improvement based on these broad rules of thumb, of which there are ten. They broadly cover the key areas of design that impact user experience - not bad for an article published over 30 years ago!
The ten principles are:
Error prevention: Well-functioning error messages are good, but instead of messages, can these problems be removed in the first place? Remove the opportunity for slips and mistakes to occur.
Consistency and standards: Language, terms, and actions used should be consistent to not cause any confusion.
Control and freedom for users: Give your users the freedom and control to undo/redo actions and exit out of situations if needed.
System status visibility: Let your users know what’s going on with the site. Is the page they’re on currently loading, or has it finished loading?
Design and aesthetics: Cut out unnecessary information and clutter to enhance visibility. Keep things in a minimalist style.
Help and documentation: Ensure that information is easy to find for users, isn’t too large and is focused on your users’ tasks.
Recognition, not recall: Make sure that your users don’t have to rely on their memories. Instead, make options, actions and objects visible. Provide instructions for use too.
Provide a match between the system and the real world: Does the system speak the same language and use the same terms as your users? If you use a lot of jargon, make sure that all users can understand by providing an explanation or using other terms that are familiar to them. Also ensure that all your information appears in a logical and natural order.
Flexibility: Is your interface easy to use, and is it flexible for users? Ensure your system can cater to users of all types, from experts to novices.
Help users to recognize, diagnose and recover from errors: Your users should not feel frustrated by any error messages they see. Instead, express errors in plain, jargon-free language they can understand. Make sure the problem is clearly stated and offer a solution for how to fix it.
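In practice, evaluators often log each finding against one of these ten principles together with a severity rating (Nielsen suggests a 0–4 scale, from "not a problem" to "usability catastrophe") so that the most damaging issues get fixed first. Here is a minimal sketch of such a findings log; the checkout-flow issues are invented purely for illustration.

```python
from dataclasses import dataclass

# The ten principles, using this article's wording.
HEURISTICS = [
    "Error prevention", "Consistency and standards",
    "Control and freedom for users", "System status visibility",
    "Design and aesthetics", "Help and documentation",
    "Recognition, not recall", "Match between the system and the real world",
    "Flexibility", "Help users recognize, diagnose and recover from errors",
]

@dataclass
class Finding:
    heuristic: str   # which of the ten principles is violated
    issue: str       # short description of the problem
    severity: int    # 0 = not a problem ... 4 = usability catastrophe

# Hypothetical findings from a walkthrough of an imaginary checkout flow.
findings = [
    Finding("System status visibility", "No loading indicator after 'Pay'", 3),
    Finding("Error prevention", "Free-text date field invites format errors", 2),
    Finding("Control and freedom for users", "No way to edit cart from checkout", 4),
]

# Triage: report the most severe issues first.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[sev {f.severity}] {f.heuristic}: {f.issue}")
```

Structuring findings this way also makes it easy to aggregate results when several evaluators review the same interface independently, which is the usual practice for heuristic evaluation.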
Heuristic evaluation is a cost-effective way to identify usability issues early in the design process (although it can be performed at any stage), leading to faster and more efficient design iterations. It also provides a structured approach to evaluating user interfaces, making it easier to identify usability issues. By providing valuable feedback on overall usability, heuristic evaluation helps to improve user satisfaction and retention.
The Role of Usability Experts in Heuristic Evaluation
Usability experts play a central role in the heuristic evaluation process by providing feedback on the usability of a digital product, identifying any issues or areas for improvement, and suggesting changes to optimize user experience.
One of the primary goals of usability experts during the heuristic evaluation process is to identify and prevent errors in user interface design. They achieve this by applying the principles of error prevention, such as providing clear instructions and warnings, minimizing the cognitive load on users, and reducing the chances of making errors in the first place. For example, they may suggest adding confirmation dialogs for critical actions, ensuring that error messages are clear and concise, and making the navigation intuitive and straightforward.
Usability experts also use user testing to inform their heuristic evaluation. User testing involves gathering data from users interacting with the product or service and observing their behavior and feedback. This data helps to validate the design decisions made during the heuristic evaluation and identify additional usability issues that may have been missed. For example, usability experts may conduct A/B testing to compare the effectiveness of different design variations, gather feedback from user surveys, and conduct user interviews to gain insights into users' needs and preferences.
Conducting user testing with participants who represent actual end users as closely as possible ensures that the product is optimized for its target audience. Check out our tool Reframer, which helps usability experts collaborate and record research observations in one central database.
Minimalist Design and User Control in Heuristic Evaluation
Minimalist design and user control are two key principles that usability experts focus on during the heuristic evaluation process. A minimalist design is one that is clean, simple, and focuses on the essentials, while user control refers to the extent to which users can control their interactions with the product or service.
Minimalist design is important because it allows users to focus on the content and tasks at hand without being distracted by unnecessary elements or clutter. Usability experts evaluate the level of minimalist design in a user interface by assessing the visual hierarchy, the use of white space, the clarity of the content, and the consistency of the design elements. Information architecture (the system and structure you use to organize and label content) has a massive impact here, along with the content itself being concise and meaningful.
Incorporating minimalist design principles into heuristic evaluation can improve the overall user experience by simplifying the design, reducing cognitive load, and making it easier for users to find what they need. Usability experts may incorporate minimalist design by simplifying the navigation and site structure, reducing the number of design elements, and removing any unnecessary content (check out our tool Treejack to conduct site structure, navigation, and categorization research). Consistent color schemes and typography can also help to create a cohesive and unified design.
User control is also critical in a user interface design because it gives users the power to decide how they interact with the product or service. Usability experts evaluate the level of user control by looking at the design of the navigation, the placement of buttons and prompts, the feedback given to users, and the ability to undo actions. Again, usability testing plays an important role in heuristic evaluation by allowing researchers to see how users respond to the level of control provided, and gather feedback on any potential hiccups or roadblocks.
Usability Testing and Heuristic Evaluation
Usability testing and heuristic evaluation are both important components of the user-centered design process, and they complement each other in different ways.
Usability testing involves gathering feedback from users as they interact with a digital product. This feedback can provide valuable insights into how users perceive and use the user interface design, identify any usability issues, and help validate design decisions. Usability testing can be conducted in different forms, such as moderated or unmoderated, remote or in-person, and task-based or exploratory. Check out our usability testing 101 article to learn more.
On the other hand, heuristic evaluation is a method in which usability experts evaluate a product against a set of usability principles. While heuristic evaluation is a useful method to quickly identify usability issues and areas for improvement, it does not involve direct feedback from users.
Usability testing can be used to validate heuristic evaluation findings by providing evidence of how users interact with the product or service. For example, if a usability expert identifies a potential usability issue related to the navigation of a website during heuristic evaluation, usability testing can be used to see if users actually have difficulty finding what they need on the website. In this way, usability testing provides a reality check to the heuristic evaluation and helps ensure that the findings are grounded in actual user behavior.
Usability testing and heuristic evaluation work together in the design process by informing and validating each other. For example, a designer may conduct heuristic evaluation to identify potential usability issues and then use the insights gained to design a new iteration of the product or service. The designer can then use usability testing to validate that the new design has successfully addressed the identified usability issues and improved the user experience. This iterative process of designing, testing, and refining based on feedback from both heuristic evaluation and usability testing leads to a user-centered design that is more likely to meet user needs and expectations.
Conclusion
Heuristic evaluation is a powerful usability research technique that usability experts use to evaluate digital product interfaces based on a set of established principles of user experience design. After all these years, the ten principles of heuristic evaluation still cover the key areas of design that impact user experience, making it easier to identify usability issues early in the design process, leading to faster and more efficient design iterations. Usability experts play a critical role in the heuristic evaluation process by identifying design flaws and areas for improvement, using user testing to validate design decisions, and ensuring that the product is optimized for its intended users.
Minimalist design and user control are two key principles that usability experts focus on during the heuristic evaluation process. A minimalist design is clean, simple, and focuses on the essentials, while user control gives users the freedom and control to undo/redo actions and exit out of situations if needed. By following these principles, usability experts can create an exceptional design that enhances visibility, reduces cognitive load, and provides a positive user experience.
Ultimately, heuristic evaluation is a cost-effective way to identify usability issues at any point in the design process, leading to faster and more efficient design iterations, and improving user satisfaction and retention. How many of the ten heuristic design principles does your digital product satisfy?