October 31, 2024

Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work,I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. Card sorts, Prototyye Test studies, etc)? Thank you! Mary"

Indeed I do! Over the years I’ve created and launched a lot of remote research studies and engaged plenty of participants. That experience has taught me what works, what doesn’t, and what leaves me refreshing my results screen, eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who don’t fit the criteria you’re looking for, but you can’t exactly stop people from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions whose answers confirm whether a participant belongs to a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location-based question after the study. Any participant who says they live somewhere else gets excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity. Remote research limits your capacity to gather those because you’re not there with them, so you can’t just ask; post-study questions help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.

Do a practice run

No matter how careful I am, I always miss something! A typo, a card label in the wrong case, forgetting to swap in the new version of an information architecture after a change was made: stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to completing a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data, and you won’t be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I usually get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and perhaps provide additional information, and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share, such as the contact details of someone I may benefit from connecting with, or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to them. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting that delivery right that it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or send out a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet or social media site, through which you can share the link and approach content.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and, no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I stick to no more than 10 questions in a remote research study, and for card sorts I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and only serve to annoy and frustrate your participants. You need to balance your need for insights with their time constraints.

As for the accompanying approach content, short and snappy equals happy! Whether it’s an email, website post, social media post, newsletter or carrier pigeon, keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, and details of any incentives on offer. And of course, don’t forget to thank them.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed, and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need to reinvent the wheel; they usually just need a slight tweak to suit the specific study. This also provides participants with a consistent experience and minimizes confusion, allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works — turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use those tweaks to iterate on my template.

What are your top tips for creating and sending out remote user research studies? Comment below!

Related articles

Online card sorting: The comprehensive guide

When it comes to designing and testing in the world of information architecture, it’s hard to beat card sorting. As a usability testing method, card sorting is easy to set up, simple to recruit for and can supply you with a range of useful insights. But there’s a long-standing debate in the world of card sorting, and that’s whether it’s better to run card sorts in person (moderated) or remotely over the internet (unmoderated).

This article should give you some insight into the world of online card sorting. We've included an analysis of the benefits (and the downsides) as well as why people use this approach. Let's take a look!

How an online card sort works

Running a card sort remotely has quickly become a popular option, largely because of how time-intensive in-person card sorting is. Instead of needing to bring your participants in for dedicated card sorting sessions, you can simply set up your card sort using an online tool (like our very own OptimalSort) and then wait for the results to roll in.

So what’s involved in a typical online card sort? At a very high level, here’s what’s required. We’re going to assume you’re already set up with an online card sorting tool at this point.

  1. Define the cards: Depending on what you’re testing, add the items (cards) to your study. If you were testing the navigation menu of a hotel website, your cards might be things like “Home”, “Book a room”, “Our facilities” and “Contact us”.
  2. Work out whether to run a closed or open sort: Determine whether you’ll set the groups for participants to sort cards into (closed) or leave it up to them (open). You may also opt for a mix, where you create some categories but leave the option open for participants to create their own.
  3. Recruit your participants: Whether using a participant recruitment service or by recruiting through your own channels, send out invites to your online card sort.
  4. Wait for the data: Once you’ve sent out your invites, all that’s left to do is wait for the data to come in and then analyze the results.

That’s online card sorting in a nutshell – not entirely different from running a card sort in person. If you’re interested in learning about how to interpret your card sorting results, we’ve put together this article on open and hybrid card sorts and this one on closed card sorts.
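If it helps to picture what you end up with, here’s a rough sketch in Python of the decisions captured in steps 1 and 2 above. It’s purely illustrative and not OptimalSort’s actual study format; the card and group names are the hotel example from earlier.

# Hypothetical representation of a card sort study definition (illustration only).
study = {
    "name": "Hotel website navigation",
    # "open" = participants create their own groups, "closed" = you predefine them,
    # "hybrid" = predefined groups plus the option to add new ones.
    "sort_type": "hybrid",
    "cards": ["Home", "Book a room", "Our facilities", "Contact us"],
    "predefined_groups": ["Plan your stay", "About the hotel"],
}

print(f"{study['name']}: {len(study['cards'])} cards, {study['sort_type']} sort")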

Why is online card sorting so popular?

Online card sorting has a few distinct advantages over in-person card sorting that help to make it a popular option among information architects and user researchers. There are downsides too (as there are with any remote usability testing option), but we’ll get to those in a moment.

Where remote (unmoderated) card sorting excels:

  • Time savings: Online card sorting is essentially ‘set and forget’, meaning you can set up the study, send out invites to your participants and then sit back and wait for the results to come in. In-person card sorting requires you to moderate each session and collate the data at the end.
  • Easier for participants: It’s not often that researchers are on the other side of the table, but it’s important to consider the participant’s viewpoint. It’s much easier for someone to spend 15 minutes completing your online card sort in their own time instead of trekking across town to your office for an exercise that could take well over an hour.
  • Cheaper: In a similar vein, online card sorting is much cheaper than in-person testing. While it’s true that you may still need to recruit participants, you won’t need to reimburse people for travel expenses.
  • Analytics: Last but certainly not least, online card sorting tools (like OptimalSort) can take much of the analytical burden off you by transforming your data into actionable insights. Other tools will differ, but OptimalSort can generate a similarity matrix, dendrograms and a participant-centric analysis using your study data.
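To make that last point a little more concrete, here’s a rough sketch of the kind of number-crunching these tools automate: turning raw sorts into a card-by-card similarity matrix and clustering it into a dendrogram. This isn’t OptimalSort’s actual algorithm, just a common co-occurrence approach, and the participant data below is made up.

# Sketch: similarity matrix and dendrogram from (hypothetical) open card sort data.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

cards = ["Home", "Book a room", "Our facilities", "Contact us"]

# Each participant's sort is a list of groups; each group is a list of cards.
sorts = [
    [["Home", "Contact us"], ["Book a room", "Our facilities"]],
    [["Home"], ["Book a room", "Our facilities", "Contact us"]],
]

# Similarity = proportion of participants who placed two cards in the same group.
n = len(cards)
similarity = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a in group:
            for b in group:
                similarity[cards.index(a), cards.index(b)] += 1
similarity /= len(sorts)

# Cluster on distance (1 - similarity) to get the dendrogram structure.
distance = 1 - similarity
np.fill_diagonal(distance, 0)
links = linkage(squareform(distance), method="average")
tree = dendrogram(links, labels=cards, no_plot=True)  # set no_plot=False (with matplotlib) to draw it

print(np.round(similarity, 2))
print(tree["ivl"])  # card order along the dendrogram leaves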

Where in-person (moderated) card sorting excels:

  • Qualitative insights: For all intents and purposes, online card sorting is the most effective way to run a card sort. It’s cheaper, faster and easier for you. But, there’s one area where in-person card sorting excels, and that’s qualitative feedback. When you’re sitting directly across the table from your participant you’re far more likely to learn about the why as well as the what. You can ask participants directly why they grouped certain cards together.

Online card sorting: Participant numbers

So that’s online card sorting in a nutshell, as well as some of the reasons why you should actually use this method. But what about participant numbers? Well, there’s no one right answer, but the general rule is that you need more people than you’d typically bring in for a usability test.

This all comes down to the fact that card sorting is what’s known as a generative method, whereas usability testing is an evaluation method. Here’s a little breakdown of what we mean by these terms:

Generative method: There’s no design, and you need to get a sense of how people think about the problem you’re trying to solve. For example, how people would arrange the items that need to go into your website’s navigation. As Nielsen Norman Group explains: “There is great variability in different people's mental models and in the vocabulary they use to describe the same concepts. We must collect data from a fair number of users before we can achieve a stable picture of the users' preferred structure and determine how to accommodate differences among users”.

Evaluation method: There’s already a design, and you basically need to work out whether it’s a good fit for your users. Any major problems are likely to crop up even after testing 5 or so users. For example, you have a wireframe of your website and need to identify any major usability issues.

Basically, because you’ll typically be using card sorting to generate a new design or structure from nothing, you need to sample a larger number of people. If you were testing an existing website structure, you could get by with a smaller group.

Where to from here?

Following on from our discussion of generative versus evaluation methods, you’ve really got a choice of two paths from here if you’re in the midst of a project. For those developing new structures, the best course of action is likely to be a card sort. However, if you’ve got an existing structure that you need to test in order to identify usability problems and possible areas of improvement, you’re likely best to run a tree test. We’ve got some useful information on getting started with a tree test right here on the blog.


Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we’re working on a product with only 90 internal users, our changes will have flow-on effects for tens of thousands of our members at any given time (from a total of around 3.6 million members).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high-volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job a client’s CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools, reports and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and also how to interact with the different tools and reports, all of which may have been created independently (i.e. work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas quickly into systems. But expert users almost always end up regurgitating the system they're familiar with, as they've been trained by repeated use of systems to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn’t – most staff stick around for years) then you’ll be spending a lot of money.

…and the one I’ve added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code or even the interaction for most of the reports, as this would be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggle six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern- and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms with arbitrary link names, type URLs by hand to get to hidden reports, and generally expend more effort on finding the answer than on comprehending the answer.

Groundwork

The first thing we did was sit with the CS team, watch them work, and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports quickly became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things green (use heaps), orange (use sometimes) and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links but that everyone used a core set. Initially focussing on the core set, we set about understanding the tasks under those links.

The complexity of the job soon became apparent: in a system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are detailed enough for there to be more than one way to achieve the same end, and often it’s not possible to get a definitive answer, only to ‘build a picture’. There’s no one-to-one mapping of task to link. Links were also often arbitrarily named, ‘SQL Lookup’ being one example. The highly trained user base depends on muscle memory to find these links. This meant that when asked something like “What and where is the policing enquiry function?”, many couldn’t tell us what or where it was, but when they needed the report it contained they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We had renamed the links to better describe their associated actions, and of course, the cards weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.

We built and tested two IAs

[Screenshot: pietree results from tree testing]

After card sorting, we created two new IAs, and then customized one of the IAs for each of the three CS teams, giving us IAs to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree test were okay — around 61% — but 'Could try harder'. We saw very little overall difference between the success of the two structures, but definitely some differences in task success. And we also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some ‘wrong’ answers would give part of the picture required. In some cases, so much so that I reclassified those answers as ‘correct’, as they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguity is hard to replicate in a tree test, which wants definitive yes-or-no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see screenshot below), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had presumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.  

What’s clear from the analysis is that although it’s possible to provide definitive answers for a typical site’s IA, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM has proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as ‘correct’, one of the two trees was a clear winner — it had gone from 61% to 69%. The other tree had only improved slightly, from 61% to 63%.
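The reclassification itself is just arithmetic: add the newly accepted destinations to each task’s set of correct answers and recompute the success rate. Here’s a minimal sketch of that step; the task name, destinations and numbers are hypothetical, not our actual results.

# Sketch: recomputing tree test success after reclassifying some answers (hypothetical data).
results = {
    "check_member_financials": {
        "destinations": ["Member summary"] * 6 + ["SQL Lookup"] * 3 + ["Search Members"],
        "correct": {"Member summary"},
    },
}

def success_rate(results, extra_correct=None):
    """Percentage of attempts, across all tasks, that ended at an accepted answer."""
    extra_correct = extra_correct or {}
    hits = total = 0
    for task, data in results.items():
        accepted = data["correct"] | extra_correct.get(task, set())
        hits += sum(dest in accepted for dest in data["destinations"])
        total += len(data["destinations"])
    return 100 * hits / total

print(success_rate(results))                                               # original scoring
print(success_rate(results, {"check_member_financials": {"SQL Lookup"}}))  # after expert review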

There were still elements of our winning structure that performed sub-optimally, though. Generally, the problems were to do with labelling: in some cases we had attempted to disambiguate those ‘SQL Lookup’-type labels but, in the process, confused the team. We were left with a dilemma: go with the new labels and make the system initially harder to use for existing staff but easier to learn for new staff, or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make it better.

This highlighted the importance of carefully structuring tasks in a tree test, particularly in light of the ‘start anywhere, go anywhere’ nature of a CRM. The diffuse but powerful nature of a CRM means that tree test answer options need careful consideration, in order to decide how close to a ‘100% correct’ answer you want to get.

Development work has begun so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages from Trade Me Admin, and continuing to conduct user research, including first-click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.


Usability Experts Unite: The Power of Heuristic Evaluation in User Interface Design

Usability experts play an essential role in the user interface design process by evaluating the usability of digital products from a very important perspective - the users! Usability experts utilize various techniques such as heuristic evaluation, usability testing, and user research to gather data on how users interact with digital products and services. This data helps to identify design flaws and areas for improvement, leading to the development of user-friendly and efficient products.

Heuristic evaluation is a usability research technique used to evaluate the user interface design of a digital product against a set of ‘heuristics’ or ‘usability principles’. These heuristics are derived from a set of established principles of user experience design, attributed to the landmark article “Improving a Human-Computer Dialogue” published by usability pioneers Jakob Nielsen and Rolf Molich in 1990. The principles focus on the experiential aspects of a user interface.

In this article, we’ll discuss what heuristic evaluation is and how usability experts use the principles to create exceptional design. We’ll also discuss how usability testing works hand-in-hand with heuristic evaluation, and how minimalist design and user control impact user experience. So, let’s dive in!

Understanding Heuristic Evaluation


Heuristic evaluation helps usability experts to examine interface design against tried and tested rules of thumb. To conduct a heuristic evaluation, usability experts typically work through the interface of the digital product and identify any issues or areas for improvement based on these broad rules of thumb, of which there are ten. They broadly cover the key areas of design that impact user experience - not bad for an article published over 30 years ago!

The ten principles are:

  1. Error prevention: Well-functioning error messages are good, but instead of messages, can these problems be removed in the first place? Remove the opportunity for slips and mistakes to occur.
  2. Consistency and standards: Language, terms, and actions used should be consistent to not cause any confusion.
  3. Control and freedom for users: Give your users the freedom and control to undo/redo actions and exit out of situations if needed.
  4. System status visibility: Let your users know what’s going on with the site. Is the page they’re on currently loading, or has it finished loading?
  5. Design and aesthetics: Cut out unnecessary information and clutter to enhance visibility. Keep things in a minimalist style.
  6. Help and documentation: Ensure that information is easy to find for users, isn’t too large and is focused on your users’ tasks.
  7. Recognition, not recall: Make sure that your users don’t have to rely on their memories. Instead, make options, actions and objects visible. Provide instructions for use too.
  8. Provide a match between the system and the real world: Does the system speak the same language and use the same terms as your users? If you use a lot of jargon, make sure that all users can understand by providing an explanation or using other terms that are familiar to them. Also ensure that all your information appears in a logical and natural order.
  9. Flexibility: Is your interface easy to use and flexible for your users? Ensure your system can cater to users of all types, from experts to novices.
  10. Help users to recognize, diagnose and recover from errors: Your users should not feel frustrated by any error messages they see. Instead, express errors in plain, jargon-free language they can understand. Make sure the problem is clearly stated and offer a solution for how to fix it.

Heuristic evaluation is a cost-effective way to identify usability issues early in the design process (although it can be performed at any stage), leading to faster and more efficient design iterations. It also provides a structured approach to evaluating user interfaces, making it easier to identify usability issues. By providing valuable feedback on overall usability, heuristic evaluation helps to improve user satisfaction and retention.
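Part of what makes the method structured is simply recording each finding against the heuristic it relates to. Here’s a minimal sketch of what such a findings log could look like; the fields and the example finding are hypothetical rather than a prescribed format.

# Sketch: a simple log of heuristic evaluation findings (illustrative only).
from dataclasses import dataclass

HEURISTICS = [
    "Error prevention",
    "Consistency and standards",
    "Control and freedom for users",
    "System status visibility",
    "Design and aesthetics",
    "Help and documentation",
    "Recognition, not recall",
    "Match between the system and the real world",
    "Flexibility",
    "Help users to recognize, diagnose and recover from errors",
]

@dataclass
class Finding:
    screen: str          # where the issue was observed
    heuristic: str       # which principle it relates to
    issue: str           # what the evaluator saw
    recommendation: str  # suggested improvement

findings = [
    Finding(
        screen="Checkout",
        heuristic="Error prevention",
        issue="No confirmation step before cancelling an order in progress",
        recommendation="Ask for confirmation before destructive actions",
    ),
]

for f in findings:
    assert f.heuristic in HEURISTICS
    print(f"[{f.heuristic}] {f.screen}: {f.issue}")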

The Role of Usability Experts in Heuristic Evaluation

Usability experts play a central role in the heuristic evaluation process by providing feedback on the usability of a digital product, identifying any issues or areas for improvement, and suggesting changes to optimize user experience.

One of the primary goals of usability experts during the heuristic evaluation process is to identify and prevent errors in user interface design. They achieve this by applying the principles of error prevention, such as providing clear instructions and warnings, minimizing the cognitive load on users, and reducing the chances of making errors in the first place. For example, they may suggest adding confirmation dialogs for critical actions, ensuring that error messages are clear and concise, and making the navigation intuitive and straightforward.

Usability experts also use user testing to inform their heuristic evaluation. User testing involves gathering data from users interacting with the product or service and observing their behavior and feedback. This data helps to validate the design decisions made during the heuristic evaluation and identify additional usability issues that may have been missed. For example, usability experts may conduct A/B testing to compare the effectiveness of different design variations, gather feedback from user surveys, and conduct user interviews to gain insights into users' needs and preferences.

Conducting user testing with participants who represent actual end users as closely as possible ensures that the product is optimized for its target audience. Check out our tool Reframer, which helps usability experts collaborate and record research observations in one central database.

Minimalist Design and User Control in Heuristic Evaluation

Minimalist design and user control are two key principles that usability experts focus on during the heuristic evaluation process. A minimalist design is one that is clean, simple, and focuses on the essentials, while user control refers to the extent to which users can control their interactions with the product or service.

Minimalist design is important because it allows users to focus on the content and tasks at hand without being distracted by unnecessary elements or clutter. Usability experts evaluate the level of minimalist design in a user interface by assessing the visual hierarchy, the use of white space, the clarity of the content, and the consistency of the design elements. Information architecture (the system and structure you use to organize and label content) has a massive impact here, along with the content itself being concise and meaningful.

Incorporating minimalist design principles into heuristic evaluation can improve the overall user experience by simplifying the design, reducing cognitive load, and making it easier for users to find what they need. Usability experts may incorporate minimalist design by simplifying the navigation and site structure, reducing the number of design elements, and removing any unnecessary content (check out our tool Treejack to conduct site structure, navigation, and categorization research). Consistent color schemes and typography can also help to create a cohesive and unified design.

User control is also critical in a user interface design because it gives users the power to decide how they interact with the product or service. Usability experts evaluate the level of user control by looking at the design of the navigation, the placement of buttons and prompts, the feedback given to users, and the ability to undo actions. Again, usability testing plays an important role in heuristic evaluation by allowing researchers to see how users respond to the level of control provided, and gather feedback on any potential hiccups or roadblocks.

Usability Testing and Heuristic Evaluation

Usability testing and heuristic evaluation are both important components of the user-centered design process, and they complement each other in different ways.

Usability testing involves gathering feedback from users as they interact with a digital product. This feedback can provide valuable insights into how users perceive and use the user interface design, identify any usability issues, and help validate design decisions. Usability testing can be conducted in different forms, such as moderated or unmoderated, remote or in-person, and task-based or exploratory. Check out our usability testing 101 article to learn more.

On the other hand, heuristic evaluation is a method in which usability experts evaluate a product against a set of usability principles. While heuristic evaluation is a useful method to quickly identify usability issues and areas for improvement, it does not involve direct feedback from users.

Usability testing can be used to validate heuristic evaluation findings by providing evidence of how users interact with the product or service. For example, if a usability expert identifies a potential usability issue related to the navigation of a website during heuristic evaluation, usability testing can be used to see if users actually have difficulty finding what they need on the website. In this way, usability testing provides a reality check to the heuristic evaluation and helps ensure that the findings are grounded in actual user behavior.

Usability testing and heuristic evaluation work together in the design process by informing and validating each other. For example, a designer may conduct heuristic evaluation to identify potential usability issues and then use the insights gained to design a new iteration of the product or service. The designer can then use usability testing to validate that the new design has successfully addressed the identified usability issues and improved the user experience. This iterative process of designing, testing, and refining based on feedback from both heuristic evaluation and usability testing leads to a user-centered design that is more likely to meet user needs and expectations.

Conclusion

Heuristic evaluation is a powerful usability research technique that usability experts use to evaluate digital product interfaces against a set of established principles of user experience design. After all these years, the ten principles still cover the key areas of design that impact user experience, making it easier to identify usability issues early in the design process and enabling faster, more efficient design iterations. Usability experts play a critical role in the heuristic evaluation process by identifying design flaws and areas for improvement, using user testing to validate design decisions, and ensuring that the product is optimized for its intended users.

Minimalist design and user control are two key principles that usability experts focus on during the heuristic evaluation process. A minimalist design is clean, simple, and focuses on the essentials, while user control gives users the freedom and control to undo/redo actions and exit out of situations if needed. By following these principles, usability experts can create an exceptional design that enhances visibility, reduces cognitive load, and provides a positive user experience. 

Ultimately, heuristic evaluation is a cost-effective way to identify usability issues at any point in the design process, leading to faster and more efficient design iterations, and improving user satisfaction and retention. How many of the ten heuristic design principles does your digital product satisfy? 
