September 26, 2024
25 min

67 ways to use Optimal for user research

User research and design can be tough in this fast-moving world. Sometimes we can get so wrapped up in what we’re doing, or what we think we’re supposed to be doing, that we don’t take the time to look for other options and other ways to use the tools we already know and love. I’ve compiled this list over the last few days (my brain hurts) by talking to a few customers and a few people around the office. I’m sure it's far from comprehensive. I’ve focused on quick wins and unique examples. I’ll start off with some obvious ones, and we’ll get a little more abstract, or niche, as we go. I hope you get some ideas flying as you read through. Enjoy!

#1 Benchmark your information architecture (IA)

Without a baseline for your information architecture, you can’t easily tell if any changes you make have a positive effect. If you haven’t done so already, benchmark your existing website with Tree testing now. Upload your site structure and get results the same day. Now you’ll have IA scores to beat each month. Easy.

#2 Find out precisely where people get lost

Use the pietree visualization in Tree testing to find out exactly where people are getting lost in your website structure and where they go instead. You can also use First-click testing for this if you’re only interested in the first click, and let’s face it, that is where you’ll get the biggest bang for your buck.

#3 Start at the start

If you’re just not sure where to begin then take a screenshot of your homepage, or any page that you think might have some issues, and get going with First-click testing. Write up a string of things that people might want to do when they find themselves on this page and use these as your tasks. Surprise all your colleagues with a maddening heatmap showing where people actually clicked in response to your tasks. Now you’ll have a better idea of which area of your site to focus a tree test or card sort on for your next step.

#4 A/B test your site structure

Tree testing is great for testing more than one content structure. It’s easy to run two separate Tree testing studies, even more than two. It’ll help you decide which structure you and your team should run with, and it won’t take you long to set them up. Learn more.

#5 Make collaborative design decisions

Use Optimal Sort to get your team involved and let their feedback feed your designs: logos, icons, banners, images, the list goes on. By creating a closed image sort with categories where your team can group designs based on their preferences, you can get some quick feedback to help you figure out where you should focus your efforts.

#6 Do your (market) research

Card sorting is a great UX research technique, but it can also be a fun way to involve your users in some market research. Get a better sense of what your users and customers actually want to see on your website by conducting an image sort of potential products. By providing categories like ‘I would buy this’ and ‘I wouldn’t buy this’ for participants to indicate their preference for each item, you can figure out what types of products appeal to your customers.

#7 Customer satisfaction surveys with Surveys

The thoughts and feelings of your users are always important. A simple survey can help you take a deeper look at your checkout process, a recently launched product or service, or even the packaging your product arrives in. Your options are endless.


#8 Crowdsource content ideas

Whether you’re running a blog or a UX conference, Questions can help you generate content ideas and understand any knowledge gaps that might be out there. Figure out what your users and attendees like to read on your blog, or what they want to hear about at your event, and let this feed into what you offer.

#9 Do some sociological research

Using card sorting for sociological research is a great way to deepen your understanding of how different groups may categorize information. Rather than focusing solely on how your users interact with your product or service, consider broadening your research horizons to understand your audience’s mental models. For example, by looking at how young people group popular social media platforms, you can understand the relationships between them, and identify where your product may fit in the mix.

#10 Create tests to fit in your onboarding process

Onboarding new customers is crucial to keeping them engaged with your product, especially if it involves your users learning how to use it. You can set up a quick study to help your users stay on track with onboarding. For example, say your company provides online email marketing software. You could set up a First-click testing study using a screenshot of your app, with a task asking your participants where they’d click to see the open rates for a particular email that went out.


#11 Quantify the return on investment of UX

Some people, including UX Agony Aunt, define return on UX as time saved, money made, and people engaged. By attaching a value to the time spent completing tasks, or to successful completion of tasks, you can approximate an ROI or at least illustrate the difference between two options.
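To make that concrete, here’s a minimal sketch of the arithmetic. The task times, task volume, and hourly rate below are hypothetical placeholders, not benchmarks, so treat the output as an illustration of the approach rather than a real ROI figure.

```python
# Rough, hypothetical ROI sketch: value of time saved after an IA change.
# All figures below are made-up assumptions for illustration only.

old_task_time_s = 95          # avg seconds to complete the task before the redesign
new_task_time_s = 60          # avg seconds after the redesign
tasks_per_year = 120_000      # how often the task is performed across all users
hourly_cost = 35.0            # assumed value of an hour of user/support time

seconds_saved = (old_task_time_s - new_task_time_s) * tasks_per_year
hours_saved = seconds_saved / 3600
estimated_annual_value = hours_saved * hourly_cost

print(f"Hours saved per year: {hours_saved:,.0f}")
print(f"Estimated annual value: ${estimated_annual_value:,.0f}")
```

Even if the inputs are rough, putting the two design options side by side in this way makes the difference between them much easier to communicate.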


#12 Collate all your user testing notes using Qualitative Insights

Making sense of your notes from qualitative research activities can be simultaneously exciting and overwhelming. It’s fun being out in the field and jotting down observations on a notepad, or sitting in on user interviews and documenting observations in a spreadsheet. You can now easily import all your user research and give it some traceability.


#13 Establish which tags or filters people consider to be the most important

Create a card sort with your search filters or tags as labels, and have participants rank them according to how important they consider them to be. Analytics can tell you half of the story (where people actually click), while the card sort gives you the other half: a better idea of what people actually think or want.

#14 Reduce content on landing pages to what people access regularly

Before you run an open card sort to generate new category ideas, you can run a closed card sort to find out if you have any redundant content. Say you wanted to simplify the homepage of your intranet. You can ask participants to sort cards (containing homepage links) based on how often they use them. You could compare this card sort data with analytics from your intranet and see if people’s actual behavior and perception are well aligned.

#15 Crowd-source the values you want your team/brand/product to represent

Card sorting is a well-established technique in the ‘company values’ realm, and there are some great resources online to help you and your team brainstorm the values you represent. These ‘in-person’ brainstorm sessions are great, and you can run a remote closed card sort to support your findings. And if you want feedback from more than a small group of people (if your company has, say, more than 15 staff) you can run a remote closed card sort on its own. Use Microsoft’s Reaction Card Method as card inspiration.

#16 Input your learnings and observations from a UX conference with qualitative insights

If you're lucky enough to attend a UX conference, you can now share the experience with your colleagues. You can easily jot down ideas, quotes, and key takeaways in a Reframer project and keep your notes organized by using a new session for each presenter. Bonus: if you’re part of a team, they can watch the live feed rolling into Reframer!


#17 Find out what actions people take across time

Use card sorting to understand when your participants are most likely to perform certain activities over the course of a day, week, or over the space of a year. Create categories that represent time, for example, ‘January to March’, ‘April to June’, ‘July to September’, and ‘October to December’, and ask your participants to sort activities according to the time they are most likely to do them (go on vacation, do their taxes, make big purchases, and so on). While there may be more arduous and more accurate methods for gathering this data, sometimes you need quick insights to help you make the right decisions.


#18 Gather quantitative data on prioritizing project tasks or product features

Closed card sorting can give you data that you might usually gather in team meetings or in Post-its on the wall, or that you might get through support channels. You can model your method on other prioritization techniques, including Eisenhower’s Decision Matrix, for example.

#19 Test your FAQs page with new users

The support and knowledge base within your website can be just as important as any other core area of your website. If your support site has poor navigation and UX, support tickets (and the resources needed to handle them) will no doubt increase. Make sure your online support section is up to scratch. Here’s an article on how to do it quickly.

#20 Figure out if your icons need labels

Figure out if your icons are doing their job by testing whether your users understand them as intended. Upload the icons you currently use, or plan to use, in your interface to First-click testing, and ask your users to identify their meaning using post-task questions.

#21 Give your users some handy quick tools

In some cases, users may use your website with very specific goals in mind. Giving your users access to quick tools as soon as they land on your website is a great way to ensure they are able to get what they need done easily. Look at your analytics for things people do often that take several clicks to find, and check whether they can find your ‘quick tool’ in a single click using First-click testing.

#22 Benchmark the IA of your competition

We all have competitors of some sort, and researchers also need to pay attention to what they get up to. Make your reporting easy by benchmarking their IA and then reviewing it each quarter, so the board and leaders can be wowed. It’s not a perfect comparison, as different users and sites have different flows, but you can also compare your success scores with theirs. It makes your work feel like the Olympics, with some healthy competition going on.

#23 Improve website conversions

Make the marketing team’s day by making a fast improvement to some core conversions on your website. Now, there are loads of ways to improve conversions for a checkout cart or signup form, but using First-click testing to test out ideas before you launch a live A/B test takes mere minutes and gives your B version a confidence boost.

#24 Reduce the bounce rates of certain sections of your website

People jumping off your website and not continuing their experience is something (depending on the landing page) everyone tries to reduce. Metrics like ‘time on site’ and ‘average page views’ show the value your whole website has to offer. Again, there are many different ways to do this, but one big reason people jump off a website is not being able to find what they’re looking for. That’s where our IA toolkit comes in.

#25 Test your website’s IA in different countries

No, you don’t have to spend thousands of dollars to travel to all these countries to test, although that’d be pretty sweet. You can remotely recruit participants from all over the world using our integrated recruitment panel, and start seeing how different cultures, languages, and countries interact with your website.

#26 Run an empathy test (card sort)

Empathy – the ability to understand and share the experience of another person – is central to the design process. An empathy test is another great tool to use in the design phase because it enables you to find out if you are creating the right kind of feelings in your users. Take your design and show it to users. Provide them with a variety of words that could represent the design – for example “minimalistic”, “dynamic”, or “professional” – and ask them to pick out the words they think are best suited to their experience.

#27 Test visual hierarchy with first-click testing

Use first-click testing to understand which elements draw users' attention first on your page. Upload your design and ask participants to click on the most important element, or on whatever catches their eye first. The resulting heatmap will show you if your visual hierarchy is working as intended: are users clicking where you expect them to? This technique helps validate design decisions about sizing, color, positioning, and contrast without needing to build the actual page.

#28 Take Qualitative Insights into the field

Get out of the office or the lab and observe social behaviour in the field. Use Qualitative Insights to input your observations on your field research. Then head back to your office to start making sense of the data in the Theme Builder.

#29 Use heatmaps to get the first impressions of designs

Heatmaps in our First-click testing tool are a great way of getting first impressions of any design. You can see where people clicked (correctly and incorrectly), giving you insights on what works and doesn’t work with your designs. Because it’s so fast to test, you can iterate until your designs start singing.

#30 Multivariate testing

Multivariate testing compares more than two versions of your studies, allowing you to understand which version performs better with your audience. Use multivariate testing with Tree testing and First-click testing to find the right design to focus on and iterate.

#31 Improve your search engine optimization (SEO) with tree testing

Yes, a good IA improves your SEO. Search engines want to know how your users navigate throughout your site. Make sure people can easily find what they’re looking for, and you’ll start to see improvement in your search engine ranking.

#32 Test your mobile information architecture

As more and more people use their smartphones for apps and to browse sites, you need to ensure your mobile design gives your users a great experience. Test the IA of your mobile site to make sure people aren’t getting lost in the mobile version of your site. If you haven’t got a mobile-friendly design yet, now’s the time to start designing it!

#33 Run an Easter egg hunt using the correct areas in first-click testing

Liven up the workday by creating a fun Easter egg hunt in first-click testing. Simply upload a photo (like those really hard “spot the X” photos), set the correct area of your target, then send out your study with participant identifiers enabled. You can also send these out as competitions and have closing rules based on time, number of participants, or both.

#34 Keystroke level modeling

When interface efficiency is important you'll want to measure how much a new design can improve task times. You can actually estimate time saved (or lost) using some well-tested approaches that are based on average human performance for typical computer-based operations like clicking, pointing and typing. Read more about measuring task times without users.
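As a rough illustration of how keystroke-level modeling works, the sketch below sums commonly cited average operator times (from Card, Moran and Newell’s Keystroke-Level Model) for a hypothetical before-and-after task sequence. Treat both the operator values and the task sequences as ballpark assumptions rather than measurements of your interface.

```python
# Minimal Keystroke-Level Model (KLM) sketch.
# Operator times are commonly cited averages (seconds); real values vary by user.
OPERATORS = {
    "K": 0.20,   # press a key or button
    "P": 1.10,   # point to a target with a mouse
    "H": 0.40,   # move hands between keyboard and mouse
    "M": 1.35,   # mental preparation before an action
    "B": 0.10,   # press or release a mouse button
}

def estimate_task_time(sequence):
    """Sum the operator times for a sequence like 'M P B B' (think, point, click)."""
    return sum(OPERATORS[op] for op in sequence.split())

# Hypothetical comparison: the old flow needs two clicks in different menus,
# while the redesigned flow surfaces the action behind a single click.
old_flow = "M P B B M P B B"   # think, point, click; think, point, click again
new_flow = "M P B B"           # think, point, click once

print(f"Old flow: {estimate_task_time(old_flow):.2f} s")
print(f"New flow: {estimate_task_time(new_flow):.2f} s")
```

Multiply the difference by how often the task is performed and you have a quick, defensible estimate of the efficiency gain without running a single participant.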

#35 Prioritize features and get some help for your roadmap

Find out what people think are the most important next steps for your team. Set up a card sort and ask people to categorize items and rank them in descending order of importance or impact on their work. This can also help you gauge their thoughts on potential new features for your site, and for bonus points compare team responses with customer responses.

#36 Tame your blog

Get the tags and categories in your blog under control to make life easier for your readers. Set up a card sort and use all your tags and categories as card labels. Either use your existing ones or test a fresh set of new tags and categories.

#37 Test your home button

Would an icon or text link work better for navigating to your home page? Before you go ahead and make changes to your site, you can find out by setting up a first-click test.

#38 Validate the designs in your head

As a designer, you’ve probably got umpteen designs floating around in your head at any one time. But which of these are really worth pursuing? Figure this out by using The Optimal Workshop Suite to test wireframes of new designs before putting any more work into them.

#39 ‘Buy now’ button shopping cart visibility

If you’re running an e-commerce site, ease of use and a great user experience are crucial. To see if your shopping cart and checkout processes are as good as they can be, run a first-click test.

#40 IA periodic health checks

Raise the visibility of good IA by running periodic IA health checks using Tree testing and reporting the results. Management loves metrics and catching any issues early is good too!

#41 Focus groups with qualitative insights

Thinking of launching a new product, app or website, or seeking opinions on an existing one? Focus groups can provide you with a lot of candid information that may help get your project off the ground. They’re also dangerous because they’re susceptible to groupthink, design by committee, and tunnel vision. Use with caution, but if you do then use with Qualitative Insights! Compare notes and find patterns across sessions. Pay attention to emotional triggers.

#42 Gather opinions with surveys

Whether you want the opinions of your users or from members of your team, you can set up a quick and simple survey using Surveys. It’s super useful for getting opinions on new ideas (consider it almost like a mini-focus group), or even for brainstorming with teammates.

#43 Design a style guide with card sorting

Style guides (for design and content) can take a lot of time and effort to create, especially when you need to get the guide proofed by various people in your company. To speed this up, simply create a card sort to find out what your guide should consist of. Find out the specifics in this article.

#44 Improve your company's CRM system

As your company grows, oftentimes your CRM can become riddled with outdated information and turn into a giant mess, especially if you deal with a lot of customers every day. To help clear this up, you can use card sorting and tree testing to solve navigational issues and get rid of redundant features. Learn more.

#45 Sort your life out

Let your creativity run wild, and get your team or family involved in organizing or prioritizing the things that matter. And the possibilities really are endless. Organize a long list of DIY projects, or ask the broader team how the functional pods should be re-organized. It’s up to you. How can card sorting help you in your work and daily life?

#46 Create an online diary study

Whether it’s a product, app or website, finding out the long-term behaviour and thoughts of your users is important. That’s where diary studies come in. For those new to this concept, diary studies are a longitudinal research method, aimed at collecting insights about a participant’s needs and behaviors. Participants note down activities as they’re using a particular product, app, or website. Add your participants into a qualitative study and allow them to create their diary study with ease.

#47 Source specific data with an online survey

Online survey tools can complement your existing research by sourcing specific information from your participants. For example, if you need to find out more about how your participants use social media, which sites they use, and on which devices, you can do it all through a simple survey questionnaire. Additionally, if you need to identify usage patterns, device preferences or get information on what other products/websites your users are aware of/are using, a questionnaire is the ticket.

#48 Guerrilla testing with First-click testing

For really quick first-click testing, take First-click testing on a tablet, mobile device or laptop to a local coffee shop. Ask people standing in line if they’d like to take part in your super quick test in exchange for a cup of joe. Easy!

#50 Ask post-task questions for tree testing and first-click testing

You can now set specific task-related questions for both Tree testing and First-click testing. This is a great way to dive deeper into the mushy minds of your participants. Check out how to use this new(ish) feature here!

#51 Start testing prototypes

Paper prototypes are great, but what happens when your users are scattered around the globe and you can’t invite them to an in-person test? By scanning (or taking a photo of) your paper prototypes, you can use first-click testing to test them with your users quickly and easily. Read more about our approach here.

#52 Take better notes for sense making

Qualitative research involves a lot of note-taking. So naturally, improving how you take notes will make you better at this method. Reframer is designed to make note-taking easy, but it can still be an art. Learn more.

#53 Make sure you get the user's first-click right

Like most things, read a little, and then it’s all about practice. We’ve found that people who get the first click correct are almost three times as likely to complete a task successfully. Get your first clicks right in tree testing and first-click testing and you’ll start seeing your customers smile.


#54 Run a cat survey. Yep, cats!

We’ve gained some insight into how people intuitively group cats, and so can you (unless you’re a dog person). Honestly, doing something silly can be a useful way to introduce your team to a new method on a Friday afternoon. Remember to distribute the results!


#55 Destroy evil attractors in your tree

Evil attractors are those labels in your IA that attract unjustified clicks across tasks. This usually means the chosen label is ambiguous, or possibly a catch-all phrase like ‘Resources’. Read how to quickly identify evil attractors in the Destinations table of tree test results and how to fix them.

#56 Affinity map using card sorts

We all love our Post-its and sticking things on walls. But sometimes you need something quicker and accessible for people in remote areas. Try out using Card Sorts for a distributed approach to making sense of all the notes. Plus, you can easily import any qualitative insights when creating cards in card sort. Easy.

#57 Preference test with first-click testing

Whether you’re coming up with a new logo design, headline, featured image, or anything, you can preference test it with First-click testing. Create an image that shows the two designs side by side and upload it to First-click testing. From there, you can ask people to click whichever one they prefer!

#58 Add moderated card sort results to your card sort

An excellent way of gathering valuable qualitative insights alongside the results of your remote card sorts is to run a moderated version of the sorts with a smaller group of participants. When you can observe and interact with your participants as they complete the sort, you’ll be able to ask questions and learn more about their mental models and the reasons why they have categorized things in a particular way. Learn more.

#59 Test search box variations with first-click testing

Case study by Viget: “One of the most heavily used features of the website is its keyword search, so we wanted to make absolutely certain that our redesigned search box didn’t make search harder for users to find and use.”

#60 Run an image card sort to organize products into groups

You can add an image to each card, which allows you to understand how your participants might organize and label particular items. This is very useful if you want to organize some retail products and find out how other people would group them given visual cues such as shape, color, and other context.

#61 Test your customers' perceptions of different logo and brand image designs

Understand how customers perceive your brand by creating a closed card sort. Come up with a list of categories, and ask participants to sort images such as logos and other branded images.

#62 Run an open image card sort to classify images into groups based on the emotions they elicit

Are these pictures exhilarating, or terrifying? Are they humorous, or offensive? Relaxing, or boring? Productive, or frantic? Happy memories, or a deep sigh?

#63 Run an image card sort to organize your library

Whether it’s a physical library of books, or a digital drive full of ebooks, you can run a card sort to help organize them in a way that makes sense. Will it be by genre, author name, color or topic? Send out the study to your coworkers to get their input! You can also do this at home for your own personal library, and you can include music/CDs/vinyl records and movies!

#64 HR exercises to determine the motivations of your team

It’s simple to ask your team about their thoughts, feelings, and motivations with a Questions survey. You can choose to leave participant identifiers blank (so responses are anonymous), or you can ask for a name/email address. As a bonus, you can set up a calendar reminder to send out a new survey in the next quarter. Duplicate the survey and send it out again!

#65 Designing physical environments

If your company has a physical environment that your customers visit, you can research new structures using a mixture of tools in The Optimal Workshop Suite. This especially comes in handy if your customers need certain information within the physical environment in order to make decisions. For example, picture a retail store. Are all the signs clear, and do they communicate the right information? Are people overwhelmed by the physical environment?

#66 Use tree testing to refine an interactive phone menu system

Similar to how you’d design an IA, you can create a tree test to design an automated phone system. Whether you’re designing from the ground up, or improving your existing system, you will be able to find out if people are getting lost.


#67 Have your research team categorize and prioritize all these ideas

Before you dig deeper into more of these ideas, ask the rest of the team to help you decide which one to focus on. Let’s not get in the way of your work. Start on your quick wins and log into your account. Here’s a spreadsheet of this list to upload to card sort. Aaaaaaaaaaand that’s a wrap! *Takes out gym towel and wipes sweaty face.*
Got any more suggestions to add to this list? We’d love to hear them in our comments section, and we might even add them to this list.

Related articles


Online card sorting: The comprehensive guide

When it comes to designing and testing in the world of information architecture, it’s hard to beat card sorting. As a usability testing method, card sorting is easy to set up, simple to recruit for and can supply you with a range of useful insights. But there’s a long-standing debate in the world of card sorting, and that’s whether it’s better to run card sorts in person (moderated) or remotely over the internet (unmoderated).

This article should give you some insight into the world of online card sorting. We've included an analysis of the benefits (and the downsides) as well as why people use this approach. Let's take a look!

How an online card sort works

Running a card sort remotely has quickly become a popular option largely because of how time-intensive in-person card sorting is. Instead of needing to bring your participants in for dedicated card sorting sessions, you can simply set up your card sort using an online tool (like our very own OptimalSort) and then wait for the results to roll in.

So what’s involved in a typical online card sort? At a very high level, here’s what’s required. We’re going to assume you’re already set up with an online card sorting tool at this point.

  1. Define the cards: Depending on what you’re testing, add the items (cards) to your study. If you were testing the navigation menu of a hotel website, your cards might be things like “Home”, “Book a room”, “Our facilities” and “Contact us”.
  2. Work out whether to run a closed or open sort: Determine whether you’ll set the groups for participants to sort cards into (closed) or leave it up to them (open). You may also opt for a mix, where you create some categories but leave the option open for participants to create their own.
  3. Recruit your participants: Whether using a participant recruitment service or by recruiting through your own channels, send out invites to your online card sort.
  4. Wait for the data: Once you’ve sent out your invites, all that’s left to do is wait for the data to come in and then analyze the results.

That’s online card sorting in a nutshell – not entirely different from running a card sort in person. If you’re interested in learning about how to interpret your card sorting results, we’ve put together this article on open and hybrid card sorts and this one on closed card sorts.

Why is online card sorting so popular?

Online card sorting has a few distinct advantages over in-person card sorting that help to make it a popular option among information architects and user researchers. There are downsides too (as there are with any remote usability testing option), but we’ll get to those in a moment.

Where remote (unmoderated) card sorting excels:

  • Time savings: Online card sorting is essentially ‘set and forget’, meaning you can set up the study, send out invites to your participants and then sit back and wait for the results to come in. In-person card sorting requires you to moderate each session and collate the data at the end.
  • Easier for participants: It’s not often that researchers are on the other side of the table, but it’s important to consider the participant’s viewpoint. It’s much easier for someone to spend 15 minutes completing your online card sort in their own time instead of trekking across town to your office for an exercise that could take well over an hour.
  • Cheaper: In a similar vein, online card sorting is much cheaper than in-person testing. While it’s true that you may still need to recruit participants, you won’t need to reimburse people for travel expenses.
  • Analytics: Last but certainly not least, online card sorting tools (like OptimalSort) can take much of the analytical burden off you by transforming your data into actionable insights. Other tools will differ, but OptimalSort can generate a similarity matrix, dendrograms and a participant-centric analysis using your study data (see the sketch below for a conceptual illustration).
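For the curious, the idea behind a similarity matrix is simple: for every pair of cards, count the share of participants who placed both cards in the same group. The sketch below shows the concept using the hotel-website cards from the earlier example and made-up sort data; it is a conceptual illustration, not OptimalSort’s actual implementation.

```python
from itertools import combinations

# Hypothetical raw results: each participant's sort, as groups of card labels.
sorts = [
    [{"Home", "Contact us"}, {"Book a room", "Our facilities"}],
    [{"Home"}, {"Book a room", "Our facilities", "Contact us"}],
    [{"Home", "Contact us"}, {"Book a room"}, {"Our facilities"}],
]

cards = sorted({card for sort in sorts for group in sort for card in group})

def same_group(sort, a, b):
    """True if one of this participant's groups contains both cards."""
    return any(a in group and b in group for group in sort)

# Similarity = fraction of participants who grouped the pair together.
similarity = {
    (a, b): sum(same_group(sort, a, b) for sort in sorts) / len(sorts)
    for a, b in combinations(cards, 2)
}

for (a, b), score in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(f"{a} / {b}: {score:.0%}")
```

Pairs with high scores are strong candidates to sit together in your navigation; clustering this matrix is essentially what a dendrogram visualizes.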

Where in-person (moderated) card sorting excels:

  • Qualitative insights: For all intents and purposes, online card sorting is the most effective way to run a card sort. It’s cheaper, faster and easier for you. But, there’s one area where in-person card sorting excels, and that’s qualitative feedback. When you’re sitting directly across the table from your participant you’re far more likely to learn about the why as well as the what. You can ask participants directly why they grouped certain cards together.

Online card sorting: Participant numbers

So that’s online card sorting in a nutshell, as well as some of the reasons why you should actually use this method. But what about participant numbers? Well, there’s no one right answer, but the general rule is that you need more people than you’d typically bring in for a usability test.

This all comes down to the fact that card sorting is what’s known as a generative method, whereas usability testing is an evaluation method. Here’s a little breakdown of what we mean by these terms:

Generative method: There’s no design, and you need to get a sense of how people think about the problem you’re trying to solve. For example, how people would arrange the items that need to go into your website’s navigation. As Nielsen Norman Group explains: “There is great variability in different people's mental models and in the vocabulary they use to describe the same concepts. We must collect data from a fair number of users before we can achieve a stable picture of the users' preferred structure and determine how to accommodate differences among users”.

Evaluation method: There’s already a design, and you basically need to work out whether it’s a good fit for your users. Any major problems are likely to crop up even after testing 5 or so users. For example, you have a wireframe of your website and need to identify any major usability issues.

Basically, because you’ll typically be using card sorting to generate a new design or structure from nothing, you need to sample a larger number of people. If you were testing an existing website structure, you could get by with a smaller group.

Where to from here?

Following on from our discussion of generative versus evaluation methods, you’ve really got a choice of two paths from here if you’re in the midst of a project. For those developing new structures, the best course of action is likely to be a card sort. However, if you’ve got an existing structure that you need to test in order to identify usability problems and possible areas of improvement, you’re likely best to run a tree test. We’ve got some useful information on getting started with a tree test right here on the blog.


Web usability guide

There’s no doubt usability is a key element of all great user experiences, but how do we apply and test usability principles for a website? This article looks at usability principles in web design, how to test them, practical tips for success, and our remote testing tool, Treejack.

A definition of usability for websites 🧐📖

Web usability is defined as the extent to which a website can be used to achieve a specific task or goal by a user. It refers to the quality of the user experience and can be broken down into five key usability principles:

  • Ease of use: How easy is the website to use? How easily are users able to complete their goals and tasks? How much effort is required from the user?
  • Learnability: How easily are users able to complete their goals and tasks the first time they use the website?
  • Efficiency: How quickly can users perform tasks while using your website?
  • User satisfaction: How satisfied are users with the experience the website provides? Is the experience a pleasant one?
  • Impact of errors: Are users making errors when using the website and, if so, how serious are the consequences of those errors? Is the design forgiving enough to make it easy for errors to be corrected?

Why is web usability important? 👀

Aside from the obvious desire to improve the experience for the people who use our websites, web usability is crucial to your website’s survival. If your website is difficult to use, people will simply go somewhere else. In the cases where users do not have the option to go somewhere else, for example government services, poor web usability can lead to serious issues. How do we know if our website is well-designed? We test it with users.

Testing usability: What are the common methods? 🖊️📖✏️📚

There are many ways to evaluate web usability and here are the common methods:

  • Moderated usability testing: Moderated usability testing refers to testing that is conducted in-person with a participant. You might do this in a specialised usability testing lab or perhaps in the user’s contextual environment such as their home or place of business. This method allows you to test just about anything from a low fidelity paper prototype all the way up to an interactive high fidelity prototype that closely resembles the end product.
  • Moderated remote usability testing: Moderated remote usability testing is very similar to the previous method but with one key difference: the facilitator and the participant(s) are not in the same location. The session is still a moderated two-way conversation, just over Skype or via a webinar platform instead of in person. This method is particularly useful if you are short on time or unable to travel to where your users are located, e.g. overseas.
  • Unmoderated remote usability testing: As the name suggests, unmoderated remote usability testing is conducted without a facilitator present. This is usually done online and provides the flexibility for your participants to complete the activity at a time that suits them. There are several remote testing tools available (including our suite of tools), and once a study is launched these tools take care of themselves, collating the results for you and surfacing key findings using powerful visual aids.
  • Guerilla testing: Guerilla testing is a powerful, quick and low-cost way of obtaining user feedback on the usability of your website. Usually conducted in public spaces with large amounts of foot traffic, guerilla testing gets its name from its ‘in the wild’ nature. It is a scaled-back usability testing method that usually only involves a few minutes for each test, but it allows you to reach large numbers of people and has very few costs associated with it.
  • Heuristic evaluation: A heuristic evaluation is conducted by usability experts to assess a website against recognized usability standards and rules of thumb (heuristics). This method evaluates usability without involving the user and works best when done in conjunction with other usability testing methods (e.g. moderated usability testing) to ensure the voice of the user is heard during the design process.
  • Tree testing: Also known as a reverse card sort, tree testing is used to evaluate the findability of information on a website. This method allows you to work backwards through your information architecture and test that thinking against real world scenarios with users.
  • First click testing: Research has found that 87% of users who start out on the right path from the very first click will be able to successfully complete their task, while fewer than half (46%) of those who start down the wrong path will succeed. First click testing is used to evaluate how well a website is supporting users and also provides insights into which design elements are being noticed and which are being ignored.
  • Hallway testing: Hallway testing is a usability testing method used to gain insights from anyone nearby who is unfamiliar with your project. These might be your friends, family or the people who work in another department down the hall from you. It’s similar to guerilla testing, but less ‘wild’. This method works best at picking up issues early in the design process, before moving on to testing a more refined product with your intended audience.

Online usability testing tool: Tree testing 🌲🌳🌿

Treejack is our remote tree testing tool, designed to help you discover exactly where your users are getting lost in the structure of your website. It uses a simplified, text-based version of your website structure, removing distractions such as navigation and visual design and allowing you to test the design at its most basic level.

Like any other tree test, it uses task-based scenarios and includes the opportunity to ask participants pre- and post-study questions that can be used to gain further insights. Tree testing is a useful tool for testing those five key usability principles mentioned earlier, with powerful inbuilt features that do most of the heavy lifting for you. Tree testing records and presents the following for each task:

  • complete details of the pathways followed by each participant
  • the time taken to complete each task
  • first click data
  • the directness of each result
  • visibility on when and where participants skipped a task

Participant paths data in our tree testing tool 🛣️

The level of detail recorded on the pathways followed by your participants makes it easy for you to determine the ease of use, learnability, efficiency and impact of errors of your website. The time taken to complete each task and the directness of each result also provide insights in relation to those four principles and user satisfaction can be measured through the results to your pre and post survey questions.

The first click data brings in the added benefits of first-click testing, and knowing when and where your participants gave up and moved on can help you identify any issues. Another thing tree testing does well is the way it brings all the data for each task together into one comprehensive task overview that puts everything you need to know, at a glance, in one place. In addition to this, tree testing also generates comprehensive pathway maps called pietrees.

Each junction in the pathway is a pie chart showing a statistical breakdown of participant activity at that point in the site structure, including how many participants were on the right track, how many were following an incorrect path, and how many turned around and went back. These beautiful diagrams tell the story of your usability testing and are useful for communicating the results to your stakeholders.

Usability testing tips 🪄

Here are seven practical usability testing tips to get you started:

  • Test early and often: Usability testing isn’t something that only happens at the end of the project. Start your testing as soon as possible and iterate your design based on findings. There are so many different ways to test an idea with users and you have the flexibility to scale it back to suit your needs.
  • Try testing with paper prototypes: Just like there are many usability testing methods, there are also several ways to present your designs to your participants during testing. Fully functioning high-fidelity prototypes are amazing, but they’re not always feasible (especially if you followed the previous tip to test early and often). Paper prototypes work well for usability testing because your participants can draw on them and add their own ideas, and they’re also more likely to feel comfortable providing feedback on work that is less resolved! You could also use paper prototypes to form the basis for collaborative design sessions with your users by showing them your idea and asking them to redesign or design the next page/screen.
  • Run a benchmarking round of testing: Test the current state of the design to understand how your users feel about it. This is especially useful if you are planning to redesign an existing product or service and will save you time in the problem identification stages.
  • Bring stakeholders and clients into the testing process: Hearing how a product or service is performing direct from a user can be quite a powerful experience for a stakeholder or client. If you are running your usability testing in a lab with an observation room, invite them to attend as observers and also include them in your post-session debriefs. They’ll gain feedback straight from the source and you’ll gain an extra pair of eyes and ears in the observation room. If you’re not using a lab or are doing a different type of testing, try to find ways to include them as observers in some way. Also, don’t forget to remind them that as observers they will need to stay silent for the entire session beyond introducing themselves, so as not to influence the participant, unless you’ve allocated time for questions.
  • Make the most of available resources: Given all the usability testing options out there, there’s really no excuse for not testing a design with users. Whether it’s time, money, human resources or all of the above making it difficult for you, there’s always something you can do. Think creatively about ways to engage users in the process and consider combining elements of different methods or scaling down to something like hallway testing or guerilla testing. It is far better to have a less than perfect testing method than to not test at all.
  • Never analyse your findings alone: Always analyse your usability testing results as a team, or with at least one other person. Making sense of the results can be quite a big task and it is easy to miss or forget key insights. Bring the team together and affinity diagram your observations and notes after each usability testing session to ensure everything is captured. You could also use Reframer to record your observations live during each session, because it does most of the analysis work for you by surfacing common themes and patterns as they emerge. Your whole team can use it too, saving you time.
  • Engage your stakeholders by presenting your findings in creative ways: No one reads thirty-page reports anymore. Help your stakeholders and clients feel engaged and included in the process by delivering the usability testing results in an easily digestible format that has a lasting impact. You might create an A4-size one-page summary, or maybe an A0-size wall poster to tell everyone in the office the story of your usability testing, or you could create a short video with snippets taken from your usability testing sessions (with participant permission, of course) to communicate your findings. Remember, you’re also providing an experience for your clients and stakeholders, so make sure your results are as usable as what you just tested.



Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work,I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. Card sorts, Prototyye Test studies, etc)? Thank you! Mary"

Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me a lot about what works, what doesn’t and what leaves me refreshing my results screen eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who may not fit the criteria you’re looking for but you can’t exactly stop them from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive) but I am saying it’s something you can’t control. To help manage this, I like to use the post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions whose answers confirm or exclude participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location-based question after the study. Any participant who says they live somewhere else is getting excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity, as remote research limits your capacity to get those in the moment: you’re not there with them, so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.

Do a practice run

No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to update a new version of an information architecture after a change was made — stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to complete a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting that delivery right that it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet or social media site, where you can share the link and approach content.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I always stick to no more than 10 questions in a remote research study and for card sorts, I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and of course only serve to annoy and frustrate your participants. You need to ensure that you’re balancing your need to gain insights with their time constraints.

As for the accompanying approach content, short and snappy equals happy! Whether it’s an email, website post, social media post, newsletter, carrier pigeon, etc., keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer and, of course, don’t forget to thank them.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need for wheel reinvention and it usually just needs a slight tweak to suit the specific study. This also helps provide participants with a consistent experience and minimizes confusion allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works, turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use them to iterate my template. What are your top tips for creating and sending out remote user research studies? Comment below!

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.