August 8, 2022
4 min

Usability Testing Guide: What It Is, How to Run It, and When to Use Each Method

Knowing why and how your users use your product can be invaluable for getting to the nitty gritty of usability: where they get stuck and where they fly through. Delving deep into motivation with probing questions, or skimming over a design looking for issues, can be equally informative.

Usability testing can be done in several ways, and each has its benefits. Put simply, usability testing is testing how usable your product is for your users. If your product isn't usable, users won't stick around or complete their tasks, let alone come back for more.

What is usability testing?

Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.

These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.

We'll go into the three main ways of categorizing usability testing:

  1. Moderated and unmoderated
  2. Remote or in person
  3. Explorative, assessment or comparative

1. Moderated or unmoderated usability testing

Moderated usability testing

Moderated usability testing is done in person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, it can be expensive to organize and run.

Top tip: Use moderated testing to investigate the reasoning behind user behavior.

Unmoderated usability testing

Unmoderated usability testing is done without direct supervision; participants are likely in their own homes and/or using their own devices to browse the website being tested, often at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions can be difficult.

Top tip: Use unmoderated testing to test a very specific question or observe and measure behavior patterns.

2. Remote or in-person usability testing

Remote usability testing


Remote usability testing is done over the internet or by phone, allowing participants the time and space to work in their own environment and at their own pace. However, this doesn’t give the researcher much in the way of contextual data, because you’re unable to ask questions about intention or probe deeper when the participant makes a particular decision. Remote testing doesn’t go as deep into a participant’s reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.

Top tip: Use remote testing when a large group of participants is needed and the questions asked can be direct and unambiguous.

In-person usability testing


In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.

Top tip: In-person testing gives researchers more time and insight into motivation for decisions.

3. Explorative, assessment or comparative testing

These three usability testing methods generate different types of information:

Explorative testing


Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.

Assessment research


Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.

Comparative research


Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.

Top tip: Choose between these methods based on what research is being done and how much qualitative or quantitative data you want.

Which method is right for you?

Whether the testing is done in-person, remote, moderated or unmoderated will depend on your purpose, what you want out of the testing, and to some extent your budget. 

Depending on what you are testing, each of the usability testing methods we explored here can offer an answer. If you are at the development stage of a product, it can be useful to conduct a usability test on the entire product, checking the intuitive usability of your website to ensure users can make the best decisions quickly. Adding, changing or upgrading a product can also be the moment to check on a specific question around usability. Planning and understanding your objectives are key to selecting the right usability testing option for your project.

Let's take a look at a couple of examples of usability testing.

1. Lab-based, in-person moderated testing - mid-life website

Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a brick-and-mortar store might. You’ve noticed a drop in sales in certain areas. How do you find out what is going wrong, or where users are getting lost? With an in-person, moderated usability test in a lab (or other controlled environment), you can set tasks for users and watch (and record) what they do.

The researcher can literally be standing or sitting next to the participant throughout, recording contextual information such as how they interacted with the mouse, laptop or even the seat. Watching for cues as to the comfort of the participant and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants, but couldn’t find the ‘yoga’ section which was listed under gym rather than a clothing section.

This means you can look at how your stock is organised, or even investigate undertaking a card sort. It provides robust and fully rounded feedback on users’ behaviours, expectations and experiences, giving you data that can be turned directly into actionable directives when redeveloping the website.

2. Remote, moderated assessment testing - app product development

You are looking at launching an app for parents to access information and updates from their school. It’s still in the development stage, and at this point you want to know how easy the app is to use. By setting some very specific tasks for participants to complete, the app can be sent to them and they can be left to complete the tasks (or not), providing feedback and comments around its usability.

The next step may be to use first-click testing to see how and where the interface is clicked, and where participants may be spending time or becoming lost. Whilst the feedback and data gathered from this testing can be light, it will be very direct to the questions asked, and will provide data to back up (or possibly disprove) the assumptions that were made.

3. Moderated, in-person, explorative testing - new product development

You’re right at the start of the development process. The idea is new and fresh and the basics are being considered. What better way to get an understanding of what your users truly want than an explorative study?

Open-ended questions with participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market but none as exciting as what your team has dreamt up. By putting it (and possibly the competitors) to participants they can give direct feedback on what they like, love and loathe.

They can also help brainstorm ideas for better ways to make the app work, or improve the interface. All of this can be done before money is sunk into development.

Usability testing summary: When to use each method (and why)

Key objectives will dictate which usability testing method will deliver the answers to your questions.

Whether it’s in-person, remote, moderated or unmoderated, with a bit of planning you can gather data around your users’ very real experience of your product: identify issues, successes and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.


Related articles


Mixed methods research in 2021

User experience research is super important to developing a product that truly engages, compels and energises people. We all want a website that is easy to navigate, simple to follow and compels our users to finish their tasks. Or an app that supports and drives engagement.

We’ve talked a lot about the various types of research tools that help improve these outcomes. 

There is a rising research trend in 2021.

Mixed method research - what is more compelling than these quantitative user research tools? Combining them with awesome qualitative research! Asking the same questions in various ways can provide deeper insights into how our users think and operate, empowering you to develop products that truly talk to your users, answer their queries, or even address their frustrations.

Though it isn’t enough to simply ‘do research’; as with anything, you need to approach it with strategy, focus and direction. This will funnel your time, money and energy into areas that will generate the best results.

Mixed Method UX research is the research trend of 2021

With the likes of Facebook, Amazon, Etsy, eBay, Ford and many more big organizations offering newly formed job openings for mixed methods researchers it becomes very obvious where the research trend is heading.

It’s not only good to have, but now becoming imperative, to gather data, dive deeper and generate insights that provide more information on our users than ever before. And you don't need to be Facebook to reap the benefits. Mixed method research can be implemented across the board and can be as narrow as finding out how your homepage is performing through to analysing in depth the entirety of your product design.

And with all of these massive organizations making the move to increase their data collection and research teams, why wouldn’t you?

The value in mixed method research is profound. Imagine understanding what, where, how and why your customers would want to use your service. And catering directly for them. The more we understand our customers, the deeper the relationship and the more likely we are to keep them engaged.

Diving deep into the reasons our users like (or don’t like) how our products operate can also drive your organization to target customers and operate at a higher level: gearing your energies to attracting and keeping the right type of customer, and providing the right level of service and aftercare, potentially even reducing overheads by not delivering beyond what customers expect.

What is mixed method research?

Mixed methods research isn’t overly complicated, and doesn’t take years to master. It is simply a term for using a combination of quantitative and qualitative data. This may mean using a research tool such as card sorting alongside interviews with users.

Quantitative research is the tangible numbers and metrics that can be gathered through user research such as card sorting or tree testing.

Qualitative research is research around users’ behaviour and experiences. This can be through usability tests, interviews or surveys.

For instance, you may be asking ‘how should I order the products on my site?’. With card sorting you can get the data insights that will inform how a user would like to see the products sorted. Coupled with interviews, you will get the why.

Understanding the thinking behind the order - why one user likes to see gym shorts listed under shorts while another would like to see them under active wear - matters. A deeper understanding of how and why users decide content should be sorted will help you create a highly intuitive website.

Another great reason for mixed method research would be to back up data insights for stakeholders. With a depth and breadth of qualitative and quantitative research informing decisions, it becomes clearer why changes may need to be made, or product designs need to be challenged.

How to do mixed method research

Take a look at our article for more examples of the uses of mixed method research. 

Simply put, mixed method research means coupling quantitative research, such as tree testing, card sorting or first-click testing, with qualitative research such as surveys, interviews or diary studies.

Say, for instance, the product manager has identified an issue with keeping users engaged on the homepage of your website. We would start by asking where they get stuck and when they are leaving.

This can be done using a first-click tool, such as Chalkmark, which will map where users head when they land on your homepage and beyond. 

This will give you the initial quantitative data. However, it may only give you part of the picture. Couple it with qualitative data, such as watching (and reporting on) body language, or conduct interviews with users directly after their experience so you can understand why they found the process confusing or misleading.

A fuller picture means a better understanding.

The key is to identify what your question is and home in on it through both methods. Ultimately, you are answering your question from both sides of the coin.

Upcoming research trends to watch

Keeping an eye on the progression of the mixed method research trend means keeping an eye on these:

1. Integrated Surveys

Rather than thinking of user surveys as a one-time, in-person event, we’re seeing surveys implemented more and more often through social media, on websites and through email. This means data can be gathered frequently and across the board. This longitudinal data allows organizations to continuously analyse, interpret and improve products without ever really stopping.

Rather than relying on users’ memories of events and experiences, data can be gathered in the moment - at the time of purchase or interaction - increasing the reliability and quality of the data collected.

2. Return to social research

Customer research is rooted in the focus group: a collection of participants in one space, allowing them to voice their opinions and reach insights collectively. This used to be an overwhelming task, taking days or even weeks to analyse unstructured forums and group discussions.

However, now with the advent of online research tools this can also be a way to round out mixed method research.

3. Co-creation

Co-creation is the ability to use your customers’ input to build better products. It has long been thought a way to increase innovative development, but until recently it has been cumbersome and difficult to wrangle more than a few participants. Now there are a number of resources in development that could make co-creation the buzzword of the decade.

4. Owned Panels & Community

Beyond community engagement in the social sphere, there is a massive opportunity to involve these engaged users in product development. Through a trusted forum, users are far more likely to actively and willingly participate in research, providing insights into the community that will drive stronger product outcomes.

What does this all mean for me?

So, there is a lot to keep in mind when conducting any effective user research. And there are a lot of very compelling reasons to do mixed method research and do it regularly. 

To remain innovative and ahead of the curve, it is important to stay engaged with your users and their needs. Using qualitative and quantitative research to inform product decisions means you can operate with a fuller picture.

One of the biggest challenges with user research can be the coordination and participant recruitment. That’s where we come in.

We take the pain out of the process and streamline your research. Take a look at our qualitative research option, Reframer, for an insight into how we can help make your mixed method research easier and analyse your data efficiently, in a format that is easy to understand.

User research doesn’t need to take weeks or months. With our participant recruitment we can provide reliable and quality participants across the board that will provide data you can rely on.

Why not dive deeper into mixed method research today?


Using User Engagement Metrics to Improve Your Website's User Experience

Are your users engaged with your website? The success of your website will largely depend on your answer. After all, engaged users are valuable users; they keep coming back and will recommend your site to colleagues, friends, and family. So, if you’re not sure if your users are engaged or not, consider looking into your user engagement metrics.

User engagement can be measured using a number of key metrics provided by website analytics platforms. Metrics such as bounce rate, time on page, and click-through rate all provide clues to user engagement and therefore overall website user experience.

This article will help you understand user engagement and why it’s important to measure. We’ll also discuss how to apply user engagement insights to improve website success. Combining a little bit of data with some user research is a powerful thing, so let’s get into it.

Understanding User Engagement Metrics 📐

User engagement metrics provide valuable insight for both new and existing websites. They should be checked regularly as a sort of ‘pulse check’ for website user experience and performance. So, what metrics should you be looking at? Website metrics can be overwhelming; there are hundreds if not thousands to analyze, so let’s focus on four:

Bounce rate


Measures the percentage of users that visit just one page on your site before leaving. If your bounce rate is high it suggests that users aren’t finding the content relevant, engaging, or useful. It points to a poor initial reaction to your site and means that users are arriving, making a judgment about your design or content, and then leaving.

Time on page


Calculated by the time difference between the point when a person lands on the page and when they move on to the next one. It indicates how engaging or relevant individual pages on your website are. Low time on page figures suggest that users aren’t getting what they need from a certain page, either in terms of the content, the aesthetics, or both.

Click-through rate


Click-through rate compares the number of times someone clicks on your content, to the number of impressions you get (how many times an internal link or ad was viewed). The higher the rate, the better the engagement and performance of that element. User experience design can influence click-through rates through copywriting, button contrasts, heading structure, navigation, etc.

Conversion rate


Conversion rates are perhaps the pinnacle of user engagement metrics. Conversion rate is the percentage of users that perform specific tasks you define. They are therefore dictated by your goals, which could include form submissions, transactions, etc. If your website has high conversion rates, you can be fairly confident that your website is matching your users’ needs, requirements, and expectations.
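All four metrics are simple ratios, so they're easy to compute yourself from a raw analytics export. Here's a minimal Python sketch with made-up numbers; the field names (`pages_viewed`, `converted`) are illustrative, not taken from any particular analytics platform:

```python
# Illustrative session records; field names are hypothetical,
# not from any particular analytics platform.
sessions = [
    {"pages_viewed": 1, "converted": False},  # a "bounce"
    {"pages_viewed": 4, "converted": True},
    {"pages_viewed": 2, "converted": False},
    {"pages_viewed": 1, "converted": False},
]

total = len(sessions)

# Bounce rate: share of sessions that viewed exactly one page.
bounce_rate = sum(s["pages_viewed"] == 1 for s in sessions) / total

# Conversion rate: share of sessions that completed a defined goal.
conversion_rate = sum(s["converted"] for s in sessions) / total

# Click-through rate: clicks on an element divided by its impressions.
clicks, impressions = 30, 1200
ctr = clicks / impressions

print(f"Bounce rate: {bounce_rate:.0%}")          # 50%
print(f"Conversion rate: {conversion_rate:.0%}")  # 25%
print(f"CTR: {ctr:.1%}")                          # 2.5%
```

The point isn't the arithmetic; it's that each ratio only becomes meaningful once you decide what counts as a "bounce", a "conversion", or an "impression" for your site.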

But how do these metrics help? Well, they don’t give you an answer directly. The metrics point to potential issues with website user experience. They guide further research and subsequent updates that lead to website improvement. In the next section, we’ll discuss how these and others can support better website user experiences.

Identifying Areas for Improvement 💡

So, you’ve looked at your website’s user engagement metrics and discovered some good, and some bad. The good news is, there’s value in discovering both! The catch? You just need to find it. Remember, the metrics on their own don’t give you answers; they provide you direction.

The ‘clues’ that user engagement metrics provide are the starting point for further research. Remember, we want to make data-driven decisions. We want to avoid making assumptions and jumping to conclusions about why our website is reporting certain metrics. Fortunately, there are a bunch of different ways to do this.

User research data can be gathered using both qualitative and quantitative research techniques. Insights into user behavior and needs can reveal why your website might be performing in certain ways.

Qualitative research techniques

  • Usability test – Test a product with people by observing them as they attempt to complete various tasks.
  • User interview – Sit down with a user to learn more about their background, motivations and pain points.
  • Contextual inquiry – Learn more about your users in their own environment by asking them questions before moving onto an observation activity.
  • Focus group – Gather 6 to 10 people for a forum-like session to get feedback on a product.

Quantitative research techniques

  • Card sorts – Find out how people categorize and sort information on your website.
  • First-click tests – See where people click first when tasked with completing an action.
  • A/B tests – Compare 2 versions of a design in order to work out which is more effective.
  • Clickstream analysis – Analyze aggregate data about website visits.
  • Tree testing – Test your site structure using text-only categorization and labels.
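For A/B tests in particular, it's worth checking that a difference between the two versions is bigger than random noise before declaring a winner. One common approach (a two-proportion z-test; this is a general statistical sketch, not a feature of any tool mentioned here) looks like this in Python, with entirely hypothetical numbers:

```python
import math

# Hypothetical A/B test results: visitors and conversions per variant.
visitors_a, conversions_a = 1000, 110
visitors_b, conversions_b = 1000, 150

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled two-proportion z-test: is the difference likely real, or noise?
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

# |z| > 1.96 corresponds to p < 0.05 (two-tailed).
significant = abs(z) > 1.96
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, significant: {significant}")
```

With these numbers the test comes out significant, but with smaller samples the same 4-point gap could easily be noise, which is why sample size matters when planning quantitative tests.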

The type of research depends on what question you want to answer. Being specific about your question will help you identify which research technique(s) to deploy, and will ultimately determine the quality of your answer. If you’re serious about website improvement, identify problem areas with user engagement metrics, and investigate how to fix them with user research.

Optimizing Content and Design

If you have conducted user research and found weak areas on your website, there are many things to consider. Three good places to start are navigation, content, and website layout. Combined, these have a huge impact on user experience and can be leveraged to address disappointing engagement metrics.

Navigation


Navigation is a crucial aspect of creating a good user experience since it fundamentally connects pages and content which allows users to find what they need. Navigation should be simple and easy to follow, with important information/actions at the top of menus. Observing the results of card sorting, tree testing, and user testing can be particularly useful in website optimization efforts. You may find that search bars, breadcrumb trails, and internal links can also help overcome navigation issues.

Content


Are users seeing compelling or relevant content when they arrive on your site? Is your content organized in a way that encourages further exploration? Card sorting and content audits are useful in answering these questions and can help provide you with the insights required to optimize your content. You should identify what content might be redundant, out of date, or repetitive, as well as any gaps that may need filling.

Layout


A well-designed layout can improve the overall usability of a website, making it easier for users to find what they're looking for, understand the content, and engage with it. Consider how consistent your heading structures are and be sure to use consistent styling throughout the site, such as similar font sizes and colors. Don’t be afraid to use white space; it’s great at breaking up sections and making content more readable.

An additional factor related to layout is mobile optimization. Mobile-first design is necessary for apps, but it should also factor into your website design. How responsive is your website? How easy is it to navigate on mobile? Is your font size appropriate? You might find that poor mobile experience is negatively impacting user engagement metrics.

Measuring Success 🔎

User experience design is an iterative, ongoing process, so it’s important to keep a record of your website’s user experience metrics at various points of development. Fortunately, website analytics platforms will provide you with historic user data and key metrics; but be sure to keep a separate record of what improvements you make along the way. This will help you pinpoint what changes impacted different metrics.

Define your goals and create a website optimization checklist that monitors key metrics on your site. For example, whenever you make an update, ensure bounce rates don’t exceed a certain number during the days following; check that your conversion rates are performing as they should be; check your time on site hasn’t dropped. Be sure to compare metrics between desktop and mobile too.
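A checklist like this can even be scripted so it runs after every update. A minimal Python sketch, where the metric values and thresholds are entirely made up for illustration:

```python
# Hypothetical post-update metrics and the thresholds you set for them.
metrics = {
    "bounce_rate": 0.48,       # fraction of single-page sessions
    "conversion_rate": 0.031,  # fraction of sessions that converted
    "avg_time_on_page": 95,    # seconds
}

checks = {
    "bounce_rate": lambda v: v <= 0.55,       # must not exceed 55%
    "conversion_rate": lambda v: v >= 0.025,  # must stay above 2.5%
    "avg_time_on_page": lambda v: v >= 60,    # must not drop below a minute
}

# Collect any metric that fails its threshold check.
failures = [name for name, ok in checks.items() if not ok(metrics[name])]

if failures:
    print("Investigate:", ", ".join(failures))
else:
    print("All metrics within expected ranges")
```

The thresholds themselves should come from your own historical baselines, not from generic benchmarks.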

Users’ needs and expectations change over time, so keep an eye on how new content is performing. For example, which new blog posts have attracted the most attention? What pages or topics have had the most page views compared to the previous period? Tracking such changes can help to inform what your users are currently engaged in, and will help guide your user experience improvements.

Conclusion 🤗

User engagement metrics allow you to put clear parameters around user experience. They allow you to measure where your website is performing well, and where your website might need improving. Their main strength is in how accessible they are; you can access key metrics on website analytics platforms in moments. However, user engagement metrics on their own may not reveal how and why certain website improvements should be made. In order to understand what’s going on, you often need to dig a little deeper.

Time on page, bounce rate, click-through rate, and conversion rates are all great starting points to understand your next steps toward website improvement. Use them to define where further research may be needed. Not sure why your average pages per session is two? Try conducting first-click testing: where are users heading that seems to be a dead end? Is your bounce rate too high? Conduct a content audit to find out if your information is still relevant, or look into navigation roadblocks. Whatever the question, keep searching for the answer.

User engagement metrics will keep you on your toes, but that’s a good thing. They empower you to make ongoing website improvements and ensure that users are at the heart of your website design. 


User research and agile squadification at Trade Me

Hi, I’m Martin. I work as a UX researcher at Trade Me, having left Optimal Experience (Optimal Workshop's sister company) last year. For those of you who don’t know, Trade Me is New Zealand’s largest online auction site, which also lists real estate to buy and rent, cars to buy, job listings, travel accommodation and quite a few other things besides. Over three quarters of the population are members, and about three quarters of the internet traffic for New Zealand sites goes to the sites we run.

Leaving a medium-sized consultancy and joining Trade Me has been a big change in many ways, but in others not so much, as I hadn’t expected to find myself operating in a small team of in-house consultants. The approach the team is taking is proving to be pretty effective, so I thought I’d share some of the details of the way we work with the readers of Optimal Workshop’s blog. Let me explain what I mean…

What agile at Trade Me looks like

Over the last year or so, Trade Me has moved all of its development teams over to Agile, following a model pioneered by Spotify. All of the software engineering parts of the business have been ‘squadified’. These people produce the websites and apps, or provide and support the infrastructure that makes everything possible. Across squads, there are common job roles in ‘Chapters’ (like designers or testers) and, because people are not easy to force into boxes (and why should they be?), there are interest groups called ‘Guilds’. The squads are self-organizing, running their own processes and procedures to get to where they need to be. In practice, this means they use as many or as few of the Kanban, Scrum, and Rapid tools as they find useful. Over time, we’ve seen that squads tend to follow similar practices as they learn from each other.

How our UX team fits in

Our UX team of three sits outside the squads, but we work with them and with the product owners across the business. How does this work? It might seem counter-intuitive to have UX outside of the tightly-integrated, highly-focused squads, sometimes working with product owners on stuff that might have little to do with what’s currently being developed in the squads. This comes down to the way Trade Me divides the UX responsibilities within the organization. Within each squad there is a designer. He or she is responsible for how that feature or app looks, and, more importantly, how it acts — interaction design as well as visual design. Then what do we do, if we are the UX team?

We represent the voice of Trade Me’s users

By conducting research with Trade Me’s users we can validate the squads’ day-to-day decisions, and help frame decisions on future plans. We do this by wearing two hats. Wearing the pointy hats of structured, detailed researchers, we look into long-term trends: the detailed behaviours and goals of our different audiences. We’ve conducted lots of one-on-one interviews with hundreds of people, including top sellers, motor parts buyers, and job seekers, as well as running surveys, focus groups and user testing sessions of future-looking prototypes. For example, we recently spent time with a number of buyers and sellers, seeking to understand their motivations and getting under their skin to find out how they perceive Trade Me.

This kind of research enables Trade Me to anticipate and respond to changes in user perception and satisfaction. Swapping hats to an agile beanie (and stretching the metaphor to breaking point), we react to the medium-term, short-term and very short-term needs of the squads: testing their ideas, near-finished work and finished work with users, as well as sometimes simply answering questions and providing opinions based upon our research. Sometimes this means we can be testing something in the afternoon having only heard we were needed that morning. This might sound impossible to accommodate, but the pace of change at Trade Me is such that stuff is getting deployed pretty much every day, much of which affects our users directly. It’s our job to ensure that we support our colleagues to do the very best we can for our users.

How our ‘drop everything’ approach works in practice

[Screenshot of the new Trade Me iPhone app]

We recently conducted five or six rounds (no one can quite remember, we did it so quickly) of testing of our new iPhone application (pictured above) — sometimes testing more than one version at a time. The development team would receive our feedback face-to-face, make changes and we’d be testing the next version of the app the same or the next day. It’s only by doing this that we can ensure that Trade Me members will see positive changes happening daily rather than monthly.

How we prioritize what needs to get done

To help us try to decide what we should be doing at any one time we have some simple rules to prioritise:

  • Core product over other business elements
  • Finish something over start something new
  • Committed work over non-committed work
  • Strategic priorities over non-strategic priorities
  • Responsive support over less time-critical work
  • Where our input is crucial over where our input is a bonus

Applying these rules to any situation makes the decision whether to jump in and help pretty easy. At any one time, each of us in the UX team will have one or more long-term projects, some medium-term projects, and either some short-term projects or the capacity for some short-term projects (usually achieved by putting aside a long-term project for a moment).

We manage our time and projects on Trello, where we can see at a glance what’s happening this week and next, and what we’ve caught sniff of in the wind that might be coming up, or definitely is coming up. On the whole, both we and the squads favour fast-response, bulleted-list, email ‘reports’ for any short-term requests for user testing. We get a report out within four hours of testing (usually well within that). After all, the squads are working in short sprints, and our involvement is often at the sharp end where delays are not welcome. Most people aren’t going to read past the management summary anyway, so why not just write that, unless you have to?

How we share our knowledge with the organization

Even though we mainly keep our reporting brief, we want the knowledge we’ve gained from working with each squad or on each product to be available to everyone. So we maintain a wiki that contains summaries of what we did for each piece of work, why we did it and what we found. Detailed reports, if there are any, are attached. We also send all reports out to staff who’ve subscribed to the UX interest email group.

Finally, we send out a monthly email, which looks across a bunch of research we’ve conducted, both short and long-term, and draws conclusions from which our colleagues can learn. All of these activities contribute to one of our key objectives: making Trade Me an even more user-centred organization than it already is. I’ve been with Trade Me for about six months and we’re constantly refining our UX practices, but so far it seems to be working very well. Right, I’d better go – I’ve just been told I’m user testing something pretty big tomorrow and I need to write a test script!
