November 18, 2022
4 min

Moderated vs unmoderated research: which approach is best?

Knowing and understanding why and how your users use your product is invaluable for getting to the nitty-gritty of usability. Delving deep with probing questions into motivation, or skimming the surface looking for issues, can be equally informative.

Put simply, usability testing is testing how usable your product is for your users. If your product isn’t usable, users often won’t complete their task, let alone come back for more. No one wants to lose users before they even get started. Usability testing gets under their skin and into the how, the why, and what they want (and, equally, what they don’t).

As video calling and online interaction have become routine, usability testing has followed suit. Being able to reach participants remotely has diversified the participant pool, because testing is no longer restricted to people who live close enough to attend in person. It has also increased the number of participants per test, because remote usability testing is more cost-effective to run.

But if we’re remote, does that mean testing can’t be moderated? No: with modern technology, remote testing can still be facilitated and moderated. So which is the better method: moderated or unmoderated?

What is moderated remote research testing?

In traditional usability testing, moderated research is done in person, with the moderator and the participant in the same physical space. This, of course, allows for conversation and observational monitoring: the moderator can note not only what the participant answers but how, and can also record body language, surroundings, and other influencing factors.

This has also meant that, traditionally, the participant pool has been limited to those who are available (and close enough) to make it into a facility for testing. And being in person has meant these tests take time (and money) to run.

As technology has moved on and internet connections and video calling have improved, a world of opportunities has opened up: usability testing can now be done remotely. Moderators can set up tests remotely and ‘dial in’ to observe participants wherever they are, and can even run focus groups or other group-format testing over the internet.

Pros of moderated remote research testing:

- In-depth insights: gathered through back-and-forth conversation with, and observation of, the participants.

- Follow-up questions: don’t underestimate the value of being able to ask questions throughout the test and follow up in the moment.

- Observational monitoring: noticing and noting the environment and how participants behave can give more insight into how or why they make a decision.

- Quick: remote testing can be quicker to start, recruit for, and complete than in-person testing, because you only need to set up a time to connect online rather than coordinate travel.

- Location (local and/or international): testing online removes the reliance on participants being physically present, broadening your pool to participants within your country or around the globe.

Cons of moderated remote research testing:

- Time-consuming: having to be present at each test takes time, as does analyzing the data and insights generated. But remember, this is quality data.

- Limited interactions: with any remote testing there is only so much you can observe or understand through a computer screen, so it can be difficult to grasp all the factors that might be influencing your participants.

What is unmoderated remote research testing?

In its simplest sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of having a facilitator guide participants through the test, participants are left to complete the testing by themselves and in their own time. For the most part, everything else stays the same.

Removing the moderator means there isn’t anyone to respond to queries or issues in the moment. This can delay or influence participants, or even cause them to drop out or disengage more than you would like. Unmoderated research testing suits a simple, direct type of test, with clear instructions and no room for interpretation.

Pros of unmoderated remote research testing:

- Speed and turnaround: as there is no need to schedule a session with each and every participant, unmoderated usability testing is usually much faster to initiate and complete.

- Size of study (participant numbers): unmoderated usability testing allows you to collect feedback from dozens or even hundreds of users at the same time.

- Location (local and/or international): testing online removes the reliance on participants being physically present, which broadens your participant pool. With unmoderated testing, participants can be anywhere, completing the test in their own time.

Cons of unmoderated remote research testing:

- Follow-up questions: because your participants are working on their own and in their own time, you can’t facilitate or ask questions in the moment. At best, you may be able to ask limited follow-up questions afterwards.

- Products need to be simple to use: unmoderated testing doesn’t suit prototypes, or any product or site that needs guidance.

- Low participant support: without a moderator, any issues with the test or the product can’t be picked up immediately and could influence the output of the test.

When should you do moderated vs unmoderated remote usability testing?

Both moderated and unmoderated remote usability testing have their use and place in user research. It really depends on the question you are asking and what you want to know.

Moderated testing allows you to gather in-depth insights, follow up with questions, and engage the participants in the moment. The facilitator can guide participants toward what they want to know, dig deeper, or ask why at certain points. This method doesn’t need as much careful setup, because the participants aren’t on their own, and while it is all done online it still allows connection and conversation. It suits more investigative research: looking at why users might prefer one prototype to another, or tree testing a new website navigation to understand where participants get lost and asking why they made certain choices.

Unmoderated testing, on the other hand, leaves the participants to it. This method needs very careful planning and upfront explanation, because the test has to be set up to run without a moderator. It lends itself to direct answers to direct questions, such as a card sort on a website to understand how your users sort information, or a first-click test to see how and where users click on a new website.

Planning your next user test? Here’s how to choose the right method

With advances in (and acceptance of) technology and video calling, we can expand our pool of participants across the globe, and with it our understanding of users’ experiences. Remote usability testing is a great option when you want to gather information from users in the real world. Depending on your question, either moderated or unmoderated usability testing will suit your study. As with all user testing, being prepared and planning ahead will let you make the most of your test.

Author: Optimal Workshop

Related articles


Usability Testing Guide: What It Is, How to Run It, and When to Use Each Method

Knowing and understanding why and how your users use your product can be invaluable for getting to the nitty-gritty of usability: where they get stuck and where they fly through. Delving deep with probing questions into motivation, or skimming the surface looking for issues, can be equally informative.

Usability testing can be done in several ways, and each has its benefits. Put simply, usability testing is testing how usable your product is for your users. If your product isn’t usable, users won’t stick around or complete their task, let alone come back for more.

What is usability testing?

Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.

These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.

We go into the three main ways of categorizing usability testing:

  1. Moderated or unmoderated
  2. Remote or in person
  3. Explorative, assessment or comparative

1. Moderated or unmoderated usability testing

Moderated usability testing


Moderated usability testing is done in person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, it can be expensive to organize and run.

Top tip: Use moderated testing to investigate the reasoning behind user behavior.

Unmoderated usability testing


Unmoderated usability testing is done without direct supervision; participants are likely in their own homes and/or using their own devices to browse the website being tested, often at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions can be difficult.

Top tip: Use unmoderated testing to test a very specific question or observe and measure behavior patterns.

2. Remote or in-person usability testing

Remote usability testing


Remote usability testing is done over the internet or by phone, allowing participants the time and space to work in their own environment and at their own pace. This, however, doesn’t give the researcher much in the way of contextual data, because you’re unable to ask questions about intention or probe deeper when the participant makes a particular decision. Remote testing doesn’t go as deep into a participant’s reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.

Top tip: Use remote testing when a large group of participants are needed and the questions asked can be direct and unambiguous.

In-person usability testing


In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.

Top tip: In-person testing gives researchers more time and insight into motivation for decisions.

3. Explorative, assessment or comparative testing

These three usability testing methods generate different types of information:

Explorative testing


Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.

Assessment research


Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.

Comparative research


Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.

Top tip: Choose between these based on what kind of research is being done and how much qualitative or quantitative data you want.

Which method is right for you?

Whether the testing is done in person or remotely, moderated or unmoderated, will depend on your purpose, what you want out of the testing, and to some extent your budget.

Depending on what you are testing, each of the usability testing methods explored here can offer an answer. If you are at the development stage of a product, it can be useful to conduct a usability test on the entire product, checking the intuitive usability of your website to ensure users can make the best decisions quickly. Adding, changing or upgrading a product can also be the moment to check a specific question around usability. Planning and understanding your objectives are key to selecting the right usability testing option for your project.

Let's take a look at a couple of examples of usability testing.

1. Lab-based, in-person moderated testing - mid-life website

Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a bricks-and-mortar store might. You’ve noticed a drop in sales in certain areas. How do you find out what is going wrong or where users are getting lost? With an in-person, moderated usability test in a lab (or other controlled environment), you can set users tasks and watch (and record) what they do.

The researcher can be standing or sitting next to the participant throughout, recording contextual information such as how they interact with the mouse, the laptop or even the seat. Watching for cues about the participant’s comfort and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants but couldn’t find the ‘yoga’ section, which was listed under gym rather than under clothing.

This means you can look at how your stock is organized, or even investigate undertaking a card sort. It provides robust, well-rounded feedback on users’ behaviors, expectations and experiences, producing data that can be turned directly into actionable directives when redeveloping the website.

2. Remote, unmoderated assessment testing - app product development

You are looking at launching an app that parents can access for information and updates from the school. It’s still at the development stage, and at this point you want to know how easy the app is to use. With some very specific tasks set, the app can be sent to participants and they can be left to complete it (or not) in their own time, providing feedback and comments around its usability.

The next step may be to use first-click testing to see how and where the interface is clicked and where participants may be spending time or becoming lost. While the feedback and data gathered from this testing can be light, it will answer the questions asked very directly and will provide data to back up (or challenge) the assumptions that were made.

3. Moderated, in-person explorative testing - new product development

You’re right at the start of the development process. The idea is new and fresh and the basics are being considered. What better way to get an understanding of what your users truly want than an explorative study?

Open-ended questions asked of participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market, but none as exciting as what your team has dreamt up. By putting it (and possibly its competitors) in front of participants, you get direct feedback on what they like, love and loathe.

They can also help brainstorm ideas, better ways to make the app work, or improvements to the interface. All of this happens before money is sunk into development.

Usability testing summary: When to use each method (and why)

Key objectives will dictate which usability testing method will deliver the answers to your questions.

Whether it’s in-person, remote, moderated or comparative, with a bit of planning you can gather data about your users’ very real experience of your product and identify issues, successes and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. It would seem that they’re almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple pen-and-paper sketch simulating an interface through to a more polished experience with additional visual elements created in design or publishing software.

Image: Different ways of designing paper prototypes, using OptimalSort as an example

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any distinct judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast: pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it: paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity: from both the product teams participating in their design and from the users. They require users to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure: paper prototypes and user-centered design go hand in hand. Introducing real people into your design as early as possible can help verify whether you are on the right track, and generate feedback that may give you a good idea of whether your idea is likely to succeed or not.

Disadvantages 😬

  • They’re not as polished as interactive prototypes: if executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if our users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited: digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
  • They require facilitation: with an interactive prototype you can assign your users tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator in communicating next steps and ensuring participants understand the task at hand.
  • Their results have to be interpreted carefully: paper prototypes can’t emulate the final experience entirely, so it is important to interpret findings while keeping their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives — first of all to benchmark the current experience on laptops and tablets and identify ways in which we can improve the current interface. The second objective is to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices.

Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 per cent of our users utilize the app. We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set.

The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with using paper prototypes was determined by the specific setup of our testing sessions, or simply by their pure usefulness as a research technique is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of what exactly a card sort required. Therefore there is no guarantee that we would have achieved the same level of success in testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.



Efficient Research: Maximizing the ROI of Understanding Your Customers

Introduction

User research is invaluable, but in fast-paced environments, researchers often struggle with tight deadlines, limited resources, and the need to prove their impact. In our recent UX Insider webinar, Weidan Li, Senior UX Researcher at Seek, shared insights on Efficient Research—an approach that optimizes Speed, Quality, and Impact to maximize the return on investment (ROI) of understanding customers.

At the heart of this approach is the Efficient Research Framework, which balances these three critical factors:

  • Speed – Conducting research quickly without sacrificing key insights.
  • Quality – Ensuring rigor and reliability in findings.
  • Impact – Making sure research leads to meaningful business and product changes.

Within this framework, Weidan outlined nine tactics that help UX researchers work more effectively. Let’s dive in.

1. Time Allocation: Invest in What Matters Most

Not all research requires the same level of depth. Efficient researchers prioritize their time by categorizing projects based on urgency and impact:

  • High-stakes decisions (e.g., launching a new product) require deep research.
  • Routine optimizations (e.g., tweaking UI elements) can rely on quick testing methods.
  • Low-impact changes may not need research at all.

By allocating time wisely, researchers can avoid spending weeks on minor issues while ensuring critical decisions are well-informed.

2. Assistance of AI: Let Technology Handle the Heavy Lifting

AI is transforming UX research, enabling faster and more scalable insights. Weidan suggests using AI to:

  • Automate data analysis – AI can quickly analyze survey responses, transcripts, and usability test results.
  • Generate research summaries – Tools like ChatGPT can help synthesize findings into digestible insights.
  • Speed up recruitment – AI-powered platforms can help find and screen participants efficiently.

While AI can’t replace human judgment, it can free up researchers to focus on higher-value tasks like interpreting results and influencing strategy.
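
For teams comfortable with a little scripting, the transcript-analysis and summary steps above can be automated in just a few lines. The sketch below is purely illustrative (it is not from the webinar): it assumes the official OpenAI Python client and an OPENAI_API_KEY environment variable, and the model name, prompt, and transcript file name are placeholders you would swap for your own.

```python
# Illustrative sketch: summarize a usability-test transcript into key themes.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_transcript(transcript: str) -> str:
    """Return short bullet points of usability issues found in one session."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model your team has access to
        messages=[
            {
                "role": "system",
                "content": "You are a UX research assistant. Summarize the key "
                           "usability issues and user goals as short bullet points.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical transcript file exported from a moderated session.
    with open("session_01_transcript.txt") as f:
        print(summarize_transcript(f.read()))
```

A researcher still needs to check the output against the raw transcript; the point is to speed up the first pass, not to replace the analysis.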

3. Collaboration: Make Research a Team Sport

Research has a greater impact when it’s embedded into the product development process. Weidan emphasizes:

  • Co-creating research plans with designers, PMs, and engineers to align on priorities.
  • Involving stakeholders in synthesis sessions so insights don’t sit in a report.
  • Encouraging non-researchers to run lightweight studies, such as A/B tests or quick usability checks.

When research is shared and collaborative, it leads to faster adoption of insights and stronger decision-making.

4. Prioritization: Focus on the Right Questions

With limited resources, researchers must choose their battles wisely. Weidan recommends using a prioritization framework to assess:

  • Business impact – Will this research influence a high-stakes decision?
  • User impact – Does it address a major pain point?
  • Feasibility – Can we conduct this research quickly and effectively?

By filtering out low-priority projects, researchers can avoid research for research’s sake and focus on what truly drives change.
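
If a gut-feel ranking isn’t enough, the three criteria above can be rolled into a simple weighted score. The sketch below is purely illustrative (it is not part of Weidan’s framework); the 1-5 scale, the weights, and the example requests are assumptions you would tune with your own team.

```python
# Illustrative sketch: triage research requests with a weighted priority score.
from dataclasses import dataclass

# Assumed weights: impact counts for more than feasibility. Adjust to taste.
WEIGHTS = {"business_impact": 0.4, "user_impact": 0.4, "feasibility": 0.2}


@dataclass
class ResearchRequest:
    name: str
    business_impact: int  # 1 (low) to 5 (high-stakes decision)
    user_impact: int      # 1 (minor) to 5 (major pain point)
    feasibility: int      # 1 (slow/expensive) to 5 (quick/easy)

    def priority(self) -> float:
        return (WEIGHTS["business_impact"] * self.business_impact
                + WEIGHTS["user_impact"] * self.user_impact
                + WEIGHTS["feasibility"] * self.feasibility)


# Hypothetical backlog items for demonstration only.
requests = [
    ResearchRequest("New onboarding flow", business_impact=5, user_impact=4, feasibility=3),
    ResearchRequest("Footer link wording", business_impact=1, user_impact=2, feasibility=5),
]

# Highest score first: low-priority items sink to the bottom of the backlog.
for r in sorted(requests, key=lambda r: r.priority(), reverse=True):
    print(f"{r.name}: {r.priority():.1f}")
```

A spreadsheet does the same job; the value is in agreeing the criteria and weights with stakeholders before the requests start piling up.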

5. Depth of Understanding: Go Beyond Surface-Level Insights

Speed is important, but efficient research isn’t about cutting corners. Weidan stresses that even quick studies should provide a deep understanding of users by:

  • Asking why, not just what – Observing behavior is useful, but uncovering motivations is key.
  • Using triangulation – Combining methods (e.g., usability tests + surveys) to validate findings.
  • Revisiting past research – Leveraging existing insights instead of starting from scratch.

Balancing speed with depth ensures research is not just fast, but meaningful.

6. Anticipation: Stay Ahead of Research Needs

Proactive researchers don’t wait for stakeholders to request studies—they anticipate needs and set up research ahead of time. This means:

  • Building a research roadmap that aligns with upcoming product decisions.
  • Running continuous discovery research so teams have a backlog of insights to pull from.
  • Creating self-serve research repositories where teams can find relevant past studies.

By anticipating research needs, UX teams can reduce last-minute requests and deliver insights exactly when they’re needed.

7. Justification of Methodology: Explain Why Your Approach Works

Stakeholders may question research methods, especially when they seem time-consuming or expensive. Weidan highlights the importance of educating teams on why specific methods are used:

  • Clearly explain why qualitative research is needed when stakeholders push for just numbers.
  • Show real-world examples of how past research has led to business success.
  • Provide a trade-off analysis (e.g., “This method is faster but provides less depth”) to help teams make informed choices.

A well-justified approach ensures research is respected and acted upon.

8. Individual Engagement: Tailor Research Communication to Your Audience

Not all stakeholders consume research the same way. Weidan recommends adapting insights to fit different audiences:

  • Executives – Focus on high-level impact and key takeaways.
  • Product teams – Provide actionable recommendations tied to specific features.
  • Designers & Engineers – Share usability findings with video clips or screenshots.

By delivering insights in the right format, researchers increase the likelihood of stakeholder buy-in and action.

9. Business Actions: Ensure Research Leads to Real Change

The ultimate goal of research is not just understanding users—but driving business decisions. To ensure research leads to action:

  • Follow up on implementation – Track whether teams apply the insights.
  • Tie findings to key metrics – Show how research affects conversion rates, retention, or engagement.
  • Advocate for iterative research – Encourage teams to re-test and refine based on new data.

Research is most valuable when it translates into real business outcomes.

Final Thoughts: Research That Moves the Needle

Efficient research is not just about doing more, faster—it’s about balancing speed, quality, and impact to maximize its influence. Weidan’s nine tactics help UX researchers work smarter by:


✔️  Prioritizing high-impact work
✔️  Leveraging AI and collaboration
✔️  Communicating research in a way that drives action

By adopting these strategies, UX teams can ensure their research is not just insightful, but transformational.

Watch the full webinar here
