November 18, 2022
4 min

Moderated vs unmoderated research: which approach is best?

Understanding why and how your users use your product is invaluable for getting to the nitty-gritty of usability. Delving deep into motivation with probing questions, or skimming the surface looking for issues, can be equally informative.

Put simply, usability testing is testing how usable your product is for your users. If your product isn’t usable, users often won’t complete their task, let alone come back for more. No one wants to lose users before they even get started. Usability testing gets under their skin and into the how, why, and what they want (and equally what they don’t).

As we have grown used to regular video calls and online interactions, usability testing has followed suit. Accessing participants remotely has diversified the participant pool, since it is no longer restricted to those close enough to attend in person. It has also allowed an increase in the number of participants per test, as remote usability testing is more cost-effective.

But if we’re remote, does this mean testing can’t be moderated? No. With modern technology, remote testing can still be facilitated and moderated. So which method is best: moderated or unmoderated?

What is moderated remote research testing?

In traditional usability testing, moderated research is done in person, with the moderator and the participant in the same physical space. This, of course, allows for conversation and observational behavioral monitoring: the moderator can note not only what the participant answers but how, and can even record body language, surroundings, and other influencing factors.

This has also meant that traditionally, the participant pool has been limited to those available (and close enough) to make it into a facility for testing. And being in person takes time (and money).

As technology has advanced and internet speeds and video calling have improved, a world of opportunities for usability testing has opened up. Moderators can now set up tests remotely and ‘dial in’ to observe participants wherever they are, and potentially even run focus groups or other group-format testing across the internet.

Pros of moderated remote research testing:

- In-depth insights: gathered through back-and-forth conversation and observation of the participants.

- Follow-up questions: don’t underestimate the value of being available to ask questions throughout the testing and to follow up in the moment.

- Observational monitoring: noticing and noting the environment and how the participants are behaving can give more insight into how or why they make a decision.

- Quick: remote testing can be quicker to start, find participants for, and complete than in-person testing. You only need to set up a time to connect via the internet, rather than coordinating travel times and the like.

- Location (local and/or international): testing online removes reliance on participants being physically present for the testing. This broadens your participant pool, whether within your country or globally.

Cons of moderated remote research testing:

- Time-consuming: having to be present at each test takes time, as does analyzing the data and insights generated. But remember, this is quality data.

- Limited interactions: with any remote testing, there is only so much you can observe or understand through a computer screen. It can be difficult to grasp all the factors that might be influencing your participants.

What is unmoderated remote research testing?

In its simplest sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of having a facilitator guide participants through the test, participants are left to complete the testing by themselves and in their own time. For the most part, everything else stays the same.

Removing the moderator means there isn’t anyone to respond to queries or issues in the moment. This can delay or influence participants, or even lead them to abandon the test or be less engaged than you may like. Unmoderated research suits a simple, direct type of test, with clear instructions and no room for inference.

Pros of unmoderated remote research testing:

- Speed and turnaround: as there is no need to schedule meetings with each and every participant, unmoderated usability testing is usually much faster to initiate and complete.

- Size of study (participant numbers): unmoderated usability testing allows you to collect feedback from dozens or even hundreds of users at the same time.


- Location (local and/or international): testing online removes reliance on participants being physically present, which broadens your participant pool. And with unmoderated testing, participants can be anywhere, completing the test in their own time.

Cons of unmoderated remote research testing:

- Follow-up questions: as your participants are working on their own and in their own time, you can’t facilitate or ask questions in the moment. You may only be able to ask limited follow-up questions.

- Products need to be simple to use: unmoderated testing does not allow for prototypes or any product or site that needs guidance.

- Low participant support: without a moderator, any issues with the test or the product can’t be picked up immediately and could influence the output of the test.

When should you do moderated vs unmoderated remote usability testing?

Both moderated and unmoderated remote usability testing have their use and place in user research. It really depends on the question you are asking and what you want to know.

Moderated testing allows you to gather in-depth insights, follow up with questions, and engage the participants in the moment. The facilitator can guide participants toward what they want to know, dig deeper, or ask why at certain points. This method doesn’t need as much careful setup, as the participants aren’t on their own. While this is all done online, it still allows connection and conversation. This makes it suited to more investigative research, such as looking at why users might prefer one prototype to another, or tree testing a new website navigation to understand where participants get lost and asking why they made certain choices.

Unmoderated testing, on the other hand, is literally leaving the participants to it. This method needs very careful planning and upfront explanation, as the test needs to run without a moderator. It lends itself to getting a direct answer to a direct question, such as a card sort on a website to understand how your users might sort information, or a first-click test to see how and where users will click on a new website.

Planning your next user test? Here’s how to choose the right method

With advances in (and acceptance of) technology and video calling, our pool of participants can now span the globe, and our ability to understand users’ experiences is growing with it. Remote usability testing is a great option when you want to gather information from users in the real world. Depending on your query, either moderated or unmoderated usability testing will suit your study. As with all user testing, being prepared and planning ahead will allow you to make the most of your test.

Author: Optimal Workshop

Related articles


Meera Pankhania: From funding to delivery - Ensuring alignment from start to finish

It’s a chicken and egg situation when it comes to securing funding for a large transformation program in government. On one hand, you need to submit a business case and, as part of that, you need to make early decisions about how you might approach and deliver the program of work. On the other hand, you need to know enough about the problem you are going to solve to ensure you have sufficient funding to understand the problem better, hire the right people, design the right service, and build it the right way. 

Now imagine securing hundreds of millions of dollars to design and build a service, but not feeling confident about what the user needs are. What if you had the opportunity to change this common predicament and influence your leadership team to carry out alignment activities, all while successfully delivering within the committed time frames?

Meera Pankhania, Design Director and Co-founder of Propel Design, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, on traceability and her learnings from delivering a $300 million Government program.

In her talk, Meera helps us understand how to use service traceability techniques in our work and apply them to any environment - ensuring we design and build the best service possible, no matter the funding model.

Background on Meera Pankhania

As a design leader, Meera is all about working on complex, purpose-driven challenges. She helps organizations take a human-centric approach to service transformation and helps deliver impactful, pragmatic outcomes while building capability and leading teams through growth and change.

Meera co-founded Propel Design, a strategic research, design, and delivery consultancy in late 2020. She has 15 years of experience in service design, inclusive design, and product management across the private, non-profit, and public sectors in both the UK and Australia. 

Meera is particularly interested in policy and social design. After a stint in the Australian Public Service, Meera was appointed as a senior policy adviser to the NSW Minister for Customer Service, Hon. Victor Dominello MP. In this role, she played a part in NSW’s response to the COVID pandemic, flexing her design leadership skills in a new, challenging, and important context.

Contact Details:

Email address: meera@propeldesign.com.au

Find Meera on LinkedIn  

From funding to delivery: ensuring alignment from start to finish 🏁🎉👏

Meera’s talk explores a fascinating case study within the Department of Employment Services (Australia) where a substantial funding investment of around $300 million set the stage for a transformative journey. This funding supported the delivery of a revamped Employment Services Model, which had the goal of delivering better services to job seekers and employers, and a better system for providers within this system. The project had a focus on aligning teams prior to delivery, which resulted in a huge amount of groundwork for Meera.

Her journey involved engaging various stakeholders within the department, including executives, to understand the program as a whole and what exactly needed to be delivered. “Traceability” became the watchword for this project, which is laid out in three phases.

  • Phase 1: Aligning key deliverables
  • Phase 2: Ensuring delivery readiness
  • Phase 3: Building sustainable work practices

Phase 1: Aligning key deliverables 🧮

Research and discovery (pre-delivery)

Meera’s work initially meant conducting extensive research and engagement with executives, product managers, researchers, designers, and policymakers. Through this process, a common theme was identified – the urgent (and perhaps misguided) need to start delivering! Often, organizations focus on obtaining funding without adequately understanding the complexities involved in delivering the right services to the right users, leading to half-baked delivery.

After this initial research, some general themes started to emerge:

  1. Assumptions were made that still needed validation
  2. Teams weren’t entirely sure that they understood the user’s needs
  3. A lack of holistic understanding of how much research and design was needed

The conclusion of this phase was that “what” needed to be delivered wasn’t clearly defined. The same was true for “how” it would be delivered.

Traceability

Meera’s journey heavily revolved around the concept of “traceability” and sought to ensure that every step taken within the department was aligned with the ultimate goal of improving employment services. Traceability meant having a clear origin and development path for every decision and action taken. This is particularly important when spending taxpayer dollars!

So, over the course of eight weeks (which turned out to be much longer), the team went through a process of combing through documents in an effort to bring everything together to make sense of the program as a whole. This involved some planning, user journey mapping, and testing and refinement. 

Documenting Key Artifacts

Numerous artifacts and documents played a crucial role in shaping decisions. Meera and her team gathered and organized these artifacts, including policy requirements, legislation, business cases, product and program roadmaps, service maps, and blueprints. The team also included prior research insights and vision documents which helped to shape a holistic view of the required output.

After an effort of combing through the program documents and laying everything out, it became clear that there were a lot of gaps and a LOT to do.

Prioritizing tasks

As a result of these gaps, a process of task prioritization was necessary. Tasks were categorized based on a series of factors and then mapped out based on things like user touch points, pain points, features, business policy, and technical capabilities.

This then enabled Meera and the team to create Product Summary Tiles. These tiles meant that each product team had its own summary ahead of a series of planning sessions, giving them as much context (provided by the traceability exercise) as possible to help with planning. Essentially, these tiles provided teams with a comprehensive overview of their projects, i.e. what their users need, what certain policies require them to deliver, etc.

Phase 2: Ensuring delivery readiness 🙌🏻

Meera wanted every team to feel confident that they weren’t doing too much or too little in order to design and build the right service, the right way.

Standard design and research check-ins were well adopted, which was a great start, but Meera and the team also built a Delivery Readiness Tool. It was used to assess a team's readiness to move forward with a project. This tool includes questions related to the development phase, user research, alignment with the business case, consideration of policy requirements, and more. Ultimately, it ensures that teams have considered all necessary factors before progressing further. 

Phase 3: Building sustainable work practices 🍃

As the program progressed, several sustainable work practices emerged which Government executives were keen to retain going forward.

Some of these included:

  • ResearchOps Practice: The team established a research operations practice, streamlining research efforts and ensuring that ongoing research was conducted efficiently and effectively.
  • Consistent Design Artifacts: Templates and consistent design artifacts were created, reducing friction and ensuring that teams going forward started from a common baseline.
  • Design Authority and Ways of Working: A design authority was established to elevate and share best practices across the program.
  • Centralized and Decentralized Team Models: The program showcased the effectiveness of a combination of centralized and decentralized team models. A central design team provided guidance and support, while service design leads within specific service lines ensured alignment and consistency.

Why it matters 🔥

Meera's journey serves as a valuable resource for those working on complex design programs, emphasizing the significance of aligning diverse stakeholders and maintaining traceability. Alignment and traceability are critical to ensuring that programs never lose sight of the problem they’re trying to solve, both from the user and organization’s perspective. They’re also critical to delivering on time and within budget!

Traceability key takeaways 🥡

  • Early Alignment Matters: While early alignment is ideal, it's never too late to embark on a traceability journey. It can uncover gaps, increase confidence in decision-making, and ensure that the right services are delivered.
  • Identify and audit: You never know what artifacts will shape your journey. Identify everything early, and don’t be afraid to get clarity on things you’re not sure about.
  • Conducting traceability is always worthwhile: Even if you don’t find many gaps in your program, you will at least gain a high level of confidence that your delivery is focused on the right things.

Delivery readiness key takeaways 🥡

  • Skills Mix is Vital: Assess and adapt team member roles to match their skills and experiences, ensuring they are positioned optimally.
  • Not Everyone Shares the Same Passion: Recognize that not everyone will share the same level of passion for design and research. Make the relevance of these practices clear to all team members.

Sustainability key takeaways 🥡

  • One Size Doesn't Fit All: Tailor methodologies, templates, and practices to the specific needs of your organization.
  • Collaboration is Key: Foster a sense of community and collective responsibility within teams, encouraging shared ownership of project outcomes.


Different ways to test information architecture

We all know that a robust information architecture (IA) can make or break your product, and getting it right relies on robust user research, especially when it comes to creating human-centered, intuitive products that deliver outstanding user experiences.

But what are the best methods to test your information architecture, and to make sure your focus is on building an information architecture that is truly based on what your users want and need?

What is user research? 🗣️🧑🏻💻

With all the will in the world, your product (or website or mobile app) may work perfectly and be as intuitive as possible. But if it is built only on information from your internal organizational perspective, it may not measure up in the eyes of your users. Often, organizations make major design decisions without fully considering their users. User experience (UX) research backs up decisions with data, helping to make sure that design decisions are strategic decisions.

Testing your information architecture can also help establish the structure for a better product from the ground up, and ultimately improve the performance of your product. User experience research focuses your design on understanding your users’ expectations, behaviors, needs, and motivations. It is an essential part of creating, building, and maintaining great products.

Taking the time to understand your users through research can be incredibly rewarding with the insights and data-backed information that can alter your product for the better. But what are the key user research methods for your information architecture? Let’s take a look.

Research methods for information architecture ⚒️

There is more than one way to test your IA. Testing with one method is good, but with more than one is even better. And, of course, the more often you test, especially when there are major additions or changes, the more you can tweak and update your IA to improve and delight your users’ experience.

Card Sorting 🃏

Card sorting is a user research method that allows you to discover how users understand and categorize information. It’s particularly useful when you are starting the planning process of your information architecture or at any stage you notice issues or are making changes. Putting the power into your users’ hands and asking how they would intuitively sort the information. In a card sort, participants sort cards containing different items into labeled groups. You can use the results of a card sort to figure out how to group and label the information in a way that makes the most sense to your audience. 

There are a number of techniques and methods that can be applied to a card sort. Take a look here if you’d like to know more.

Card sorting has many applications. It’s as useful for figuring out how content should be grouped on a website or in an app as it is for figuring out how to arrange the items in a retail store. You can also run a card sort in person, using physical cards, or remotely with online tools such as OptimalSort.
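For teams who export raw card sort results and analyze them in-house, one common analysis technique is a co-occurrence count: for each pair of cards, count how many participants placed them in the same group. Pairs with high counts are strong candidates to live together in your IA. The sketch below is a minimal illustration with hypothetical data, not the export format of any particular tool:

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a list of card names.
# Hypothetical example data, not from a real study.
sorts = [
    [["Shipping", "Returns"], ["Login", "Profile"]],
    [["Shipping", "Returns", "Profile"], ["Login"]],
    [["Shipping", "Login"], ["Returns", "Profile"]],
]

def co_occurrence(sorts):
    """Count, for each pair of cards, how many participants grouped them together."""
    counts = Counter()
    for participant in sorts:
        for group in participant:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

counts = co_occurrence(sorts)
# "Returns" and "Shipping" were grouped together by 2 of 3 participants.
print(counts[("Returns", "Shipping")])
```

Scaled up to dozens of participants, the same counts form the similarity matrix that card sorting tools typically visualize for you.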

Tree Testing 🌲

Taking a look at your information architecture from the other side can also be valuable. Tree testing is a usability method for evaluating the findability of topics on a product. Testing is done on a simplified text version of your site structure without the influence of navigation aids and visual design.

Tree testing tells you how easily people can find information on your product and exactly where people get lost. Your users rely on your information architecture – how you label and organize your content – to get things done.

Tree testing can answer questions like:

  • Do my labels make sense to people?
  • Is my content grouped logically to people?
  • Can people find the information they want easily and quickly? If not, what’s stopping them?

Treejack is our tree testing tool and is designed to make it easy to test your information architecture. Running a tree test isn’t actually that difficult, especially if you’re using the right tool. You’ll learn how to set useful objectives, build your tree, write your tasks, recruit participants, and measure results.
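Tree test results are usually summarized with metrics like success (did the participant end at a correct node?) and directness (did they get there without backtracking?). As a rough sketch of how such metrics can be computed, here is a minimal example with hypothetical path data; real tools export richer logs and this is not any tool’s actual format:

```python
# Each result: the sequence of node labels one participant visited for a task.
# Hypothetical data; real tree testing tools export richer logs.
paths = [
    ["Home", "Support", "Returns"],                  # direct success
    ["Home", "Shop", "Home", "Support", "Returns"],  # backtracked, then succeeded
    ["Home", "Shop", "Orders"],                      # failure
]

def score(paths, correct_leaf, ideal_length):
    """Share of participants who ended at the right node, and who did so directly."""
    successes = sum(1 for p in paths if p[-1] == correct_leaf)
    direct = sum(1 for p in paths
                 if p[-1] == correct_leaf and len(p) == ideal_length)
    return successes / len(paths), direct / len(paths)

success_rate, directness = score(paths, "Returns", ideal_length=3)
print(f"success: {success_rate:.0%}, directness: {directness:.0%}")
# prints "success: 67%, directness: 33%"
```

A big gap between success and directness (as above) is itself a finding: people get there eventually, but the labels along the way are sending them down the wrong branch first.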

Combining information architecture research methods 🏗

If you want a fully rounded view of your information architecture, it can be useful to combine your research methods.

Tree testing and card sorting, along with usability testing, can give you insights into your users and audience. How do they think? How do they find their way through your product? And how do they want to see things labeled, organized, and sorted? 

If you want to get fully into the comparison of tree testing and card sorting, take a look at our article here, which compares the options and explains which is best and when. 


Why user research is essential for product development

Many organizations are aware that staying relevant is essential for their success. This can mean a lot of things to different organizations. What it often means is coming up with plenty of new, innovative ideas and products to keep pace with the demands and needs of the marketplace. It also means keeping up with the expectations and needs of your users, which often means shorter and shorter product development life cycles. While maintaining this pace can be daunting, it can also be seen as a strength, tightening up your processes and cutting out unnecessary steps.

A vital part of developing new (or tweaking existing) products is considering the end user first. There really is no point in creating anything new if it isn’t meeting a need or filling a gap in the market. How can you make sure you are hitting the right mark? Ask your users. We look into some of the key user research methods available to help you in your product development process.

If you want to know more about how to fit research into your product development process, take a read here.

What is user research? 👨🏻💻

User experience (UX) research, or user research as it’s commonly referred to, is an important part of the product development process. Primarily, UX research involves using different research methods to gather qualitative and quantitative data and insights about how your users interact with your product. It is an essential part of developing, building, and launching a product that truly meets the needs, desires, and requirements of your users. 

At its simplest, user research is talking to your users and understanding what they want and why. And using this to deliver what they need.

How does user research fit into the product development process? 🧩🧩

User research is an essential part of the product development process. By asking questions of your users about how your product works and what place it fills in the market, you can create a product that delivers what the market needs to those who need it. 

Without user research, you could be firing arrows in the dark, or at best working from a purely internal organizational view, assuming that what you believe users need is what they actually want. With user research, you can collect qualitative and quantitative data that clearly tells you what users would like to see, where, and how they would use it.

Investing in user research right at the start of the product development process can save the team and the organization a heavy investment in time and money. With detailed data responses, your brand-new product can leapfrog many development hurdles, delivering a final product that users love and want to keep using. Firing arrows to hit a bullseye.

What user research methods should we use? 🥺

Qualitative Research Methods

Qualitative research is about exploration. It focuses on discovering things we cannot measure with numbers and typically involves getting to know users directly through interviews or observation.

Usability Testing – Observational

One of the best ways to learn about your users and how they interact with your new product is to observe them in their own environment. Watch how they accomplish tasks, the order they do things, what frustrates them, and what makes the task easier and/or more enjoyable for them. This data can be collated to inform the usability of your product, improving intuitive design and revealing what resonates with your users.

Competitive Analysis

Reviewing products already on the market can be a great start to the product development process. Why are your competitors’ products successful? And how well do they work for users? Learn from their successes, and even better, build on where they may not be performing as well and find where your product fills the gap in the market.

Quantitative Research Methods

Quantitative research is about measurement. It focuses on gathering data and then turning this data into usable statistics.

Surveys

Surveys are a popular user research method for gathering information from a wide range of people. In most cases, a survey will feature a set of questions designed to assess someone’s thoughts on a particular aspect of your new product. They’re useful for getting feedback or understanding attitudes, and you can use the learnings from your survey of a subset of users to draw conclusions about a larger population of users.
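When drawing conclusions about a larger population from a survey sample, a simple starting point is reporting a proportion with a confidence interval rather than a bare percentage. The sketch below uses the standard normal-approximation formula; the survey numbers are hypothetical and not tied to any particular tool:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical survey: 120 of 200 respondents preferred the new navigation.
p, low, high = proportion_ci(120, 200)
print(f"{p:.0%} (95% CI {low:.0%} to {high:.0%})")
# prints "60% (95% CI 53% to 67%)"
```

The width of the interval shrinks as the sample grows, which is a useful sanity check when deciding how many survey responses you actually need before acting on a result.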

Wrap Up 🌯

Gathering information on your users during the product development process, before you invest time and money, can be hugely beneficial to the entire process. Collating robust data and insights guides new product development, responds directly to user needs, and fills that all-important niche. User experience research shouldn’t stop at product development; carry it through each and every step of your product life cycle. If you want to find out more about UX research throughout the life cycle of your product, take a read of our article UX research for each product phase.
