November 18, 2022
4 min

Moderated vs unmoderated research: which approach is best?

Knowing why and how your users use your product is invaluable for getting to the nitty gritty of usability. Delving deep into motivation with probing questions, or skimming the surface looking for issues, can be equally informative.

Put super simply, usability testing is testing how usable your product is for your users. If your product isn’t usable, users often won’t complete their task, let alone come back for more. No one wants to lose users before they even get started. Usability testing gets under their skin and into the how, why, and what they want (and, equally, what they don’t).

As we have grown used to regular video calling and using the internet for interactions, usability testing has followed suit. Being able to access participants remotely has diversified the participant pool, as we are no longer restricted to those close enough to attend in person. It has also allowed an increase in the number of participants per test, as remote usability testing is more cost-effective to perform.

But if we’re remote, does that mean testing can’t be moderated? No: with modern technology, remote testing can still be facilitated and moderated. So which method is best, moderated or unmoderated?

What is moderated remote research testing? 🙋🏻

In traditional usability testing, moderated research is done in person, with the moderator and the participant in the same physical space. This, of course, allows for conversation and observational behavioral monitoring, meaning the moderator can note not only what the participant answers but how, and even make note of body language, surroundings, and other influencing factors.

It has also meant that, traditionally, the participant pool was limited to those available (and close enough) to make it into a facility for testing. And being in person means these tests take time (and money) to run.

As technology has moved along and internet connections and video calling have improved, a world of opportunities has opened up, allowing usability testing to be done remotely. Moderators can now set up tests remotely and ‘dial in’ to observe participants wherever they are, and potentially even run focus groups or other testing in a group format across the internet.

Pros:

- In-depth insights: gathered through back-and-forth conversation and observation of the participants.

- Follow-up questions: don’t underestimate the value of being able to ask questions throughout the test and follow up in the moment.

- Observational monitoring: noticing and noting the environment and how participants behave can give more insight into how or why they make a decision.

- Quick: remote testing can be quicker to start, recruit for, and complete than in-person testing, because you only need to set up a time to connect via the internet rather than coordinate travel times, etc.

- Location (local and/or international): testing online removes the reliance on participants being physically present, broadening your pool to participants within your country or around the globe.

Cons:

- Time-consuming: having to be present at each test takes time, as does analyzing the data and insights generated. But remember, this is quality data.

- Limited interactions: with any remote testing, there is only so much you can observe or understand through a computer screen. It can be difficult to grasp all the factors that might be influencing your participants.

What is unmoderated remote research testing? 😵💫

In its simplest sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of a facilitator guiding participants through the test, participants complete the testing by themselves and in their own time. For the most part, everything else stays the same.

Removing the moderator means there isn’t anyone to respond to queries or issues in the moment. This can delay or influence participants, or even mean they don’t complete the test or aren’t as engaged as you’d like. Unmoderated research testing suits a simple, direct type of test, with clear instructions and no room for inference.

Pros:

- Speed and turnaround: as there is no need to schedule meetings with each and every participant, unmoderated usability testing is usually much faster to initiate and complete.

- Size of study (participant numbers): unmoderated usability testing allows you to collect feedback from dozens or even hundreds of users at the same time.

- Location (local and/or international): testing online removes the reliance on participants being physically present, which broadens your participant pool. And with unmoderated testing, participants can literally be anywhere, completing the test in their own time.

Cons:

- Follow-up questions: as your participants are working on their own and in their own time, you can’t facilitate or ask questions in the moment. You may be able to ask limited follow-up questions afterward.

- Products need to be simple to use: unmoderated testing does not allow for prototypes, or any product or site that needs guidance.

- Low participant support: without a moderator, any issues with the test or the product can’t be picked up immediately, which could influence the output of the test.

When should you do which? 🤔

Moderated and unmoderated remote usability testing each have their use and place in user research. It really depends on the question you are asking and what you want to know.

Moderated testing allows you to gather in-depth insights, follow up with questions, and engage participants in the moment. The facilitator can guide participants toward what they want to know, dig deeper, or ask why at certain points. This method doesn’t need as much careful setup, as the participants aren’t on their own, and while it is all done online, it still allows connection and conversation. That makes it suited to more investigative research: looking at why users might prefer one prototype to another, or tree testing a new website navigation to understand where users get lost and asking why they made certain choices.

Unmoderated testing, on the other hand, is literally leaving the participants to it. This method needs very careful planning and upfront explanation: the test needs to be able to be set up and run without a moderator. It lends itself to getting a direct answer to a query, such as a card sort on a website to understand how your users might sort information, or a first-click test to see how and where users will click on a new website.

Wrap Up 🌯

With all of the advances in (and acceptance of) technology and video calling, our ability to expand our pool of participants across the globe, and with it our understanding of users’ experiences, keeps growing. Remote usability testing is a great option when you want to gather information from users in the real world. Depending on your query, moderated or unmoderated usability testing will suit your study. As with all user testing, being prepared and planning ahead will let you make the most of your test.


Related articles


Collating your user testing notes

It’s been a long day. Scratch that - it’s been a long week! Admit it. You loved every second of it.

Twelve-hour days, the mad scramble to get the prototype ready in time, the stakeholders poking their heads in occasionally, dealing with no-show participants, and the excitement around the opportunity to speak to real-life human beings about product or service XYZ. Your mind is exhausted but you are buzzing with ideas and processing what you just saw. You find yourself sitting in your war room with several pages of handwritten notes, and with your fellow observers you start popping open individually wrapped lollies left over from the day’s sessions. Someone starts a conversation about their favourite flavour and then the real fun begins. Sound familiar? Welcome to the post-user-testing debrief meeting.

How do you turn those scribbled notes and everything rushing through your mind into a meaningful picture of the user experience you just witnessed? And when you have that picture, what do you do next? Pull up a bean bag, grab another handful of those lollies we feed our participants, and get comfy, because I’m going to share my idiot-proof, step-by-step guide for turning your user testing notes into something useful.

Let’s talk

Get the ball rolling by holding a post-session debrief meeting while it’s all still fresh in your collective minds. This can be done as one meeting at the end of the day’s testing, or you could have multiple quick debriefs in between testing sessions. Choose whichever option works best for you, but keep in mind this needs to be done at least once, and before everyone goes home and forgets everything. Get all observers and facilitators together in any meeting space that has a wall-like surface you can stick post-its to - you can even use a window! And make sure you use real post-its - the fake ones fall off!

Mark your findings (Tagging)

Before you put sharpie to post-it, it’s essential to agree as a group on how you will tag your observations. Tagging the observations now will make the analysis work much easier and help you to spot patterns and themes. Colour coding the post-its is by far the simplest and most effective option, and how you assign the colours is entirely up to you. You could have a different colour for each participant or testing session, you could have different colours to denote participant attributes that are relevant to your study, e.g. senior staff and junior staff, or you could use different colours to denote specific testing scenarios. There are many ways you could carve this up and there’s no right or wrong way - just choose the option that suits you and your team best, because you’re the ones who have to look at it and understand it. If you only have one colour of post-it, e.g. yellow, you could colour code the pens you use to write on the notes or include some kind of symbol to help you track them.

Processing the paper (Collating)

That pile of paper is not going to process itself! Your next job as a group is to work through the task of transposing your observations to post-it notes. For now, just stick them to the wall in any old way that suits you. If you’re the organising type, you could group them by screen or testing scenario. The positioning will all change further down the process, so at this stage it’s important to just keep it simple. For issues that occur repeatedly across sessions, write each occurrence on its own post-it - the doubles will be useful to see further down the track. In addition to holding debrief meetings, you also need to round up everything that was used to capture the testing session/s. And I mean EVERYTHING.

Handwritten notes, typed notes, video footage and any audio recordings need to be reviewed just in case something was missed. Any handwritten notes should be typed up to assist you with the completion of the report. Don’t feel that you have to wait until the testing is completed before you start typing up your notes, because you will find they pile up very quickly, and if your handwriting is anything like mine… well, let’s just say my short-term memory is often required to pick up the slack, and even that has its limits. Type them up in between sessions where possible and save each session as its own document. I’ll often use the testing questions or scenario-based tasks to structure my typed notes, and I find that makes it really easy to refer back to.
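If you’re typing notes up anyway, it can help to give every observation the same shape so the tagging and grouping steps are easier later. Here’s a minimal sketch in Python of one way you might structure them; the Observation fields and the example notes are purely illustrative, not a standard format:

```python
# A minimal, hypothetical structure for typed-up session notes.
# Field names are illustrative - adapt them to your own tagging scheme.
from dataclasses import dataclass


@dataclass
class Observation:
    session: str    # which testing session it came from
    scenario: str   # the testing question or scenario-based task
    tag: str        # your agreed tagging scheme, e.g. participant attribute
    note: str       # the observation itself


notes = [
    Observation("Session 1", "Find the pricing page", "junior staff",
                "Hesitated over the top nav, scrolled to the footer instead"),
    Observation("Session 2", "Find the pricing page", "senior staff",
                "Expected 'Plans' to be labelled 'Pricing'"),
]

# Grouping by scenario surfaces issues that recur across sessions
by_scenario = {}
for obs in notes:
    by_scenario.setdefault(obs.scenario, []).append(obs)

for scenario, items in by_scenario.items():
    print(f"{scenario}: {len(items)} observations")
```

Now that you’ve processed all the observations, it’s time to start sorting them to surface behavioural patterns and make sense of it all.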

Spotting patterns and themes through affinity diagramming

Affinity diagramming is a fantastic tool for making sense of user testing observations. In fact, it’s just about my favourite way to make sense of any large mass of information. It’s an engaging and visual process that grows and evolves like a living creature taking on a life of its own. It also builds on the work you’ve just done, which is a real plus!

By now, testing is over and all of your observations should be stuck to a wall somewhere. Get everyone together again as a group, step back and take it all in. Just let it sit with you for a moment before you dive in. Just let it breathe. Have you done that? OK, now, as individuals working at the same time, start by grouping things that you think belong together. It’s important to focus on the content of the labels and try to ignore the colour-coded tagging at this stage, so if session one was blue post-its, don’t group all the blue ones together just because they’re all blue! If you get stuck, try grouping by topic, or create two groups, e.g. issues and wins, and then chunk the information up from there.

You will find that the groups will change several times over the course of the process, and that’s ok because that’s what it needs to do. While you do this, everyone else will be doing the same thing - grouping things that make sense to them. Trust me, it’s nowhere near as chaotic as it sounds! You may start working as individuals but it won’t be long before curiosity kicks in and the room is buzzing with naturally occurring conversation. Make sure you take a step back regularly and observe what everyone else is doing, and don’t be afraid to ask questions and move other people’s post-its around - no one owns it! No matter how silly something may seem, just put it there, because it can be moved again.

Have a look at where your tagged observations have ended up. Are there clusters of colour? Or is it more spread out? What that means will depend largely on how you decided to tag your findings. For example, if you assigned each testing session its own colour and you have groups with lots of different colours in them, you’ll find that the same issue was experienced by multiple people. Next, start looking at each group and see if you can break it down into smaller groups, and at the same time consider the overall picture for the bigger groups, e.g. can the wall be split into, say, three high-level groups? Remember, you can still change your groups at any time.

Thinning the herd (Merging)

Once you and your team are happy with the groups, it’s time to start condensing the size of this beast. Look for doubled-up findings and stack those post-its on top of each other to cut the groups down - just make sure you can still see how many there were. The point of merging is to condense without losing anything, so don’t remove something just because it only happened once; that one issue could be incredibly serious. Continue to evaluate and discuss as a group until you are happy. By now, clear and distinct groups of your observations should have emerged, and at a glance you should be able to identify the key findings from your study.

A catastrophe or a cosmetic flaw? (Scoring)

Scoring relates to how serious the issues are and how bad the consequences of not fixing them would be. There are arguments for and against the use of scoring, and it’s important to recognise that it is just one way to communicate your findings.

I personally rarely use scoring systems. It’s not really something I think about when I’m analysing the observations, and I rarely rank one problem or finding over another. Why? Because all data is good data and it all adds to the overall picture. I’ve always been a huge advocate for presenting the whole story, and I will never diminish the significance of a finding by boosting another. That said, I do understand the perspective of those who place metrics around their findings. Other designers have told me they feel that it allows them to quantify the seriousness of each issue and helps their client/designer/boss make decisions about what to do next.

We’ve all got our own way of doing things, so I’ll leave it up to you to choose whether or not you score the issues. If you decide to score your findings, there are a number of scoring systems you can use, and if I had to choose one, I quite like Jakob Nielsen’s methodology for the simple way it takes multiple factors into consideration. Ultimately, you should choose the one that suits your working style best.

Let’s say you did decide to score the issues. Start by writing down each key finding on its own post-it and move to a clean wall/window, leaving your affinity diagram where it is. Divide the new wall in half: one side for wins, e.g. findings that indicate things that tested well, and the other for issues. You don’t need to score the wins, but you do need to acknowledge what went well, because knowing what you’re doing well is just as important as knowing where you need to improve. As a group (wow, you must be getting sick of each other! Make sure you go out for air from time to time!), score the issues based on your chosen methodology.
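For a feel of what scoring might look like once the findings are typed up, here’s a minimal sketch applying Nielsen’s 0-4 severity scale. The Finding structure and the example data are hypothetical, and the scale labels are paraphrased from Nielsen’s methodology:

```python
# A hypothetical sketch of scoring findings on Jakob Nielsen's 0-4
# severity scale. The Finding structure and examples are illustrative.
from dataclasses import dataclass

# Severity labels paraphrased from Nielsen's rating scale
SEVERITY = {
    0: "Not a usability problem",
    1: "Cosmetic problem only",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}


@dataclass
class Finding:
    description: str
    severity: int   # 0-4, agreed on by the group
    sessions: int   # how many sessions it was observed in


findings = [
    Finding("Participants couldn't find the help button", 3, 5),
    Finding("Label wording caused brief hesitation", 1, 2),
]

# Surface the most serious, most frequently seen issues first
for f in sorted(findings, key=lambda f: (f.severity, f.sessions), reverse=True):
    print(f"{SEVERITY[f.severity]} ({f.sessions} sessions): {f.description}")
```

Once you have completed this entire process, you will have everything you need to write a kick-ass report.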

What could possibly go wrong? (and how to deal with it)

No process is perfect and there are a few potential dramas to be aware of:

People jumping into solution mode too early

In the middle of the debrief meeting, someone has an epiphany. Shouts of “We should move the help button!” or “We should make the yellow button smaller!” ring out and the meeting goes off the rails. I’m not going to point fingers and blame any particular role because we’ve all done it, but it’s important to recognise that’s not why we’re sitting here. The debrief meeting is about digesting and sharing what you and the other observers just saw. Observing and facilitating user testing is a privilege. It’s a precious thing that deserves respect, and if you jump into solution mode too soon, you may miss something. Keep the conversation on track by appointing a team member to facilitate the debrief meeting.

Storage problems

Handwritten notes taken by multiple observers over several days of testing add up to an enormous pile of paper. Not only is it a ridiculous waste of paper, but the notes have to be securely stored for three months following the release of the report. It’s not pretty. Typing them up can solve that issue, but it comes with its own set of storage-related hurdles. Just like the handwritten notes, they need to be stored securely. They don’t belong on SharePoint or in the share drive or any other shared storage environment that can be accessed by people outside your observer group. User testing notes are confidential and are not light reading for anyone and everyone, no matter how much they complain. Store any typed notes in a limited-access storage solution that only the observers can access, and if anyone who shouldn’t be reading them asks, tell them that they are confidential and the integrity of the research must be preserved and respected.

Time issues

Before the storage dramas begin, you have to actually pick through the mountain of paper, not to mention the video footage and the audio, and you have to chase up that sneaky observer who disappeared when the clock struck 5. All of this takes up a lot of time. Another time-related issue comes in the form of too much time passing between testing sessions and debrief meetings. The best way to deal with both of these issues is to be super organised and hold multiple smaller debriefs in between sessions where possible. As a group, work out your time commitments before testing begins and have a clear plan in place for when you will meet. This will prevent everything piling up and overwhelming you at the end.

Disagreements over scoring

At the end of that long day/week we’re all tired and discussions around scoring the issues can get a little heated. One person’s showstopper may be another person’s mild issue. Many of the ranking systems use words as well as numbers to measure the level of severity and it’s easy to get caught up in the meaning of the words and ultimately get sidetracked from the task at hand. Be proactive and as a group set ground rules upfront for all discussions. Determine how long you’ll spend discussing an issue and what you will do in the event that agreement cannot be reached. People want to feel heard and they want to feel like their contributions are valued. Given that we are talking about an iterative process, sometimes it’s best just to write everything down to keep people happy and merge and cull the list in the next iteration. By then they’ve likely had time to reevaluate their own thinking.

And finally...

We all have our own ways of making sense of our user testing observations and there really is no right or wrong way to go about it. The one thing I would like to reiterate is the importance of collaboration and teamwork. You cannot do this alone, so please don’t try. If you’re a UX team of one, you probably already have a trusted person that you bounce ideas off. They would be a fantastic person to do this with. How do you approach this process? What sort of challenges have you faced? Let me know in the comments below.


Different ways to test information architecture

We all know that a robust information architecture (IA) can make or break your product, and getting it right relies on robust user research, especially when it comes to creating human-centered, intuitive products that deliver outstanding user experiences.

But what are the best methods for testing your information architecture, and for making sure you’re building one that is truly based on what your users want and need?

What is user research? 🗣️🧑🏻💻

With all the will in the world, your product (or website or mobile app) may work perfectly and be as intuitive as possible. But if it is only built on information from your internal organizational perspective, it may not measure up in the eyes of your users. Often, organizations make major design decisions without fully considering their users. User research backs up decisions with data, helping to make sure that design decisions are strategic decisions.

Testing your information architecture can also help establish the structure for a better product from the ground up, and ultimately improve your product’s performance. User experience research focuses your design on understanding your users’ expectations, behaviors, needs, and motivations. It is an essential part of creating, building, and maintaining great products.

Taking the time to understand your users through research can be incredibly rewarding, yielding insights and data-backed information that can alter your product for the better. But what are the key user research methods for your information architecture? Let’s take a look.

Research methods for information architecture ⚒️

There is more than one way to test your IA. Testing with one method is good, but testing with more than one is even better. And, of course, the more often you test, especially when there are major additions or changes, the more you can tweak and update your IA to improve and delight your users’ experience.

Card Sorting 🃏

Card sorting is a user research method that allows you to discover how users understand and categorize information. It’s particularly useful when you are starting to plan your information architecture, or at any stage where you notice issues or are making changes, because it puts the power into your users’ hands by asking how they would intuitively sort the information. In a card sort, participants sort cards containing different items into labeled groups. You can use the results to figure out how to group and label the information in a way that makes the most sense to your audience.

There are a number of techniques and methods that can be applied to a card sort. Take a look here if you’d like to know more.

Card sorting has many applications. It’s as useful for figuring out how content should be grouped on a website or in an app as it is for figuring out how to arrange the items in a retail store. You can also run a card sort in person, using physical cards, or remotely with online tools such as OptimalSort.
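To give a feel for what the analysis involves, here’s a minimal sketch of one common card sort technique: counting how often participants placed each pair of cards in the same group. The cards and sorts below are made up for illustration, and a tool like OptimalSort performs this kind of analysis for you:

```python
# A minimal sketch of card sort co-occurrence analysis.
# The card names and participant sorts below are hypothetical.
from collections import Counter
from itertools import combinations

# Each participant's sort: a list of groups, each group a list of cards
sorts = [
    [["Shipping", "Returns"], ["Contact us", "FAQ"]],
    [["Shipping", "Returns", "FAQ"], ["Contact us"]],
    [["Shipping", "Returns"], ["Contact us", "FAQ"]],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Cards frequently grouped together hint at the categories users expect
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```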

Tree Testing 🌲

Taking a look at your information architecture from the other side can also be valuable. Tree testing is a usability method for evaluating the findability of topics on a product. Testing is done on a simplified text version of your site structure without the influence of navigation aids and visual design.

Tree testing tells you how easily people can find information on your product and exactly where people get lost. Your users rely on your information architecture – how you label and organize your content – to get things done.

Tree testing can answer questions like:

  • Do my labels make sense to people?
  • Is my content grouped logically to people?
  • Can people find the information they want easily and quickly? If not, what’s stopping them?

Treejack is our tree testing tool, designed to make it easy to test your information architecture. Running a tree test isn’t actually that difficult, especially if you’re using the right tool: you’ll learn how to set useful objectives, build your tree, write your tasks, recruit participants, and measure results.
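As a rough illustration of the numbers a tree test produces, here’s a minimal sketch of two of the headline metrics: task success and directness. The path data and the simple “no revisited nodes” definition of directness are assumptions made for this example; Treejack calculates these metrics for you:

```python
# A minimal sketch of tree test metrics. Paths and the correct
# destination are hypothetical; directness is simplified here to
# "the participant never revisited a node".
paths = [
    ["Home", "Products", "Pricing"],                   # direct success
    ["Home", "About", "Home", "Products", "Pricing"],  # indirect success
    ["Home", "Support", "Contact"],                    # failure
]
correct_destinations = {"Pricing"}

def is_success(path: list[str]) -> bool:
    return path[-1] in correct_destinations

def is_direct(path: list[str]) -> bool:
    return len(set(path)) == len(path)

successes = [p for p in paths if is_success(p)]
direct = [p for p in successes if is_direct(p)]
print(f"Success rate: {len(successes)}/{len(paths)}")
print(f"Directness: {len(direct)}/{len(paths)} found it without backtracking")
```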

Combining information architecture research methods 🏗

If you want a fully rounded view of your information architecture, it can be useful to combine your research methods.

Tree testing and card sorting, along with usability testing, can give you insights into your users and audience. How do they think? How do they find their way through your product? And how do they want to see things labeled, organized, and sorted? 

If you want to get fully into the comparison of tree testing and card sorting, take a look at our article here, which compares the options and explains which is best and when. 


How to convince others of the importance of UX research

There’s not much a parent won’t do to ensure their child has the best chance of succeeding in life. Unsurprisingly, things are much the same in product development. Whether it’s a designer, manager, developer or copywriter, everyone wants to see the product reach its full potential.

Key to a product’s success (even though it’s still not widely practiced) is UX research. Without research focused on learning user pain points and behaviors, development basically happens in the dark. Feeding direct insights from customers and users into the development of a product means teams can flick the light on and make more informed design decisions.

While the benefits of user research are obvious to anyone working in the field, it can be a real challenge to convince others of just how important and useful it is. We thought we’d help.

Define user research

If you want to sell the importance of UX research within your organization, you’ve got to ensure stakeholders have a clear understanding of what user research is and what they stand to gain from backing it.

In general, there are a few key things worth focusing on when you’re trying to explain the benefits of research:

  • More informed design decisions: Companies make major design decisions far too often without considering users. User research provides the data needed to make informed decisions.
  • Less uncertainty and risk: Similarly, research reduces risk and uncertainty simply by giving companies more clarity around how a particular product or service is used.
  • Retention and conversion benefits: Research means you’ll be more aligned with the needs of your customers and prospective customers.

Use the language of the people you’re trying to convince. A capable UX research practice will almost always improve key business metrics, namely sales and retention.

The early stages

When embarking on a project, book in some time early in the process to answer questions, explain your research approach and what you hope to gain from it. Here are some of the key things to go over:

  • Your objectives: What are you trying to achieve? This is a good time to cover your research questions.
  • Your research methods: Which methods will you be using to carry out your research? Cover the advantages of these methods and the information you’re likely to get from using them.
  • Constraints: Do you see any major obstacles? Any issues with resources?
  • Provide examples: Nothing shows the value of doing research quite like a case study. If you can’t find an example of research within your own organization, see what you can find online.

Involve others in your research

When trying to convince someone of the validity of what you’re doing, it’s often best to just show them. There are a couple of effective ways you can do this – at a team or individual level and at an organizational level.

We’ll explain the best way to approach this below, but there’s another important reason to bring others into your research. UX research can’t exist in a vacuum – it thrives on integration and collaboration with other teams. Importantly, this also means working with other teams to define the problems they’re trying to solve and the scope of their projects. Once you’ve got an understanding of what they’re trying to achieve, you’ll be in a better position to help them through research.

Educate others on what research is

Education sessions (lunch-and-learns) are one of the best ways to get a particular team or group together and run through the what and why of user research. You can work with them to figure out what they’d like to see from you, and how you can help each other.

Tailor what you’re saying to different teams, especially if you’re talking to people with vastly different skill sets. For example, developers and designers are likely to see entirely different value in research.

Collect user insights across the organization

Putting together a comprehensive internal repository focused specifically on user research is another excellent way to grow awareness. It can also help to quantify things that may otherwise fall by the wayside. For example, you can measure the magnitude of certain pain points or observe patterns in feature requests. Using a platform like Notion or Confluence (or even Google Drive if you don’t want a dedicated platform), log all of your study notes, insights and research information that you find useful.

Whenever someone wants to learn more about research within the organization, they’ll be able to find everything easily.

Bring stakeholders along to research sessions

Getting a stakeholder along to a research session (usability tests and user interviews are great starting points) will help to show them the value that face-to-face sessions with users can provide.

To really involve an observer in your UX research, assign them a specific role - note taker, for example. With a short briefing on best practices for note taking, they can get a feel for what it’s like to do some of the work you do.

You may also want to consider bringing anyone who’s interested along to a research session, even if they’re just there to observe.

Share your findings – consistently

Research is about more than just testing a hypothesis; it’s important to actually take your research back to the people who can action the data.

By sharing your research findings with teams and stakeholders regularly, your organization will start to build up an understanding of the value that ongoing research can provide, meaning getting approval to pursue research in future becomes easier. This is a bit of a chicken and egg situation, but it’s a practice that all researchers need to get into – especially those embedded in large teams or organizations.

Anything else you think is worth mentioning? Let us know in the comments.
