October 31, 2024

Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work,I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. Card sorts, Prototyye Test studies, etc)? Thank you! Mary"

Indeed I do! Over the years I’ve created a lot of remote research studies and engaged a lot of participants. That experience has taught me what works, what doesn’t, and what leaves me refreshing my results screen, eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who don’t fit the criteria you’re looking for, but you can’t exactly stop them from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions whose answers can confirm or exclude participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location-based question after the study. Any participant who says they live somewhere else gets excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity, as remote research limits your capacity to get those — you’re not there with them, so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.
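
If you’d rather sanity-check those exclusions outside the results screen, a few lines of scripting can flag participants whose post-study answers fall outside your target group. This is just a minimal sketch against a hypothetical CSV export: the file name and column names ("participant_id", "post_study_location") are placeholders, not Optimal Workshop’s actual export format.

```python
import csv

# Minimal sketch: flag participants whose post-study location answer falls outside
# the target area. The file name and column names ("participant_id",
# "post_study_location") are hypothetical placeholders, not a real export format.
ALLOWED_LOCATIONS = {"Wellington", "Lower Hutt", "Porirua"}

def participants_to_exclude(csv_path):
    """Return the IDs of participants who said they live outside the target area."""
    excluded = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            location = row.get("post_study_location", "").strip()
            if location and location not in ALLOWED_LOCATIONS:
                excluded.append(row["participant_id"])
    return excluded

if __name__ == "__main__":
    print(participants_to_exclude("study_results.csv"))
```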

Do a practice run

No matter how careful I am, I always miss something! A typo, a card label in the wrong case, an information architecture that wasn’t updated after a change — simple mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to completing a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data, and you won’t be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending) that were missing the damn link to the study. It happens! You’re so focused on getting the delivery right that it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it to your social channels when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet or social media site, through which you can share the link and your approach content.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and, no matter your intentions, no one wants to spend more time than they have to, even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I stick to no more than 10 questions in a remote research study, and for card sorts I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and, of course, only serve to annoy and frustrate your participants. You need to balance your need for insights against their time constraints.

As for the accompanying approach content, short and snappy equals happy! Whether it’s an email, website post, social media post, newsletter or carrier pigeon, keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer and, of course, a thank you.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed, and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need to reinvent the wheel; they usually just need a slight tweak to suit the specific study. This also gives participants a consistent experience and minimizes confusion, allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works — turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, and I use those tweaks to iterate on my template.

What are your top tips for creating and sending out remote user research studies? Comment below!

Related articles

Measuring the impact of UXR: beyond CSAT and NPS

In the rapidly evolving world of user experience research (UXR), demonstrating value and impact has become more crucial than ever. While traditional metrics like Customer Satisfaction (CSAT) scores and Net Promoter Scores (NPS) have long been the go-to measures for UX professionals, they often fall short in capturing the full scope and depth of UXR's impact. As organizations increasingly recognize the strategic importance of user-centered design, it's time to explore more comprehensive and nuanced approaches to measuring UXR's contribution.

Limitations of traditional metrics

CSAT and NPS, while valuable, have significant limitations when it comes to measuring UXR impact. These metrics provide a snapshot of user sentiment but fail to capture the direct influence of research insights on product decisions, business outcomes, or long-term user behavior. Moreover, they can be influenced by factors outside of UXR's control, such as marketing campaigns or competitor actions, making it challenging to isolate the specific impact of research efforts.

Another limitation is the lack of context these metrics provide. They don't offer insights into why users feel a certain way or how specific research-driven improvements contributed to their satisfaction. This absence of depth can lead to misinterpretation of data and missed opportunities for meaningful improvements.

Alternative measurement approaches

To overcome these limitations, UX researchers are exploring alternative approaches to measuring impact. One promising method is the use of proxy measures that more directly tie to research activities. For example, tracking the number of research-driven product improvements implemented or measuring the reduction in customer support tickets related to usability issues can provide more tangible evidence of UXR's impact.

Another approach gaining traction is the integration of qualitative data into impact measurement. By combining quantitative metrics with rich, contextual insights from user interviews and observational studies, researchers can paint a more comprehensive picture of how their work influences user behavior and product success.

Linking UXR to business outcomes

Perhaps the most powerful way to demonstrate UXR's value is by directly connecting research insights to key business outcomes. This requires a deep understanding of organizational goals and close collaboration with stakeholders across functions. For instance, if a key business objective is to increase user retention, UX researchers can focus on identifying drivers of user loyalty and track how research-driven improvements impact retention rates over time.

Risk reduction is another critical area where UXR can demonstrate significant value. By validating product concepts and designs before launch, researchers can help organizations avoid costly mistakes and reputational damage. Tracking the number of potential issues identified and resolved through research can provide a tangible measure of this impact.

Case studies of successful impact measurement

While standardized metrics for UXR impact remain elusive, some organizations have successfully implemented innovative measurement approaches. For example, one technology company developed a "research influence score" that tracks how often research insights are cited in product decision-making processes and the subsequent impact on key performance indicators.

Another case study involves a financial services firm that implemented a "research ROI calculator." This tool estimates the potential cost savings and revenue increases associated with research-driven improvements, providing a clear financial justification for UXR investments.
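
To make the idea concrete, here’s a minimal sketch of the arithmetic such a calculator might perform. The function and figures are purely illustrative (the firm’s actual tool isn’t public); it simply applies the standard ROI formula of net benefit attributable to research divided by research cost.

```python
# Illustrative sketch of a "research ROI calculator" — all figures are made up,
# and this is not any particular firm's actual tool.

def research_roi(cost_savings, revenue_increase, research_cost):
    """Standard ROI: net benefit attributable to research divided by its cost."""
    return (cost_savings + revenue_increase - research_cost) / research_cost

if __name__ == "__main__":
    # Hypothetical quarter: usability fixes cut support costs and lifted conversions.
    roi = research_roi(cost_savings=120_000, revenue_increase=80_000, research_cost=50_000)
    print(f"Estimated research ROI: {roi:.0%}")  # -> Estimated research ROI: 300%
```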

These case studies highlight the importance of tailoring measurement approaches to the specific context and goals of each organization. By thinking creatively and collaborating closely with stakeholders, UX researchers can develop meaningful ways to quantify their impact and demonstrate the strategic value of their work.

As the field of UXR continues to evolve, so too must our approaches to measuring its impact. By moving beyond traditional metrics and embracing more holistic and business-aligned measurement strategies, we can ensure that the true value of user research is recognized and leveraged to drive organizational success. The future of UXR lies not just in conducting great research, but in effectively communicating its impact and cementing its role as a critical strategic function within modern organizations.

Maximize UXR ROI with Optimal 

While innovative measurement approaches are crucial, having the right tools to conduct and analyze research efficiently is equally important for maximizing UXR's return on investment. This is where the Optimal Workshop platform comes in, offering a comprehensive solution to streamline your UXR efforts and amplify their impact.

The Optimal Platform provides a suite of user-friendly tools designed to support every stage of the research process, from participant recruitment to data analysis and insight sharing. By centralizing your research activities on a single platform, you can significantly reduce the time and resources spent on administrative tasks, allowing your team to focus on generating valuable insights.

Key benefits of using Optimal for improving UXR ROI include:

  • Faster research cycles: With automated participant management and data collection tools, you can complete studies more quickly, enabling faster iteration and decision-making.

  • Enhanced collaboration: The platform's sharing features make it easy to involve stakeholders throughout the research process, increasing buy-in and ensuring insights are actioned promptly.

  • Robust analytics: Advanced data visualization and analysis tools help you uncover deeper insights and communicate them more effectively to decision-makers.

  • Scalable research: The platform's user-friendly interface enables non-researchers to conduct basic studies, democratizing research across your organization and increasing its overall impact.

  • Comprehensive reporting: Generate professional, insightful reports that clearly demonstrate the value of your research to stakeholders at all levels.

By leveraging the Optimal Workshop platform, you’re not just improving your research processes – you’re positioning UXR as a strategic driver of business success. The platform’s capabilities align closely with the advanced measurement approaches discussed earlier, enabling you to track research influence, calculate ROI, and demonstrate tangible impact on key business outcomes.

Ready to transform how you measure and communicate the impact of your UX research? Sign up for a free trial of the Optimal platform today and experience firsthand how it can drive your UXR efforts to new heights of efficiency and effectiveness. 


Online card sorting: The comprehensive guide

When it comes to designing and testing in the world of information architecture, it’s hard to beat card sorting. As a usability testing method, card sorting is easy to set up, simple to recruit for and can supply you with a range of useful insights. But there’s a long-standing debate in the world of card sorting, and that’s whether it’s better to run card sorts in person (moderated) or remotely over the internet (unmoderated).

This article should give you some insight into the world of online card sorting. We've included an analysis of the benefits (and the downsides) as well as why people use this approach. Let's take a look!

How an online card sort works

Running a card sort remotely has quickly become a popular option simply because of how time-intensive in-person card sorting is. Instead of needing to bring your participants in for dedicated card sorting sessions, you can set up your card sort using an online tool (like our very own OptimalSort) and then wait for the results to roll in.

So what’s involved in a typical online card sort? At a very high level, here’s what’s required. We’re going to assume you’re already set up with an online card sorting tool at this point.

  1. Define the cards: Depending on what you’re testing, add the items (cards) to your study. If you were testing the navigation menu of a hotel website, your cards might be things like “Home”, “Book a room”, “Our facilities” and “Contact us”.
  2. Work out whether to run a closed or open sort: Determine whether you’ll set the groups for participants to sort cards into (closed) or leave it up to them (open). You may also opt for a mix, where you create some categories but leave the option open for participants to create their own.
  3. Recruit your participants: Whether using a participant recruitment service or by recruiting through your own channels, send out invites to your online card sort.
  4. Wait for the data: Once you’ve sent out your invites, all that’s left to do is wait for the data to come in and then analyze the results.

That’s online card sorting in a nutshell – not entirely different from running a card sort in person. If you’re interested in learning about how to interpret your card sorting results, we’ve put together this article on open and hybrid card sorts and this one on closed card sorts.

Why is online card sorting so popular?

Online card sorting has a few distinct advantages over in-person card sorting that help to make it a popular option among information architects and user researchers. There are downsides too (as there are with any remote usability testing option), but we’ll get to those in a moment.

Where remote (unmoderated) card sorting excels:

  • Time savings: Online card sorting is essentially ‘set and forget’, meaning you can set up the study, send out invites to your participants and then sit back and wait for the results to come in. In-person card sorting requires you to moderate each session and collate the data at the end.
  • Easier for participants: It’s not often that researchers are on the other side of the table, but it’s important to consider the participant’s viewpoint. It’s much easier for someone to spend 15 minutes completing your online card sort in their own time instead of trekking across town to your office for an exercise that could take well over an hour.
  • Cheaper: In a similar vein, online card sorting is much cheaper than in-person testing. While it’s true that you may still need to recruit participants, you won’t need to reimburse people for travel expenses.
  • Analytics: Last but certainly not least, online card sorting tools (like OptimalSort) can take much of the analytical burden off you by transforming your data into actionable insights. Other tools will differ, but OptimalSort can generate a similarity matrix, dendrograms and a participant-centric analysis from your study data. A simple version of that similarity calculation is sketched just below this list.
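
Here’s a minimal sketch of the co-occurrence idea behind a similarity matrix: the score for any pair of cards is the share of participants who placed both cards in the same group. The data is invented and tiny, and this isn’t OptimalSort’s actual implementation — just the common calculation that sits behind such matrices.

```python
from itertools import combinations
from collections import Counter

# Minimal sketch of a card-sort similarity score: for each pair of cards, the share
# of participants who placed both cards in the same group. The sorts below are made
# up and far smaller than a real study.
sorts = [
    {"Book a room": "Booking", "Check availability": "Booking", "Our facilities": "About us"},
    {"Book a room": "Reservations", "Check availability": "Reservations", "Our facilities": "Hotel info"},
    {"Book a room": "Booking", "Check availability": "Hotel info", "Our facilities": "Hotel info"},
]

cards = sorted({card for sort in sorts for card in sort})
pair_counts = Counter()
for sort in sorts:
    for a, b in combinations(cards, 2):
        if sort.get(a) is not None and sort.get(a) == sort.get(b):
            pair_counts[(a, b)] += 1

# Pairs that were never grouped together simply don't appear in the counter.
for (a, b), count in pair_counts.items():
    print(f"{a} / {b}: {count / len(sorts):.0%} of participants grouped these together")
```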

Where in-person (moderated) card sorting excels:

  • Qualitative insights: For all intents and purposes, online card sorting is the most effective way to run a card sort. It’s cheaper, faster and easier for you. But, there’s one area where in-person card sorting excels, and that’s qualitative feedback. When you’re sitting directly across the table from your participant you’re far more likely to learn about the why as well as the what. You can ask participants directly why they grouped certain cards together.

Online card sorting: Participant numbers

So that’s online card sorting in a nutshell, as well as some of the reasons why you should actually use this method. But what about participant numbers? Well, there’s no one right answer, but the general rule is that you need more people than you’d typically bring in for a usability test.

This all comes down to the fact that card sorting is what’s known as a generative method, whereas usability testing is an evaluation method. Here’s a little breakdown of what we mean by these terms:

Generative method: There’s no design, and you need to get a sense of how people think about the problem you’re trying to solve. For example, how people would arrange the items that need to go into your website’s navigation. As Nielsen Norman Group explains: “There is great variability in different people's mental models and in the vocabulary they use to describe the same concepts. We must collect data from a fair number of users before we can achieve a stable picture of the users' preferred structure and determine how to accommodate differences among users”.

Evaluation method: There’s already a design, and you basically need to work out whether it’s a good fit for your users. Any major problems are likely to crop up even after testing 5 or so users. For example, you have a wireframe of your website and need to identify any major usability issues.

Basically, because you’ll typically be using card sorting to generate a new design or structure from nothing, you need to sample a larger number of people; Nielsen Norman Group suggests around 15 participants before card sort results stabilize, compared with the five or so commonly used for usability tests. If you were testing an existing website structure, you could get by with a smaller group.

Where to from here?

Following on from our discussion of generative versus evaluation methods, you’ve really got a choice of two paths from here if you’re in the midst of a project. For those developing new structures, the best course of action is likely to be a card sort. However, if you’ve got an existing structure that you need to test in order to identify usability problems and possible areas of improvement, you’re likely best to run a tree test. We’ve got some useful information on getting started with a tree test right here on the blog.


Card Sorting vs Tree Testing: which is best?

A great information architecture (IA) is essential for a great user experience (UX). And testing your website or app’s information architecture is necessary to get it right.

Card sorting and tree testing are the very best UX research methods for exactly this. But the big question is always: which one should you use, and when? Very possibly you need both. Let’s find out with this quick summary.

What are card sorting and tree testing? 🧐

Card sorting is used to test the information architecture of a website or app. Participants group individual labels (cards) into categories according to criteria that make the best sense to them. Each label represents an item that needs to be categorized. The results provide deep insights to guide the decisions needed to create intuitive navigation, clear labeling and content that is organized in a user-friendly way.

Tree testing is also used to test the information architecture of a website or app. In a tree test, participants are presented with a site structure and a set of tasks to complete. The goal for participants is to find their way through the site and complete each task. The test shows whether the structure of your website corresponds to what users expect and how easily (or not) they can navigate it and complete their tasks.
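
As a rough illustration of what a tree test measures, here’s a minimal sketch that scores hypothetical participant paths against a correct destination for a task. The tree, task and paths are invented for the example, and real tools such as Treejack capture much more (first clicks, directness, time taken); this only shows the basic success-rate check.

```python
# Minimal sketch of the core tree-test measurement: did a participant's path through
# the tree end at a correct destination for the task? The task, destinations and
# paths below are invented for illustration.
correct_destinations = {"Book a room": {"Home > Reservations > Book a room"}}

participant_paths = {
    "Book a room": [
        "Home > Reservations > Book a room",  # success
        "Home > Our facilities > Rooms",      # failure
        "Home > Reservations > Book a room",  # success
    ],
}

for task, paths in participant_paths.items():
    successes = sum(path in correct_destinations[task] for path in paths)
    print(f"Task '{task}': {successes}/{len(paths)} participants succeeded "
          f"({successes / len(paths):.0%} success rate)")
```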

What are the differences? 🂱 👉🌴

Card sorting is a UX research method that helps you gather insights about your content categorization. It focuses on creating an information architecture that responds intuitively to users’ expectations: which items go best together, the best options for labeling, and what categories users expect to find in each menu.

Doing a simple card sort can give you all those pieces of information and so much more. You start understanding your users’ thoughts and expectations, gathering enough insights and information to develop several information architecture options.

Tree testing is a UX research method that is almost a card sort in reverse. It’s used to evaluate an information architecture and simply allows you to see what works and what doesn’t.

Using tree testing will provide insights into whether your information architecture is intuitive to navigate, whether the labels are easy to follow and, ultimately, whether your items are categorized in places that make sense. Conversely, it will also show where your users get lost, and how.

What method should you use? 🤷

If you’ve got this far, fine-tuning your information architecture should be a priority. An intuitive IA is an integral component of a user-friendly product: one that is usable and offers an experience users will come back for.

If you’re still wondering which method you should use, tree testing or card sorting, the answer is pretty simple: use both.

Just like many great things, these methods work best together. They complement each other, giving you much deeper insights and a more rounded view of how your IA performs and where to make improvements than either method used on its own. We cover more reasons why card sorting loves tree testing in our article which dives deeper into why to use both.

Ok, I'm using both, but which comes first? 🐓🥚

Wanting full, rounded insights into your information architecture is great. And we know that tree testing and card sorting work well together. But is there an order you should do the testing in? It really depends on the particular context of your research - what you’re trying to achieve and your situation. 

Tree testing is a great tool to use when you have a product that is already up and running. By running a tree test first, you can quickly establish where there may be issues or snags: places where users get caught and need help. From there, you can try to solve potential issues by moving on to a card sort.

Card sorting is a super useful method that can be used at any stage of the design process, from planning to development and beyond, as long as there is an IA structure that can be tested. Testing against an already existing website navigation can be informative, and testing a reorganization of items (new or existing) can ensure the organization aligns with what users expect.

However, when you decide to use both methods in your research, tree testing should, where possible, come before card sorting. If you want a little more on the issue, have a read of our article here.

Check out our OptimalSort and Treejack tools: we can help you with your research and the best way forward, wherever you might be in the process.
