October 31, 2024

Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work,I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. Card sorts, Prototyye Test studies, etc)? Thank you! Mary"

Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me what works, what doesn’t, and what leaves me refreshing my results screen, eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who may not fit the criteria you’re looking for, but you can’t exactly stop people from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions whose answers can confirm or exclude participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location-based question after the study. Any participant who says they live somewhere else gets excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity, because remote research limits your capacity to get those: you’re not there with them, so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.
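That toggle handles exclusions inside the app. If you prefer to work from an exported results file, the same filter takes only a few lines. Here’s a minimal sketch in Python, assuming a CSV export; the file and column names are hypothetical, so match them to your actual data:

```python
import pandas as pd

# Hypothetical results export: one row per participant, with post-study
# answers as columns. The file and column names are assumptions; check
# them against your actual export.
results = pd.read_csv("study_results.csv")

# Keep participants whose post-study location answer matches the target
# area, ignoring case and stray whitespace.
target_area = "Springfield"
location = results["post_study_location"].fillna("").str.strip().str.casefold()
included = results[location == target_area.casefold()]

print(f"Keeping {len(included)} of {len(results)} participants.")
included.to_csv("study_results_filtered.csv", index=False)
```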

Do a practice run

No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to update the information architecture after a change was made: stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to complete a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting the delivery right that it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before; the sketch below automates both of these link checks)
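Those last two checks are easy to automate before you hit send. Here’s a minimal sketch in Python, assuming your draft invitation is saved as a plain-text file; the file name is a placeholder:

```python
import re
import sys
import urllib.request

# Read the draft invitation (the file name is a placeholder).
with open("invitation_draft.txt", encoding="utf-8") as f:
    draft = f.read()

# Check 1: is there a link in the draft at all?
links = [link.rstrip(".,;:!)") for link in re.findall(r"https?://\S+", draft)]
if not links:
    sys.exit("No study link found in the draft. Add it before sending!")

# Check 2: does each link actually resolve?
for link in links:
    try:
        with urllib.request.urlopen(link, timeout=10) as response:
            print(f"{link} -> HTTP {response.status}")
    except Exception as error:
        print(f"{link} looks broken: {error}")
```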

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet or social media site, where you can share the link and approach content.

Keep it brief

And by that I mean both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything, and no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I always stick to no more than 10 questions in a remote research study, and for card sorts I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and only serve to annoy and frustrate your participants. You need to balance your need for insights with your participants’ time constraints.

As for the accompanying approach content, short and snappy equals happy! In the case of an email, website, social media post, newsletter, carrier pigeon, etc., keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer and, of course, a thank you.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed, and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need to reinvent the wheel; they usually just need a slight tweak to suit the specific study. This also provides participants with a consistent experience and minimizes confusion, allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works, turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use those tweaks to iterate on my template.

What are your top tips for creating and sending out remote user research studies? Comment below!


Related articles


Event Recap: Measuring the Value of UX Research at UXDX

Last week Optimal Workshop was delighted to sponsor UXDX USA 2024 in New York. The User Experience event brings together Product, Design, UX, CX, and Engineering professionals, and our team had an amazing time meeting with customers, industry experts, and colleagues throughout the conference. This year, we also had the privilege of sharing some of our industry expertise by running an interactive forum on “Measuring the Value of UX Research” - a topic very close to our hearts.

Our forum, hosted by Optimal Workshop CEO Alex Burke and Product Lead Ella Fielding, was focused on exploring the value of User Experience Research (UXR) from both an industry-wide perspective and within the diverse ecosystem of individual companies and teams conducting this type of research today.

The session brought together a global mix of UX professionals for a rich discussion on measuring and demonstrating the effectiveness of UXR, and on the challenges facing organizations trying to tie UXR to tangible business value today.

The main topics for discussion were:

  • Metrics that Matter: How do you measure UXR's impact on sales, customer satisfaction, and design influence?
  • Challenges & Strategies: What are the roadblocks to measuring UXR impact, and how can we overcome them?
  • Beyond ROI: UXR's value beyond just financial metrics

Some of the key takeaways from our discussions during the session were: 

  1. The current state of UX maturity and value
    • Many UX teams don’t measure the impact of UXR on core business metrics; more attendees were not measuring the impact of their work than were.
    • Alex and Ella discussed with attendees the current state of UX research maturity and the ability to prove value across the organizations represented in the room. Most organizations were still early in their UX research maturity, with only 5% considering themselves advanced in having research culturally embedded.
  2. Defining and proving the value of UX research
    • The industry doesn’t have clear alignment on what good measurement looks like. Many teams don’t know how to accurately measure UXR impact or don’t have the tools or platforms to measure it, which are core roadblocks to measuring UXR’s impact.
    • Alex and Ella discussed challenges in defining and proving the value of UX research, with the most commonly cited values being getting closer to customers, innovating faster, de-risking product decisions, and saving time and money. However, the value of research is hard to quantify compared to other product metrics like lines of code or features shipped.
  3. Measuring and advocating for UX research
    • Where teams are measuring UXR today, there is a strong bias toward customer feedback, but little ability or understanding of how to measure impact on business metrics like revenue.
    • The most commonly used metrics for measuring UXR are quantitative and qualitative feedback from customers, as opposed to internal metrics like stakeholder involvement or tying UXR to business performance metrics (including financial performance).
    • Attendees felt that in organizations where research is more embedded, researchers spend significant time advocating for research and proving its value to stakeholders rather than just conducting studies. This included tactics like research repositories and pointing to past study impacts, as well as ongoing battles to shape decision-making processes.
    • One of our attendees highlighted that engaging stakeholders in defining key research metrics before running research was key to proving value internally.
  4. Relating user research to financial impact
    • Alex and Ella asked the audience if anyone had examples of demonstrating the financial impact of research to justify investment in the team, and we got some excellent examples proving that there are tangible ways to tie research outcomes to core business metrics, including:
    • Calculating time savings for employees from internal tools as a financial impact metric.
    • Measuring a reduction in calls to service desks as a way to quantify financial savings from research.
  5. Most attendees recognize the value in embedding UXR more deeply at all levels of their organization, but feel like they’re not succeeding at this today.
    • Most attendees feel that UXR is not fully embedded in their organization or culture, but that if it were, they would be more successful in proving its overall value.
    • Stakeholder buy-in and engagement with UXR, particularly from senior leadership, varied enormously across organizations and wasn’t regularly measured as an indicator of UXR value.
    • In organizations where research was more successfully embedded, researchers had to spend significant time and effort building relationships with internal stakeholders before and after running studies. This took time and effort away from actual research, but ended up making the research more valuable to the business in the long run.

With the large range of UX maturity and the democratization of research across teams, we know there’s a lot of opportunity for our customers to improve their ability to tie user research to tangible business outcomes and embed UX more deeply at all levels of their organizations. To help fill this gap, Optimal Workshop is currently running a large research project on Measuring the Value of UX, which will be released in a few weeks.

Keep up to date with the latest news and events by following us on LinkedIn.


Card Sorting vs Tree Testing: which is best?

A great information architecture (IA) is essential for a great user experience (UX). And testing your website or app’s information architecture is necessary to get it right.

Card sorting and tree testing are the very best UX research methods for exactly this. But the big question is always: which one should you use, and when? Very possibly you need both. Let’s find out with this quick summary.

What are card sorting and tree testing? 🧐

Card sorting is used to test the information architecture of a website or app. Participants group individual labels (cards) into categories according to the criteria that make the best sense to them. Each label represents an item that needs to be categorized. The results provide deep insights to guide the decisions needed to create intuitive navigation, comprehensive labeling and content that is organized in a user-friendly way.

Tree testing is also used to test the information architecture of a website or app. In a tree test, participants are presented with a site structure and a set of tasks to complete. The goal for participants is to find their way through the site and complete each task. The test shows whether the structure of your website corresponds to what users expect and how easily (or not) they can navigate and complete their tasks.

What are the differences? 🂱 👉🌴

Card sorting is a UX research method that helps you gather insights about your content categorization. It focuses on creating an information architecture that responds intuitively to users’ expectations: which items go best together, the best options for labeling, and what categories users expect to find in each menu.

Doing a simple card sort can give you all those pieces of information and so much more. You start understanding your users’ thoughts and expectations, and you gather enough insight to develop several information architecture options.
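Tools like OptimalSort run this analysis for you, but the core idea is easy to see in code: count how often participants place the same two cards in the same group. Here’s a minimal sketch in Python with made-up data; real exports will be structured differently:

```python
from collections import Counter
from itertools import combinations

# Made-up card sort results: for each participant, the groups they created.
sorts = [
    [{"Shipping", "Returns"}, {"Contact us", "FAQ"}],  # participant 1
    [{"Shipping", "Returns", "FAQ"}, {"Contact us"}],  # participant 2
    [{"Shipping", "Returns"}, {"Contact us", "FAQ"}],  # participant 3
]

# Count how often each pair of cards lands in the same group.
pair_counts = Counter()
for groups in sorts:
    for group in groups:
        pair_counts.update(combinations(sorted(group), 2))

# Pairs with the highest agreement suggest candidate categories.
for (card_a, card_b), count in pair_counts.most_common():
    print(f"{card_a} + {card_b}: grouped together by {count} of {len(sorts)} participants")
```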

Tree testing is a UX research method that is almost a card sort in reverse. Tree testing is used to evaluate an information architecture structure and simply allows you to see what works and what doesn’t. 

Tree testing will show you whether your information architecture is intuitive to navigate, whether the labels are easy to follow and, ultimately, whether your items are categorized in places that make sense. Conversely, it will also show where your users get lost, and how.
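Treejack reports these numbers for you, but the underlying arithmetic is simple: for each task, how many participants ended at the correct node, and how many got there without backtracking. Here’s a minimal sketch with made-up data; the results structure is hypothetical:

```python
# Made-up tree test results for one task: where each participant ended up,
# and whether they backtracked along the way. The structure is hypothetical.
correct_node = "Home > Support > Returns"
attempts = [
    {"destination": "Home > Support > Returns", "backtracked": False},
    {"destination": "Home > Shop > Shipping", "backtracked": True},
    {"destination": "Home > Support > Returns", "backtracked": True},
]

successes = sum(a["destination"] == correct_node for a in attempts)
direct = sum(a["destination"] == correct_node and not a["backtracked"]
             for a in attempts)

total = len(attempts)
print(f"Success rate: {successes}/{total} ({successes / total:.0%})")
print(f"Directness:   {direct}/{total} ({direct / total:.0%})")
```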

What method should you use? 🤷

If you’ve got this far, fine-tuning your information architecture should be a priority. An intuitive IA is an integral component of a user-friendly product: one that is usable and offers an experience users will come back for.

If you’re still wondering which method you should use - tree testing or card sorting - the answer is pretty simple: use both.

Just like many great things, these methods work best together. They complement each other, giving you much deeper insights and a more rounded view of how your IA performs and where to make improvements than either method used separately. We cover more reasons why card sorting loves tree testing in our article which dives deeper into why to use both.

Ok, I'm using both, but which comes first? 🐓🥚

Wanting full, rounded insights into your information architecture is great. And we know that tree testing and card sorting work well together. But is there an order you should do the testing in? It really depends on the particular context of your research - what you’re trying to achieve and your situation. 

Tree testing is a great tool to use when you have a product that is already up and running. By running a tree test first, you can quickly establish where there may be issues or snags: places where users get caught and need help. From there you can try to solve potential issues by moving on to a card sort.

Card sorting is a super useful method that can be used at any stage of the design process, from planning to development and beyond, as long as there is an IA structure to test. Testing against an already existing website navigation can be informative, and testing a reorganization of items (new or existing) can ensure the organization aligns with what users expect.

However, when you decide to implement both methods in your research, where possible, tree testing should come before card sorting. If you want a little more on the issue, have a read of our article here.

Check out our OptimalSort and Treejack tools - we can help you with your research and the best way forward, wherever you might be in the process.

Learn more
1 min read

Workspaces delivers new privacy controls and improved collaboration

Improved organization, privacy controls, and more with new Workspaces 🚀

One of our key priorities in 2024 is making it easier for large organizations to manage teams in Optimal Workshop and collaborate more effectively on delivering optimal digital experiences. Workspaces goes live this week: it replaces teams, and introduces projects and folders for improved organization and privacy controls. This latest release also lays the foundations for more control over managing users, licenses and user roles in the app in the near future.

More control with project privacy 🔒

Private projects give you greater flexibility over who can see what in your workspace, with the ability to make projects public or private and to manage who can access each project. Find out more about how to set up private projects in this help article.

What changes for Enterprise customers? 😅

  • The teams you have set up today will remain the same; they are simply renamed workspaces.
  • Studies will be moved to a 'Default project' within the new workspace; from there you can decide how you would like to organize your studies and who can access them.
  • You can create new projects, move studies into them, and use the new privacy features to control who has access to studies, or leave them as public access.
  • Optimal Workshop is here to help: if you would like to review your account structure and make changes, please reach out to your Customer Success Manager.


What changes for Professional and Team customers? 😨

Customers on a Professional or Team plan will notice that the studies tab is now called Workspace. We have introduced another layer of organization called projects, and there is a new-look sidebar on the left for creating projects, folders and studies.

What's next for Workspaces? 🔮

This new release is an essential step towards improving how we manage users, licenses, and different role types in Optimal Workshop. We hope to deliver more updates, such as the ability to move studies between workspaces, in the near future. If you have any feedback or ideas you want to share on workspaces or Optimal Workshop, please email product@optimalworkshop.com; we'd love to hear from you.
