June 6, 2024
1 min read

Event Recap: Measuring the Value of UX Research at UXDX

Last week Optimal Workshop was delighted to sponsor UXDX USA 2024 in New York. The User Experience event brings together Product, Design, UX, CX, and Engineering professionals, and our team had an amazing time meeting with customers, industry experts, and colleagues throughout the conference. This year, we also had the privilege of sharing some of our industry expertise by running an interactive forum on “Measuring the Value of UX Research” - a topic very close to our hearts.

Our forum, hosted by Optimal Workshop CEO Alex Burke and Product Lead Ella Fielding, was focused on exploring the value of User Experience Research (UXR) from both an industry-wide perspective and within the diverse ecosystem of individual companies and teams conducting this type of research today.

The session brought together a global mix of UX professionals for a rich discussion on measuring and demonstrating the effectiveness of UXR, and on the challenges facing organizations trying to tie UXR to tangible business value today.

The main topics for discussion were: 

  • Metrics that Matter: How do you measure UXR's impact on sales, customer satisfaction, and design influence?
  • Challenges & Strategies: What are the roadblocks to measuring UXR impact, and how can we overcome them?
  • Beyond ROI: UXR's value beyond financial metrics

Some of the key takeaways from our discussions during the session were: 

  1. The current state of UX maturity and value
    • Many UX teams don’t measure the impact of UXR on core business metrics; in the room, attendees who are not measuring the impact of their work outnumbered those who are. 
    • Alex & Ella discussed with the attendees the current state of UX research maturity and the ability to prove value across different organizations represented in the room. Most organizations were still early in their UX research maturity with only 5% considering themselves advanced in having research culturally embedded.
  2. Defining and proving the value of UX research
    • The industry doesn’t have clear alignment or understanding of what good measurement looks like. Many teams don’t know how to accurately measure UXR impact or don’t have the tools or platforms to measure it, which serve as core roadblocks for measuring UXR’s impact. 
    • Alex and Ella discussed challenges in defining and proving the value of UX research, with common values being getting closer to customers, innovating faster, de-risking product decisions, and saving time and money. However, the value of research is hard to quantify compared to other product metrics like lines of code or features shipped.
  3. Measuring and advocating for UX research
    • When teams are measuring UXR today there is a strong bias for customer feedback, but little ability or understanding about how to measure impact on business metrics like revenue. 
    • The most commonly used metrics for measuring UXR are quantitative and qualitative feedback from customers, as opposed to internal metrics like stakeholder involvement or tying UXR to business performance metrics (including financial performance). 
    • Attendees felt that in organizations where research is more embedded, researchers spend significant time advocating for research and proving its value to stakeholders rather than just conducting studies. This included tactics like research repositories and pointing to past study impacts as well as ongoing battles to shape decision making processes. 
    • One of our attendees highlighted that engaging stakeholders in the process of defining key research metrics prior to running research was a key for them in proving value internally. 
  4. Relating user research to financial impact
    • Alex and Ella asked the audience if anyone had examples of demonstrating the financial impact of research to justify investment in the team, and we got some excellent examples from the audience proving that there are tangible ways to tie research outcomes to core business metrics, including:
    • Calculating time savings for employees from internal tools as a financial impact metric. 
    • Measuring a reduction in calls to service desks as a way to quantify financial savings from research (a simple worked sketch of both calculations follows this list).
  5. Most attendees recognize the value in embedding UXR more deeply in all levels of their organization - but feel like they’re not succeeding at this today. 
    • Most attendees feel that UXR is not fully embedded in their organization or culture, but that if it was, they would be more successful in proving its overall value.
    • Stakeholder buy-in and engagement with UXR, particularly from senior leadership, varied enormously across organizations, and wasn’t regularly measured as an indicator of UXR value.
    • In organizations where research was more successfully embedded, researchers had to spend significant time and effort building relationships with internal stakeholders before and after running studies. This took time and effort away from actual research, but ended up making the research more valuable to the business in the long run. 
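To make those two financial-impact examples concrete, here is a minimal worked sketch of how the calculations might look. Every figure and variable name below is a hypothetical placeholder for illustration, not a number shared during the session.

    # Hypothetical worked example of the two financial-impact calculations above.
    # All figures are placeholders; substitute your own organization's numbers.

    # 1. Time savings from a research-informed internal tool, as an annual dollar figure.
    employees_affected = 500           # staff who use the internal tool (assumed)
    minutes_saved_per_week = 15        # time saved per employee per week (assumed)
    loaded_hourly_cost = 60.0          # fully loaded cost of one hour of staff time (assumed)

    annual_time_savings = (
        employees_affected * (minutes_saved_per_week / 60) * loaded_hourly_cost * 52
    )

    # 2. Reduction in service-desk calls after a research-driven redesign.
    calls_avoided_per_month = 1200     # drop in call volume attributed to the change (assumed)
    cost_per_call = 8.50               # average cost of handling one call (assumed)

    annual_call_savings = calls_avoided_per_month * cost_per_call * 12

    print(f"Time savings:    ${annual_time_savings:,.0f} per year")
    print(f"Call deflection: ${annual_call_savings:,.0f} per year")

Even rough, clearly labelled estimates like these give stakeholders a financial frame for research outcomes, which is the framing many attendees said they struggle to provide when relying on customer feedback alone.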

With the large range of UX maturity and the democratization of research across teams, we know there’s a lot of opportunity for our customers to improve their ability to tie their user research to tangible business outcomes and embed UX more deeply in all levels of their organizations. To help fill this gap, Optimal Workshop is currently running a large research project on Measuring the Value of UX which will be released in a few weeks.

Keep up to date with the latest news and events by following us on LinkedIn.

Author: Amberlie Denny

Related articles


Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work,I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. Card sorts, Prototyye Test studies, etc)? Thank you! Mary"

Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me a lot about what works, what doesn’t and what leaves me refreshing my results screen eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who may not fit the criteria you’re looking for, but you can’t exactly stop them from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use the post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions to which the answers might confirm or exclude specific participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location based question after the study. Any participant who says they live somewhere else is getting excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity as remote research limits your capacity to get those — you’re not there with them so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.

Do a practice run

No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to update a new version of an information architecture after a change was made — stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to complete a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting that delivery right that it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel such as an all-staff newsletter, intranet or social media site through which you can share the link and approach content.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I always stick to no more than 10 questions in a remote research study and for card sorts, I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and of course only serve to annoy and frustrate your participants. You need to ensure that you’re balancing your need to gain insights with their time constraints.

As for the accompanying approach content, short and snappy equals happy! In the case of an email, website, other social media post, newsletter, carrier pigeon etc, keep your approach spiel to no more than a paragraph. Use an audience appropriate tone and stick to the basics such as: a high level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer and of course don’t forget to thank them.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need for wheel reinvention and it usually just needs a slight tweak to suit the specific study. This also helps provide participants with a consistent experience and minimizes confusion allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works — turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use them to iterate my template.

What are your top tips for creating and sending out remote user research studies? Comment below!


Product Roadmap Update

At Optimal Workshop, we're dedicated to building the best user research platform to empower you with the tools to better understand your customers and create intuitive digital experiences. We're thrilled to announce some game-changing updates and new products that are on the horizon to help elevate the way you gather insights and keep customers at the heart of everything you do. 

What’s new…

Integration with Figma 🚀

Last month, we joined forces with design powerhouse Figma to launch our integration. You can import images from Figma into Chalkmark (our click-testing tool) in just a few clicks, streamlining your workflows and getting the insights to make decisions based on data, not hunches and opinions.

What’s coming next…

Session Replays 🧑‍💻

With session replay you can focus on other tasks while Optimal Workshop automatically captures card sort sessions for you to watch in your own time. Gain valuable insights into how participants engage with and interpret a card sort without the hassle of running moderated sessions. The first iteration of session replays captures the study interactions and will not include audio or face recording, but this is something we are exploring for future iterations. Session replays will be available in tree testing and click-testing later in 2024.

Reframer Transcripts 🔍

Say goodbye to juggling note-taking and hello to more efficient ways of working with Transcripts! We're continuing to add more capability to Reframer, our qualitative research tool, which now includes the importing of interview transcripts. Save time and reduce human errors and oversights by importing transcripts, tagging and analyzing observations all within Reframer. We’re committed to building on transcripts with video and audio transcription capability in the future, and we’ll keep you in the loop on when to expect those releases.

Prototype testing 🧪

The team is fizzing to be working on a new Prototype testing product designed to expand your research methods and help you test prototypes easily from the Optimal Workshop platform. Testing prototypes early and often is an important step in the design process, saving you time and money before you invest too heavily in the build. We are working with customers on delivering the first iteration of this exciting new product. Stay tuned for Prototypes coming in the second quarter of 2024.

Workspaces 🎉

Making Optimal Workshop easier for large organizations to manage teams and collaborate more effectively on projects is a big focus for 2024. Workspaces are the first step towards empowering organizations to better manage multiple teams and their projects. Projects will allow greater flexibility over who can see what, encouraging working in the open and collaboration, alongside the ability to make projects private. The privacy feature is available on Enterprise plans.

Questions upgrade❓

Our survey product Questions is in for a glow up in 2024 💅. The team is enjoying working with customers, collecting and reviewing feedback on how to improve Questions, and will share more on this in the coming months.

Help us build a better Optimal Workshop

We are looking for new customers to join our research panel to help influence product development. From time to time, you’ll be invited to join us for interviews or surveys, and you’ll be rewarded for your time with a thank-you gift. If you’d like to join the panel, email product@optimalworkshop.com.


UX research methods for each product phase

What is UX research? 🤔

User experience (UX) research, or user research as it’s commonly referred to, is an important part of the product design process. Primarily, UX research involves using different research methods to gather information about how your users interact with your product. It is an essential part of developing, building and launching a product that truly meets the requirements of your users. 

UX research is essential at all stages of a product's life cycle:

  1. Planning
  2. Building
  3. Introduction
  4. Growth & Maturity

While there is no single best time to conduct UX research, it is best practice to continuously gather information throughout the lifetime of your product. The good news is that many UX research methods do not fit just one phase and can (and should) be used repeatedly. After all, there are always new pieces of functionality to test and new insights to discover. Below, we introduce you to best-practice UX research methods for each lifecycle phase of your product.

1. Product planning phase 🗓️

While the planning phase is about creating a product that fits your organization's needs and fills a gap in the market, it’s also about meeting the needs, desires and requirements of your users. Through UX research you’ll learn which features are necessary to stay aligned with your users. And of course, user research lets you test your UX design before you build, saving you time and money.

Qualitative Research Methods

Usability Testing - Observational

One of the best ways to learn about your users and how they interact with your product is to observe them in their own environment. Watch how they accomplish tasks, the order they do things, what frustrates them, and what makes the task easier and/or more enjoyable for them. The data can be collated to inform the usability of your product, improve intuitive design, and reveal what resonates with users.

Competitive Analysis

Reviewing products already in the market can be a great start to the planning process. Why are your competitors’ products successful, and how well do they perform for users? Learn from their successes and, even better, build on the areas where they may not be performing well to find your niche in the market.

Quantitative Research Methods

Surveys and Questionnaires

Surveys are useful for collecting feedback or understanding attitudes. You can use the learnings from your survey of a subset of users to draw conclusions about a larger population of users.

There are two types of survey questions:

Closed questions are designed to capture quantitative information. Instead of asking users to write out answers, these questions often use multi-choice answers.

Open questions are designed to capture qualitative information such as motivations and context.  Typically, these questions require users to write out an answer in a text field.

2. Product building phase 🧱

Once you've completed your product planning research, you’re ready to begin the build phase for your product. User research studies undertaken during the build phase enable you to validate the UX team’s deliverables before investing in the technical development.

Qualitative Research Methods

Focus groups

Focus groups generally involve 5-10 participants and include demographically similar individuals. The study is set up so that members of the group can interact with one another, and can be carried out in person or remotely.


Besides learning about the participants’ impressions and perceptions of your product, focus group findings also include what users believe to be a product’s most important features, problems they might encounter while using the product, as well as their experiences with other products, both good and bad.

Quantitative Research Methods

Card sorting gives insight into how users think. Tools like card sorting reveal where your users expect to find certain information or complete specific tasks. This is especially useful for products with complex or multiple navigations and contributes to the creation of an intuitive information architecture and user experience.

Tree testing gives insight into where users expect to find things and where they’re getting lost within your product. Tools like tree testing help you test your information architecture.

Card sorting and tree testing are often used together. Depending on the purpose of your research and where you are at with your product, they can provide a fully rounded view of your information architecture.
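To illustrate the kind of analysis that sits behind a card sort, here is a minimal sketch that counts how often participants grouped pairs of cards together, which is the idea behind the similarity matrices card sorting tools typically produce. The data format and card names are invented for the example and are not how Optimal Workshop stores results.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical open card sort results: one dict per participant, mapping the
    # participant's own group label to the cards they placed in that group.
    sorts = [
        {"Billing": ["Invoices", "Payment methods"], "Account": ["Profile", "Password"]},
        {"Money": ["Invoices", "Payment methods", "Profile"], "Security": ["Password"]},
        {"My account": ["Profile", "Password", "Payment methods"], "Docs": ["Invoices"]},
    ]

    # Count how often each pair of cards ends up in the same group.
    pair_counts = defaultdict(int)
    for sort in sorts:
        for cards in sort.values():
            for a, b in combinations(sorted(cards), 2):
                pair_counts[(a, b)] += 1

    # Pairs grouped together by most participants are strong candidates to sit
    # together in the information architecture.
    for (a, b), count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
        print(f"{a} + {b}: grouped together by {count / len(sorts):.0%} of participants")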

3. Product introduction phase 📦

You’ve launched your product, wahoo! And you’re ready for your first real life, real time users. Now it’s time to optimize your product experience. To do this, you’ll need to understand how your new users actually use your product.

Qualitative Research Methods

Usability testing involves testing a product with users. Typically it involves observing users as they try to follow and complete a series of tasks. As a result you can evaluate if the design is intuitive and if there are any usability problems.

User Interviews - A user interview is designed to get a deeper understanding of a particular topic. Unlike a usability test, where you’re more likely to be focused on how people use your product, a user interview is a guided conversation aimed at better understanding your users. This means you’ll be capturing details like their background, pain points, goals and motivations.

Quantitative Research Methods

A/B Testing is a way to compare two versions of a design in order to work out which is more effective. It’s typically used to test two versions of the same webpage, for example, using a different headline, image or call to action to see which one converts more effectively. This method offers a way to validate smaller design choices where you might not have the data to make an informed decision, like the color of a button or the layout of a particular image.
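As a sketch of the arithmetic behind "which version converts more effectively", here is a minimal two-proportion z-test on made-up traffic figures. It uses only Python's standard library and illustrates the statistical idea rather than recommending any particular tool or threshold.

    from math import erf, sqrt

    # Hypothetical A/B test results: visitors and conversions for each version.
    visitors_a, conversions_a = 5000, 400    # version A (assumed figures)
    visitors_b, conversions_b = 5000, 460    # version B (assumed figures)

    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b

    # Pooled conversion rate under the assumption that the versions perform the same.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
    # A small p-value (conventionally below 0.05) suggests the difference is
    # unlikely to be noise; otherwise keep collecting data before picking a winner.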

First-click testing shows you where people click first when trying to complete a task on a website. In most cases, first-click testing is performed on a very simple wireframe of a website, but it can also be carried out on a live website using a first-click testing tool.

4. Growth and maturity phase 🪴

If you’ve reached the growth stage, fantastic news! You’ve built a great product that’s been embraced by your users. Next on your to-do list is growing your product by increasing your user base and then eventually reaching maturity and making a profit on your hard work.

Growing your product involves building new or advanced features to satisfy specific customer segments. As you plan and build these enhancements, go through the same research and testing process you used to create the first release. The same holds true for enhancements as well as a new product build — user research ensures you’re building the right thing in the best way for your customers.

Qualitative research methods

User interviews will focus on how your product is working or if it’s missing any features, enriching your knowledge about your product and users.

Interviews also allow you to test your current features, discover new possibilities for additional features and think about discarding existing ones. If your customers aren’t using certain features, it might be time to stop supporting them to reduce costs and help you grow your profits during the maturity stage.

Quantitative research methods

Surveys and questionnaires can help gather information around which features will work best for your product, enhancing and improving the user experience. 

A/B testing during growth and maturity occurs within your sales and onboarding processes. Making sure you have a smooth onboarding process increases your conversion rate and reduces wasted spend — improving your bottom line.

Wrap up 🌮

UX research testing throughout the lifecycle of your product helps you continuously evolve and develop a product that responds to what really matters - your users.

Talking to, testing, and knowing your users will allow you to push your product in ways that make sense, with the data to back up decisions. Go forth and create the product that meets your organization's needs by delivering the very best user experience for your users.
