July 21, 2025

Product Managers: How Optimal Streamlines Your User Research

As product managers, we all know the struggle of truly understanding our users. It's the cornerstone of everything we do, yet the path to those valuable insights can often feel like navigating a maze. The endless back-and-forth emails, the constant asking for favors from other teams, the sheer time spent trying to find the right people to talk to. Sound familiar? For years, this was the reality for many of us. But there's a better way. Optimal's participant recruitment is a game-changer, transforming your approach to user research and freeing you to focus on what truly matters: understanding your users.

The Challenge We All Faced

User research processes often hit a significant bottleneck: finding participants. Like many, you may rely heavily on sales and support teams to connect you with users. While they are always incredibly helpful, this approach has its limitations. It creates internal dependencies, slows down timelines, and often limits you to a specific segment of your user base. You frequently find yourself asking, "Does anyone know someone who fits this profile?", which inevitably leads to delays and sometimes missed opportunities for crucial feedback.

A Game-Changing Solution: Optimal's Participant Recruitment

Enter Optimal's participant recruitment. This service fundamentally shifts how you approach user research, offering a step change in efficiency and insight. Here's how it can level up your research process:

  • Diverse Participant Pool: Gone are the days of repeatedly reaching out to the same familiar faces. Optimal Workshop provides access to a global pool of participants who genuinely represent your target audience. The fresh perspectives and varied experiences gained can be truly eye-opening, uncovering insights you might otherwise have missed.
  • Time-Saving Independence: The constant "Does anyone know someone who...?" emails are a thing of the past. You now have the autonomy to independently recruit participants for a wide range of research activities, from quick prototype tests to more in-depth user interviews and usability studies. This newfound independence dramatically accelerates your research timeline, allowing you to gather feedback much faster.
  • Faster Learning Cycles: When a critical question arises, or you need to quickly validate a new concept, you can now launch research and recruit participants almost immediately. This quick turnaround means you’re making informed decisions based on real user feedback at a much faster pace than ever before. This agility is invaluable in today's fast-paced product development environment.
  • Reduced Bias: By accessing external participants who have no prior relationship with your company, you're receiving more honest and unfiltered feedback. This unbiased perspective is crucial for making confident, user-driven decisions and avoiding the pitfalls of internal assumptions.

Beyond Just Recruitment: A Seamless Research Ecosystem

The participant recruitment service integrates with the Optimal platform. Whether you're conducting tree testing to evaluate information architecture, running card sorting exercises to understand user mental models, or performing first-click tests to assess navigation, everything is available within one intuitive platform. It really can become your one-stop shop for all things user research.

Building a Research-First Culture

Perhaps the most unexpected and significant benefit of streamlined participant recruitment comes from the positive shift in your team's culture. With research becoming so accessible and efficient, you're naturally more inclined to validate your assumptions and explore user needs before making key product decisions. Every product decision becomes more deeply grounded in real user insights, fostering a truly user-centric approach throughout your development process.

The Bottom Line

If you're still wrestling with the time-consuming and often frustrating process of participant recruitment for your user research, why not give Optimal Workshop a try? It can transform a significant bottleneck in your workflow into a streamlined and efficient process that empowers you to build truly user-centric products. It's not just about saving time; it's about gaining deeper, more diverse insights that ultimately lead to better products and happier users. Give it a shot; you might be surprised at the difference it makes.


What do you prioritize when doing qualitative research?

Qualitative user research is about exploration. Exploration is about the journey, not only the destination (or outcome). You gain information and insights about your users through interviews, usability testing, contextual research, observations, and diary entries. These qualitative research methods not only answer your direct questions, but also uncover and unravel your users' 'why'.

Qualitative research lets you really dig deep, get to know your users, and get inside their heads and their reasons, so you can create intuitive and engaging products that deliver the best user experience.

What is qualitative research? 🔎

The term ‘qualitative’ refers to things that cannot be measured numerically and qualitative user research is no exception. Qualitative research is primarily an exploratory research method that is typically done early in the design process and is useful for uncovering insights into people’s thoughts, opinions, and motivations. It allows us to gain a deeper understanding of problems and provides answers to questions we didn’t know we needed to ask. 

Qualitative research could be considered the 'why'. Where quantitative user research uncovers how users behave and what they want, qualitative user research uncovers why they make decisions (and possibly much more).

Priorities ⚡⚡⚡⚡

When undertaking user research, it is great to do a mix of quantitative and qualitative research, which rounds out the numbers with human-driven insights.

Quantitative user research methods, such as card sorting or tree testing, will answer 'what' your users want, and provide data to support it. These insights are number-driven and are based on testing direct interaction with your product. This is super valuable to report to stakeholders: hard data is difficult to argue with when deciding what changes need to be made to how your information architecture (IA) is ordered, sorted, or designed. To find out more about the quantitative research options, take a read.

Qualitative user research, on the other hand, may uncover a deeper understanding of 'why' your users want the IA ordered, sorted, or designed a certain way. The devil is in the detail after all, and great user insights are there to be discovered.

Priorities for your qualitative research need to be less about the numbers and more about discovering your users' 'why'. Observing, listening, questioning, and looking at the reasons for users' decisions will provide valuable insights for product design and ultimately improve user experience.

Usability Testing - this research method is used to evaluate how easy and intuitive a product is to use. Observing, noting, and watching the participant complete tasks without interference or questions can uncover a lot of insights that data alone can't give. This method can be done in a couple of ways: moderated or unmoderated. While unmoderated testing can be quicker and easier to arrange, the deepest insights will come out of moderated testing.

Observational - with this qualitative research method your insights will be uncovered from observing and noting what the participant is doing, paying particular attention to their non-verbal communication. Where do they demonstrate frustration, or turn away from the task, or change their approach? Factual note taking, meaning there shouldn’t be any opinions attached to what is being observed, is important to keep the insights unbiased.

Contextual - paying attention to the context in which the interview or testing is done is important. Is it hot, loud, or cold, or is the screen of their laptop covered in post-its that make it difficult to see? Do they struggle with navigating using the laptop trackpad? All of this, noted in a factual manner without personal inference or opinion-based observations, can give a window into why the participant struggled or was frustrated at any point.

These research methods can be done as purely observational research (you don't interview or converse with your participant), noting how they interact and focusing more on the process than the outcome of their product interaction. Or, these qualitative research methods can be coupled with an interview.

Interview - a series of questions asked around a particular task or product, with careful note-taking of what the participant says as well as any observations. This method should allow a conversation to flow. While the interviewer should be prepared with a list of questions around their topic, they should remain flexible enough to dig deeper where there might be details or insights of interest. An interviewer who is comfortable getting to know their participants unpicks reservations, allows the conversation to flow, and generates amazing insights.

With an interview, it can be useful to have a second person in the room to act as the note taker. This frees up the interviewer to engage with the participant and unpick the insights.

Using a great note-taking sidekick, like our Reframer, can take the pain out of recording all these juicy, deep insights. Time-stamps, audio or video recordings, and notes are all stored in one place, where they can easily be accessed by the team, reviewed, turned into reports, and stored for later.

Let’s consider 🤔

You're creating a new app to support your gym and its website. You're looking to generate personal training bookings, allow members to book classes, and deliver updates and personalised communication to your members. But before investing in final development, it needs to be tested. How do your users interact with it? Why would they want to? Does it behave in a way that improves the user experience? Or does it simply not deliver? And why?

First off, quantitative research like Chalkmark would show how the interface is working. Where are users clicking, and where do they go after that? Is it simple to use? You now have direct data that supports your questions, or possibly suggests a change of design to support quicker task completion or further engagement.

While all of this is great data for the design, does it dig deep enough to really get an understanding of why your users are frustrated? Do they find what they need quickly? Or get completely lost? Finding out these insights and improving on them can make the most of your users’ experience.

When quantitative research is coupled with robust qualitative research that prioritizes an in-depth understanding of what your users need, the app can ultimately deliver the best possible user experience.

Using moderated usability testing for your gym app, observations can be made about how participants interact with the interface. Where do they struggle or get lost, and where do they complete a task quickly and simply? This type of research enhances the quantitative data and gives insight into where and why the app is or isn't performing.

Then interview participants about why they make decisions on the app, how they use it, and why they would use it. These focused questions, combined with some free-flowing conversation, will round out your research, giving valuable insights that can be reviewed, analyzed, and reported to the product team and key stakeholders. This focuses the outcome and helps you design a product that delivers not just what users need, but an in-depth understanding of why.

Wrap Up 🥙

Quantitative and qualitative user research work hand in hand, like two sides of the same coin. Hard, number-driven data from quantitative user research will tell you what needs to be addressed. With focused qualitative research, it is possible to really get a handle on why your users interact with your product in a certain way, and how.

The Optimal Workshop platform has all the tools, research methods, and even the note-taking features you need to get started with your user research now, not next week! See you soon.


Avoiding bias in the oh-so-human world of user testing

"Dear Optimal WorkshopMy question is about biasing users with the wording of questions. It seems that my co-workers and I spend too much time debating the wording of task items in usability tests or questions on surveys. Do you have any 'best practices' for wordings that evoke unbiased feedback from users?" — Dominic

Dear Dominic, Oh, I feel your pain! I once sat through a two-hour meeting that was dominated by a discussion on the merits of question marks! It's funny how wanting to do right by users and clients can tangle us up like fine chains in an old jewellery box. In my mind, we risk provoking bias when any aspect of our research (from question wording to test environment) influences participants away from an authentic response. So there are important things to consider outside of the wording of questions as well. I'll share my favorite tips, then follow up with a must-read resource or two.

Balance your open and closed questions

The right balance of open and closed questions is essential to obtaining unbiased feedback from your users. Ask closed questions only when you want a very specific answer like 'How old are you?' or 'Are you employed?' and ask open questions when you want to gain an understanding of what they think or feel. For example, don't ask the participant 'Would you be pleased with that?' (a closed question). Instead, ask 'How do you feel about that?' or, even better, 'How do you think that might work?' The same advice goes for surveys, and be sure to give participants enough space to respond properly — fifty characters isn't going to cut it.

Avoid using words that are linked to an emotion

The above questions lead me to my next point — don't use words like 'happy'. Don't ask if they like or dislike something. Planting emotion-based words in a survey or usability test is an invitation for participants to tell you what they think you want to hear. No one wants to be seen as being disagreeable. If you word a question like this, chances are they will end up agreeing with the question itself, not the content or meaning behind it... does that make sense? Emotion-based questions only serve to distract from the purpose of the testing — leave them at home.

Keep it simple and avoid jargon

No one wants to look stupid by not understanding the terms used in the question. If it’s too complicated, your user might just agree or tell you what they think you want to hear to avoid embarrassment. Another issue with jargon is that some terms may have multiple meanings which can trigger a biased reaction depending on the user’s understanding of the term. A friend of mine once participated in user testing where they were asked if what they were seeing made them feel ‘aroused’. From a psychology perspective, that means you’re awake and reacting to stimuli.

From the user's perspective? I’ll let you fill in the blanks on that one. Avoid using long, wordy sentences when asking questions or setting tasks in surveys and usability testing. I’ve seen plenty of instances of overly complicated questions that make the user tune out (trust me, you would too!). And because people don't tend to admit their attention has wandered during a task, you risk getting a response that lacks authenticity — maybe even one that aims to please (just a thought...).

Encourage participants to share their experiences (instead of tying them up in hypotheticals)

Instead of asking your user what they think they would do in a given scenario, ask them to share an example of a time when they actually did do it. Try asking questions along the lines of 'Can you tell me about a time when you….?' or 'How many times in the last 12 months have you...?' Asking them to recall an experience they had allows you to gain factual insights from your survey or usability test, not hypothetical maybes that are prone to bias.

Focus the conversation by asking questions in a logical order

If you ask usability testing or survey questions in an order that doesn't quite follow a logical flow, the user may think that the order holds some sort of significance, which in turn may change the way they respond. It's a good idea to ensure that the questions tell a story and follow a logical progression, for example the steps in a process — don't ask me if I'd be interested in registering for a service if you haven't introduced the concept yet (you'd be surprised how often this happens!). For further reading on this, be sure to check out this great article from usertesting.com.

More than words — the usability testing experience as a whole

Reducing bias by asking questions the right way is really just one part of the picture. You can also reduce bias by influencing the wider aspects of the user testing process, and ensuring the participant is comfortable and relaxed.

Don’t let the designer facilitate the testing

This isn’t always possible, but it’s a good idea to try to get someone else to facilitate the usability testing on your design (and choose to observe if you like). This will prevent you from bringing your own bias into the room, and participants will be more comfortable being honest when the designer isn't asking the questions. I've seen participants visibly relax when I've told them I'm not the designer of a particular website, when it's apparent they've arrived expecting that to be the case.

Minimize discomfort and give observers a role

The more comfortable your participants are, with both the tester and the observer, the more they can be themselves. There are labs out there with two-way mirrors to hide observers, but in all honesty the police interrogation room isn’t always the greatest look! I prefer to have the observer in the testing room, while being conscious that participants may instinctively be uncomfortable with being observed. I’ve seen observer guidelines that insist observers (in the room) stay completely silent the entire time, but I think that can be pretty creepy for participants! Here's what works best (in my humble opinion).

The facilitator leads the testing session, of course, but the observer is able to pipe up occasionally, mostly for clarification purposes, and certainly join in the welcoming, 'How's the weather?' chit chat before the session begins. In fact, when I observe usability testing, I like to be the one who collects the participant from the foyer. I’m the first person they see and it’s my job to make them feel welcome and comfortable, so when they find out I'll be observing, they know me already. Anything you can do to make the participant feel at home will increase the authenticity of their responses.

A note to finish

At the end of the day the reality is we’re all susceptible to bias. Despite your best efforts you’re never going to eradicate it completely, but just being aware of and understanding it goes a long way to reducing its impacts. Usability testing is, after all, something we design. I’ll leave you with this quote from Jeff Sauro's must-read article on 9 biases to watch out for in usability testing:

"We do the best we can to simulate a scenario that is as close to what users would actually do .... However, no amount of realism in the tasks, data, software or environment can change the fact that the whole thing is contrived. This doesn't mean it's not worth doing."


Optimal Recruitment Relaunch: More Panels, Better Quality, Zero Hassle

Recruiting high-quality participants can be a time-consuming hassle. That's why we've relaunched Optimal Recruitment with expanded profiling capabilities, enhanced quality controls, and full-service support—to let you focus on what matters most: powerful insights that drive better business outcomes.

What does Optimal Recruitment offer?

With Optimal Recruitment, our in-house team makes it easy to connect with participants. We take care of all the details—from feasibility checks and recruitment to reminders, confirmations, and admin.

Thanks to our four award-winning panel providers, we can tailor recruitment to every research need, giving you access to a vast pool of high-quality participants across 150+ countries.

  • User Interviews: Rated the #1 panel software on G2, fully dedicated to quality user research
  • PureSpectrum: Recognized as the Market Research Supplier of the Year
  • Respondent: Ensures a consistent 95% participant show-up rate
  • Cint: Winner of the Data Quality Award, 2024

And this is just the beginning—our network will continue to grow, offering even greater targeting capabilities and expanded reach in the future.

How does it work?

All you need to do is provide your participant criteria and our team will handle the rest! We’ll select the best panel for your needs and ensure everything in your study is set up perfectly so you can sit back and watch the results flow in. 

Ready to dive in?

To get started, head over to the Recruit tab under an Optimal study.

Need more info? Find out more about getting started or reach out to Support or your account team for more details.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.