July 21, 2025

Product Managers: How Optimal Streamlines Your User Research

As product managers, we all know the struggle of truly understanding our users. It's the cornerstone of everything we do, yet the path to those valuable insights can often feel like navigating a maze. The endless back-and-forth emails, the constant favors asked of other teams, the sheer time spent trying to find the right people to talk to: sound familiar? For years, this was the reality for many teams. But there's a better way. Optimal's participant recruitment is a game-changer, transforming your approach to user research and freeing you to focus on what truly matters: understanding your users.

The Challenge We All Faced

User research processes often hit a significant bottleneck: finding participants. Like many, you may rely heavily on sales and support teams to connect you with users. While they are often incredibly helpful, this approach has its limitations. It creates internal dependencies, slows down timelines, and often limits you to a specific segment of your user base. You frequently find yourself asking, "Does anyone know someone who fits this profile?", which inevitably leads to delays and, sometimes, missed opportunities for crucial feedback.

A Game-Changing Solution: Optimal's Participant Recruitment

Enter Optimal's participant recruitment. This service fundamentally shifts how you approach user research, offering a step change in efficiency and insight. Here's how it can level up your research process:

  • Diverse Participant Pool: Gone are the days of repeatedly reaching out to the same familiar faces. Optimal Workshop provides access to a global pool of participants who genuinely represent your target audience. The fresh perspectives and varied experiences gained can be truly eye-opening, uncovering insights you might have otherwise missed.
  • Time-Saving Independence: The constant "Does anyone know someone who...?" emails are a thing of the past. You now have the autonomy to independently recruit participants for a wide range of research activities, from quick prototype tests to more in-depth user interviews and usability studies. This newfound independence dramatically accelerates your research timeline, allowing you to gather feedback much faster.
  • Faster Learning Cycles: When a critical question arises, or you need to quickly validate a new concept, you can now launch research and recruit participants almost immediately. This quick turnaround means you’re making informed decisions based on real user feedback at a much faster pace than ever before. This agility is invaluable in today's fast-paced product development environment.
  • Reduced Bias: By accessing external participants who have no prior relationship with your company, you're receiving more honest and unfiltered feedback. This unbiased perspective is crucial for making confident, user-driven decisions and avoiding the pitfalls of internal assumptions.

Beyond Just Recruitment: A Seamless Research Ecosystem

The participant recruitment service integrates with the Optimal platform. Whether you're conducting tree testing to evaluate information architecture, running card sorting exercises to understand user mental models, or performing first-click tests to assess navigation, everything is available within one intuitive platform. It really can become your one-stop shop for all things user research.

Building a Research-First Culture

Perhaps the most unexpected and significant benefit of streamlined participant recruitment comes from the positive shift in your team's culture. With research becoming so accessible and efficient, you're naturally more inclined to validate your assumptions and explore user needs before making key product decisions. Every product decision is now more deeply grounded in real user insights, fostering a truly user-centric approach throughout your development process.

The Bottom Line

If you're still wrestling with the time-consuming and often frustrating process of participant recruitment for your user research, why not give Optimal Workshop a try? It can transform a significant bottleneck in your workflow into a streamlined and efficient process that empowers you to build truly user-centric products. It's not just about saving time; it's about gaining deeper, more diverse insights that ultimately lead to better products and happier users. Give it a shot, you might be surprised at the difference it makes.


How many participants do I need for qualitative research?

For those new to the qualitative research space, there’s one question that’s usually pretty tough to figure out, and that’s the question of how many participants to include in a study. Regardless of whether it’s research as part of the discovery phase for a new product, or perhaps an in-depth canvas of the users of an existing service, researchers can often find it difficult to agree on the numbers. So is there an easy answer? Let’s find out.

Here, we’ll look into the right number of participants for qualitative research studies. If you want to know about participants for quantitative research, read Nielsen Norman Group’s article.

Getting the numbers right

So you need to run a series of user interviews or usability tests and aren’t sure exactly how many people you should reach out to. It can be a tricky situation – especially for those without much experience. Do you test a small selection of 1 or 2 people to make the recruitment process easier? Or, do you go big and test with a series of 10 people over the course of a month? The answer lies somewhere in between.

It’s often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you’re new to user research or you’re working in a new area.

What you’re actually looking for here is what’s known as saturation.

Understanding saturation

Whether it’s qualitative research as part of a master’s thesis or as research for a new online dating app, saturation is the best metric you can use to identify when you’ve hit the right number of participants.

In a nutshell, saturation is when you’ve reached the point where adding further participants doesn’t give you any further insights. It’s true that you may still pick up on the occasional interesting detail, but all of your big revelations and learnings have come and gone. A good measure is to sit down after each session with a participant and analyze the number of new insights you’ve noted down.
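The per-session check described above can be sketched in a few lines of code. This is a hypothetical illustration only (the session data, insight codes, and the three-session cutoff are invented for the example, not part of any Optimal Workshop tool): tag each session's findings with insight codes, count how many codes are new, and treat the study as saturated once several sessions in a row add nothing.

```python
# Hypothetical sketch of tracking saturation across research sessions.
# Insight codes and the saturation window are illustrative assumptions.

def new_insights_per_session(sessions):
    """For each session (a set of insight codes), count how many codes
    were not seen in any earlier session."""
    seen = set()
    counts = []
    for codes in sessions:
        fresh = set(codes) - seen   # codes first observed this session
        counts.append(len(fresh))
        seen |= set(codes)
    return counts

def reached_saturation(counts, window=3):
    """Treat the study as saturated once `window` consecutive sessions
    contribute zero new insights."""
    return len(counts) >= window and all(c == 0 for c in counts[-window:])

sessions = [
    {"nav_confusing", "likes_search"},
    {"nav_confusing", "checkout_slow"},
    {"likes_search"},
    {"checkout_slow"},
    {"nav_confusing"},
]
counts = new_insights_per_session(sessions)
print(counts)                      # [2, 1, 0, 0, 0]
print(reached_saturation(counts))  # True
```

In practice the "codes" would come from your post-session analysis notes; the point is simply to make the after-each-session review explicit rather than relying on a gut feeling.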

Interestingly, in a paper titled How Many Interviews Are Enough?, authors Greg Guest, Arwen Bunce and Laura Johnson noted that saturation usually occurs with around 12 participants in homogeneous groups (meaning people in the same role at an organization, for example). However, carrying out ethnographic research on a larger domain with a diverse set of participants will almost certainly require a larger sample.

Ensuring you’ve hit the right number of participants

How do you know when you’ve reached saturation point? You have to keep conducting interviews or usability tests until you’re no longer uncovering new insights or concepts.

While this may seem to run counter to the idea of just gathering as much data from as many people as possible, there's a strong case for focusing on a smaller group of participants. In The logic of small samples in interview-based qualitative research, authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data. Why? With a smaller group, it's easier for you (the researcher) to build close relationships with your participants, which in turn leads to more natural conversations and better data.

There's also a school of thought that you should interview 5 or so people per persona. For example, if you're working in a company that has well-defined personas, you might want to use those as a basis for your study, and then interview 5 people based on each persona. This may be worth considering, and is particularly important, when you have a product with very distinct user groups (e.g. students and staff, or teachers and parents).
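Taken together, the rules of thumb above (start at 5, add 5 for complexity, multiply by distinct personas) can be written as a back-of-the-envelope helper. The function name and parameters are invented for illustration; this is a heuristic starting point, not a formal sample-size method, and saturation should still be the deciding factor.

```python
# Hypothetical helper combining the rules of thumb from this article:
# roughly 5 participants per distinct persona, plus a further 5 when
# the subject matter is complicated. Purely illustrative.

def suggested_participants(personas=1, complex_domain=False, per_persona=5):
    base = personas * per_persona
    if complex_domain:
        base += 5  # scale up by a further 5 for complex domains
    return base

print(suggested_participants(personas=1))                       # 5
print(suggested_participants(personas=2, complex_domain=True))  # 15
```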

How your domain affects sample size

The scope of the topic you’re researching will change the amount of information you’ll need to gather before you’ve hit the saturation point. Your topic is also commonly referred to as the domain.

If you’re working in quite a confined domain, for example, a single screen of a mobile app or a very specific scenario, you’ll likely find interviews with 5 participants to be perfectly fine. Moving into more complicated domains, like the entire checkout process for an online shopping app, will push up your sample size.

As Mitchel Seaman notes: “Exploring a big issue like young people's opinions about healthcare coverage, a broad emotional issue like postmarital sexuality, or a poorly-understood domain for your team like mobile device use in another country can drastically increase the number of interviews you’ll want to conduct.”

In-person or remote

Does the location of your participants change the number you need for qualitative user research? Well, not really – but there are other factors to consider.

  • Budget: If you choose to conduct remote interviews/usability tests, you’ll likely find you’ve got lower costs as you won’t need to travel to your participants or have them travel to you. This also affects…
  • Participant access: Remote qualitative research can be a lifesaver when it comes to participant access. No longer are you confined to the people you have physical access to, instead you can reach out to anyone you’d like.
  • Quality: On the other hand, remote research does have its downsides. For one, you’ll likely find you’re not able to build the same kinds of relationships over the internet or phone as those in person, which in turn means you never quite get the same level of insights.

Is there value in outsourcing recruitment?

Recruitment is understandably an intensive logistical exercise with many moving parts. If you’ve ever had to recruit people for a study before, you’ll understand the need for long lead times (to ensure you have enough participants for the project) and the countless long email chains as you discuss suitable times.

Outsourcing your participant recruitment is just one way to lighten the logistical load during your research. Instead of having to go out and look for participants, you have them essentially delivered to you in the right number and with the right attributes.

We’ve got one such service at Optimal, which means it’s the perfect accompaniment if you’re also using our platform of UX tools. Read more about that here.

Wrap-up

So that’s really most of what there is to know about participant recruitment in a qualitative research context. As we said at the start, while it can appear quite tricky to figure out exactly how many people you need to recruit, it’s actually not all that difficult in reality.

Overall, the number of participants you need for your qualitative research can depend on your project among other factors. It’s important to keep saturation in mind, as well as the locale of participants. You also need to get the most you can out of what’s available to you. Remember: Some research is better than none!


What do you prioritize when doing qualitative research?

Qualitative user research is about exploration, and exploration is about the journey, not only the destination (or outcome). It means gaining information and insights about your users through interviews, usability testing, contextual inquiry, observation and diary entries, and using these qualitative research methods not only to answer your direct queries, but to uncover and unravel your users’ ‘why’.

Qualitative research lets you dig deep, get to know your users, and get inside their heads and their reasons, so you can create intuitive and engaging products that deliver the best user experience.

What is qualitative research? 🔎

The term ‘qualitative’ refers to things that cannot be measured numerically and qualitative user research is no exception. Qualitative research is primarily an exploratory research method that is typically done early in the design process and is useful for uncovering insights into people’s thoughts, opinions, and motivations. It allows us to gain a deeper understanding of problems and provides answers to questions we didn’t know we needed to ask. 

Qualitative research could be considered the ‘why’: where quantitative user research uncovers the how or the what of what users want, qualitative user research will uncover why they make decisions (and possibly much more).

Priorities ⚡⚡⚡⚡

When undertaking user research, it is best to do a mix of quantitative and qualitative research, which will round out the numbers with human-driven insights.

Quantitative user research methods, such as card sorting or tree testing, will answer the ‘what’ your users want, and provide data to support this. These insights are number-driven and based on testing direct interaction with your product, which makes them super valuable to report to stakeholders: hard data is difficult to argue with when deciding what changes need to be made to how your information architecture (IA) is ordered, sorted or designed. To find out more about the quantitative research options, take a read.

Qualitative user research, on the other hand, may uncover a deeper understanding of ‘why’ your users want the IA ordered, sorted or designed a certain way. The devil is in the detail, after all, and great user insights are there to be discovered.

Priorities for your qualitative research need to be less about the numbers, and more about discovering your users’ ‘why’. Observing, listening, questioning and looking at the reasons for users’ decisions will provide valuable insights for product design and ultimately improve the user experience.

Usability Testing - this research method is used to evaluate how easy and intuitive a product is to use. Observing and taking notes while the participant completes tasks, without interference or questions, can uncover a lot of insights that data alone can’t give. This method can be done in two ways: moderated or unmoderated. While unmoderated testing can be quicker and easier to arrange, the deep insights will come out of moderated testing.

Observational - with this qualitative research method your insights will be uncovered by observing and noting what the participant is doing, paying particular attention to their non-verbal communication. Where do they demonstrate frustration, turn away from the task, or change their approach? Factual note-taking, meaning there shouldn’t be any opinions attached to what is being observed, is important to keep the insights unbiased.

Contextual - paying attention to the context in which the interview or testing is done is important. Is it hot, loud or cold, or is the screen of their laptop covered in post-its that make it difficult to see? Do they struggle with navigating using the laptop trackpad? All of this, noted in a factual manner without personal inference or added opinion, can give a window into why the participant struggled or was frustrated at any point.

These research methods can be done as purely observational research (you don’t interview or converse with your participant), noting how they interact and focusing more on the process than the outcome of their product interaction. Or, these qualitative research methods can be coupled with an interview.

Interview - a series of questions asked around a particular task or product, with careful note-taking around what the participant says as well as any observations. This method should allow a conversation to flow: while the interviewer should be prepared with a list of questions around their topic, they should remain flexible enough to dig deeper where there might be details or insights of interest. An interviewer who is comfortable getting to know their participants unpicks reservations, allows a flow of conversation, and generates amazing insights.

With an interview it can be of use to have a second person in the room to act as the note taker. This can free up the interviewer to engage with the participant and unpick the insights.

Using a great note-taking sidekick, like our Reframer, can take the pain out of recording all these juicy and deep insights: time-stamping, audio or video recordings and notes are all stored in one place, easily accessed by the team, reviewed, used to generate reports, and stored for later.

Let’s consider 🤔

You’re creating a new app to support your gym and its website. You’re looking to generate personal training bookings, allow members to book classes, and deliver updates and personalized communication to your members. But before investing in final development it needs to be tested. How do your users interact with it? Why would they want to? Does it behave in a way that improves the user experience? Or does it simply not deliver? And why?

First off, using quantitative research like Chalkmark would show how the interface is working. Where are users clicking, and where do they go after that? Is it simple to use? You now have direct data that supports your questions, or possibly suggests a change of design to support quicker task completion or further engagement.

While all of this is great data for the design, does it dig deep enough to really get an understanding of why your users are frustrated? Do they find what they need quickly? Or get completely lost? Finding out these insights and improving on them can make the most of your users’ experience.

When quantitative research is coupled with robust qualitative research that prioritizes an in-depth understanding of what your users need, ultimately the app can make the most of your users’ experience.

Using moderated usability testing for your gym app, observations can be made about how the participant interacts with the interface. Where do they struggle or get lost, and where do they complete a task quickly and simply? This type of research enhances the quantitative data and gives insight into where and why the app is or isn’t performing.

Then, interviewing participants about why they make decisions in the app, how they use it and why they would use it rounds out your research. These focused questions, with some free-flowing conversation, give valuable insights that can be reviewed, analyzed and reported to the product team and key stakeholders, focusing the outcome and helping you design a product that delivers not just what users need, but an in-depth understanding of why.

Wrap Up 🥙

Quantitative and qualitative user research work hand in hand, each offering one side of the same coin. Hard, number-driven data from quantitative user research will deliver the ‘what’ that needs to be addressed, while focused qualitative research makes it possible to really get a handle on why your users interact with your product in a certain way, and how.

The Optimal Workshop platform has all the tools, research methods and even the note taking tools you need to get started with your user research, now, not next week! See you soon.


Dan Dixon and Stéphan Willemse: HCD is dead, long live HCD

There is a strong backlash about the perceived failures of Human Centred Design (HCD) and its contribution to contemporary macro problems. There seems to be a straightforward connection: HCD and Design Thinking have been adopted by organizations and are increasingly part of product/experience development, especially in big tech. However, the full picture is more complex, and HCD does have some issues.

Dan Dixon, UX and Design Research Director and Stéphan Willemse, Strategy Director/Head of Strategy, both from the Digital Arts Network, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about the evolution and future of HCD.

In their talk, Dan and Stéphan cover the history of HCD, its use today, and its limitations, before presenting a Post HCD future. What could it be, and how should it be different? Dan and Stéphan help us to step outside of ourselves as we meet new problems with new ways of Design Thinking.

Dan Dixon and Stéphan Willemse bios

Dan is a long-term practitioner of human-centred experience design and has a wealth of experience in discovery and qual research. He’s worked in academic, agency and client-side roles in both the UK and NZ, covering diverse fields such as digital, product design, creative technology and game design. His history has blended a background in the digital industry with creative technology teaching and user experience research. He has taken pragmatic real-world knowledge into a higher education setting as well as bringing deeper research skills from academia into commercial design projects. Through higher education teaching, as well as talks and workshops, Dan has been sharing these skills for the last 16 years.

Stéphan uses creativity, design and strategy to help organizations innovate towards positive, progressive futures. He works across innovation, experience design, emerging technologies, cultural intelligence and futures projects with clients including Starbucks, ANZ, Countdown, TradeMe and the public sector. He holds degrees in PPE, Development Studies, Education and an Executive MBA. However, he doesn’t like wearing a suit and his idea of the perfect board meeting is at a quiet surf break. He thinks ideas are powerful and that his young twins ask the best questions about the world we live in.

Contact Details:

Email: dan.dixon@digitalartsnetwork.com

Find Dan on LinkedIn  

HCD IS DEAD, LONG LIVE HCD 👑

Dan and Stéphan take us through the evolving landscape of Human Centred Design (HCD) and Design Thinking. Can HCD effectively respond to the challenges of the modern era, and can we get ahead of the unintended consequences of our design? They examine the inputs and processes of design, not just the output, to scrutinize the very essence of design practice.

A brief history of HCD

In the 1950s and 1960s, designers began exploring the application of scientific processes to design, aiming to transform it into a systematic problem-solving approach. Later in the 1960s, design thinkers in Scandinavia initiated the shift towards cooperative and participative design practices. Collaboration and engagement with diverse stakeholders became integral to design processes. Then, the 1970s and 1980s marked a shift in perspective, viewing design as a fundamentally distinct way of approaching problems. 

Moving into the late 1980s and 1990s, design thinking expanded to include user-centered design, and the idea of humans and technology becoming intertwined. Then the 2000s witnessed a surge in design thinking, where human-centered design started to make its mark.

Limitations of the “design process”

Dan and Stéphan discuss the “design squiggle”, a concept that portrays the messy and iterative nature of design, starting chaotically and gradually converging toward a solution. For 20 years, beginning in the early 90s, this was a popular way to explain how the design process feels. However, in the past 10 years or so, efforts to teach and pass down design processes have become common practice. Here enter concepts like the “double diamond” and “pattern problem”, which seek to be repeatable and process-driven. These neat processes, however, demand rigid adherence to specific design methods, which can ultimately stifle innovation. 

Issues with HCD and its evolution

The critique of such rigid design processes, which developed alongside HCD, highlights the need to acknowledge that humans are just one element in an intricate network of actors. By putting ourselves at the center of our design processes and efforts, we already limit our design. Design is just as much about the ecosystem surrounding any given problem as it is about the user. A limitation of HCD is that we humans are not actually at the center of anything except our own minds. So, how can we address this limitation?

Post-anthropocentric design starts to acknowledge that we are far less rational than we believe ourselves to be. It captures the idea that there are no clear divisions between ‘being human’ and everything else. This concept has become important as we adopt more and more technology into our lives, and we’re getting more enmeshed in it. 

Post-human design extends this further by removing ourselves from the center of design and empathizing with “things”, not just humans. This concept embraces the complexity of our world and emphasizes how we need to think about the problem just as much as we think about the solution. In other words, post-human design encourages us to “live” in our design problem(s) and consider multiple interventions.

Finally, Dan and Stéphan discuss the concept of Planetary design, which stresses that everything we create, and everything we do, has the possibility to impact everything else in the world. In fact, our designs do impact everything else, and we need to try and be aware of all possibilities.

Integrating new ways of thinking about design

To think beyond HCD and to foster innovation in design, we can begin by embracing emerging design practices and philosophies such as "life-centered design," "society-centered design," and "humanity-centered design." These emerging practices have toolsets that are readily available online and can be seamlessly integrated into your design approach, helping us to break away from traditional, often linear, methodologies. Or, taking a more proactive stance, we can craft our own unique design tools and frameworks.

Why it matters 🎯

To illustrate how design processes can evolve to meet current and future challenges of our time, Dan and Stéphan present their concept of “Post human-centered design” (Post HCD). At its heart, it seeks to take what's great about HCD and build upon it, all while understanding its issues/limitations.

Dan and Stéphan put forward, as a starting point, some challenges for designers to consider as we move our practice to its next phase.

Suggested Post HCD principles:

  • Human to context: Moving from a human-centered to a context-centered or context-sensitive point of view.
  • Design Process to Design Behaviour: Not being beholden to design processes like the “double diamond”. Instead of thinking about designing for problems, we should design for behaviors instead. 
  • Problem-solutions to Interventions: Thinking more broadly about interventions in the problem space, rather than solutions to the problems.
  • Linear to Dynamic: Understand ‘networks’ and complex systems.
  • Repeated to Reflexive: Challenging status quo processes and evolving with challenges that we’re trying to solve.

The talk wraps up by encouraging designers to incorporate some of this thinking into everyday practice. Some key takeaways are: 

  • Expand your web of context: Don’t just think about things having a center, think about networks.
  • Have empathy for “things”: Consider how you might then have empathy for all of those different things within that network, not just the human elements of the network.
  • Design practice is exploration and design exploration is our practice: Ensure that we're exploring both our practice as well as the design problem.
  • Make it different every time: Every time we design, try to make it different, don't just try and repeat the same loop over and over again.
