March 12, 2025

Efficient Research: Maximizing the ROI of Understanding Your Customers

Introduction

User research is invaluable, but in fast-paced environments, researchers often struggle with tight deadlines, limited resources, and the need to prove their impact. In our recent UX Insider webinar, Weidan Li, Senior UX Researcher at Seek, shared insights on Efficient Research—an approach that optimizes Speed, Quality, and Impact to maximize the return on investment (ROI) of understanding customers.

At the heart of this approach is the Efficient Research Framework, which balances these three critical factors:

  • Speed – Conducting research quickly without sacrificing key insights.
  • Quality – Ensuring rigor and reliability in findings.
  • Impact – Making sure research leads to meaningful business and product changes.

Within this framework, Weidan outlined nine tactics that help UX researchers work more effectively. Let’s dive in.

1. Time Allocation: Invest in What Matters Most

Not all research requires the same level of depth. Efficient researchers prioritize their time by categorizing projects based on urgency and impact:

  • High-stakes decisions (e.g., launching a new product) require deep research.
  • Routine optimizations (e.g., tweaking UI elements) can rely on quick testing methods.
  • Low-impact changes may not need research at all.

By allocating time wisely, researchers can avoid spending weeks on minor issues while ensuring critical decisions are well-informed.

2. Assistance of AI: Let Technology Handle the Heavy Lifting

AI is transforming UX research, enabling faster and more scalable insights. Weidan suggests using AI to:

  • Automate data analysis – AI can quickly analyze survey responses, transcripts, and usability test results.
  • Generate research summaries – Tools like ChatGPT can help synthesize findings into digestible insights.
  • Speed up recruitment – AI-powered platforms can help find and screen participants efficiently.

While AI can’t replace human judgment, it can free up researchers to focus on higher-value tasks like interpreting results and influencing strategy.
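To make the first point concrete, here is a minimal sketch of what automating transcript analysis might look like in practice. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, prompt, and file layout are illustrative placeholders rather than a prescribed setup, and any AI-generated summary should still be reviewed by a researcher.

```python
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical folder of interview transcripts exported as plain text.
transcripts = sorted(Path("transcripts").glob("*.txt"))

for path in transcripts:
    text = path.read_text(encoding="utf-8")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; use whatever your team has access to
        messages=[
            {
                "role": "system",
                "content": "You are a UX research assistant. Summarize interview "
                           "transcripts into key themes, pain points, and supporting quotes.",
            },
            {"role": "user", "content": text},
        ],
    )

    summary = response.choices[0].message.content
    print(f"--- {path.name} ---\n{summary}\n")
```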

3. Collaboration: Make Research a Team Sport

Research has a greater impact when it’s embedded into the product development process. Weidan emphasizes:

  • Co-creating research plans with designers, PMs, and engineers to align on priorities.
  • Involving stakeholders in synthesis sessions so insights don’t sit in a report.
  • Encouraging non-researchers to run lightweight studies, such as A/B tests or quick usability checks.

When research is shared and collaborative, it leads to faster adoption of insights and stronger decision-making.

4. Prioritization: Focus on the Right Questions

With limited resources, researchers must choose their battles wisely. Weidan recommends using a prioritization framework to assess:

  • Business impact – Will this research influence a high-stakes decision?
  • User impact – Does it address a major pain point?
  • Feasibility – Can we conduct this research quickly and effectively?

By filtering out low-priority projects, researchers can avoid research for research’s sake and focus on what truly drives change.
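The framework itself is a judgment call rather than a formula, but a lightweight scoring sheet can make those judgments visible and comparable across requests. The sketch below is one possible way to encode it in Python; the 1-to-5 scale and equal weights are assumptions to adapt, not part of Weidan's talk.

```python
from dataclasses import dataclass

@dataclass
class ResearchRequest:
    name: str
    business_impact: int  # 1-5: will it influence a high-stakes decision?
    user_impact: int      # 1-5: does it address a major pain point?
    feasibility: int      # 1-5: can we run it quickly and effectively?

    def priority_score(self) -> float:
        # Equal weighting is an assumption; adjust to your team's context.
        return (self.business_impact + self.user_impact + self.feasibility) / 3

# Hypothetical incoming requests.
requests = [
    ResearchRequest("New onboarding flow", business_impact=5, user_impact=4, feasibility=3),
    ResearchRequest("Pricing page confusion", business_impact=4, user_impact=5, feasibility=4),
    ResearchRequest("Button colour tweak", business_impact=1, user_impact=1, feasibility=5),
]

for request in sorted(requests, key=lambda r: r.priority_score(), reverse=True):
    print(f"{request.name}: {request.priority_score():.1f}")
```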

5. Depth of Understanding: Go Beyond Surface-Level Insights

Speed is important, but efficient research isn’t about cutting corners. Weidan stresses that even quick studies should provide a deep understanding of users by:

  • Asking why, not just what – Observing behavior is useful, but uncovering motivations is key.
  • Using triangulation – Combining methods (e.g., usability tests + surveys) to validate findings.
  • Revisiting past research – Leveraging existing insights instead of starting from scratch.

Balancing speed with depth ensures research is not just fast, but meaningful.

6. Anticipation: Stay Ahead of Research Needs

Proactive researchers don’t wait for stakeholders to request studies—they anticipate needs and set up research ahead of time. This means:

  • Building a research roadmap that aligns with upcoming product decisions.
  • Running continuous discovery research so teams have a backlog of insights to pull from.
  • Creating self-serve research repositories where teams can find relevant past studies.

By anticipating research needs, UX teams can reduce last-minute requests and deliver insights exactly when they’re needed.

7. Justification of Methodology: Explain Why Your Approach Works

Stakeholders may question research methods, especially when they seem time-consuming or expensive. Weidan highlights the importance of educating teams on why specific methods are used:

  • Clearly explain why qualitative research is needed when stakeholders push for just numbers.
  • Show real-world examples of how past research has led to business success.
  • Provide a trade-off analysis (e.g., “This method is faster but provides less depth”) to help teams make informed choices.

A well-justified approach ensures research is respected and acted upon.

8. Individual Engagement: Tailor Research Communication to Your Audience

Not all stakeholders consume research the same way. Weidan recommends adapting insights to fit different audiences:

  • Executives – Focus on high-level impact and key takeaways.
  • Product teams – Provide actionable recommendations tied to specific features.
  • Designers & Engineers – Share usability findings with video clips or screenshots.

By delivering insights in the right format, researchers increase the likelihood of stakeholder buy-in and action.

9. Business Actions: Ensure Research Leads to Real Change

The ultimate goal of research is not just to understand users, but to drive business decisions. To ensure research leads to action:

  • Follow up on implementation – Track whether teams apply the insights.
  • Tie findings to key metrics – Show how research affects conversion rates, retention, or engagement.
  • Advocate for iterative research – Encourage teams to re-test and refine based on new data.

Research is most valuable when it translates into real business outcomes.

Final Thoughts: Research That Moves the Needle

Efficient research is not just about doing more, faster—it’s about balancing speed, quality, and impact to maximize its influence. Weidan’s nine tactics help UX researchers work smarter by:


✔️  Prioritizing high-impact work
✔️  Leveraging AI and collaboration
✔️  Communicating research in a way that drives action

By adopting these strategies, UX teams can ensure their research is not just insightful, but transformational.

Watch the full webinar here

Related articles

Create a user research plan with these steps

A great user experience (UX) is one of the largest drivers of growth and revenue through user satisfaction. However, when budgets get tight or timelines are squeezed, user research is one of the first things to go, often at the cost of user satisfaction.

This short-sighted view can leave project managers preoccupied with hitting milestones and short-term goals, while UX teams get stuck researching products they weren’t actually involved in developing. As a result, no one has the space and understanding to develop a product that truly speaks to users’ needs, desires, and wants. There must be a better way to produce a product that is user-driven. Thankfully, there is.

What is user research and why should project managers care about it? 👨🏻💻

User research is an important part of the product development process. Primarily, user research involves using different research methods to gather information about your end users. 

Essentially, it aims to create the best possible experience for your users by listening to and learning directly from the people who already use, or may one day use, your product. You might conduct interviews to help you understand a particular problem, carry out a tree test to identify bottlenecks or problems in your navigation, or do some usability testing to directly observe your users as they perform different tasks on your website or in your app. Or you might combine these to understand what users really want.

To a project manager and their team, this should sound familiar: no project can be managed in a silo. Regular check-ins and feedback are essential to making smart decisions, and the same is true of UX research. It can make the whole process quicker and more efficient. By taking a step back, digging into your users’ minds, and gaining a fuller understanding of what they want upfront, you can counter short-term views and decisions.

Bringing more user research into your development process has major benefits for the team and, ultimately, for the quality of the final product. There are three key benefits:

  1. Saves your development team time and effort by ensuring they work on what users want, not wasting time on features that don’t measure up.
  2. Gives your users a better experience by meeting their requirements.
  3. Helps your team innovate quickly by understanding what users really want.

As a project manager, making space and planning for user research can be one of the best ways to ensure the team is creating a product that truly is user-driven.

How to bring research into your product development process 🤔

There are a couple of ways you can bring UX research into your product development process:

  1. Start with a dedicated research project.
  2. Integrate UX research throughout the development project.

It can be more difficult to integrate UX research throughout the process, as it means planning the project with various stages of research built in to check the development of features. But ultimately this approach is likely to produce the best product: one that has been considered, checked, and refined throughout the whole development process. To help you on the way, we have laid out six key steps for integrating UX research into your product development process.

6 key steps to integrate UX research 👟

Step 1: Define your research questions

Take a step back, look at your product, and define your research questions.

It may be tempting just to ask, ‘Do users like our latest release?’ This, however, does not get at why, or what exactly, your users like or don’t like. Try instead:

  • What do our users really want from our product?
  • Where are they currently struggling while using our website?
  • How can we design a better product for our users?

These questions form the basis of more specific questions about your product and specific areas of research to explore, which in turn shape the type of research you undertake.

Step 2: Create your research plan

With a few key research questions to focus on, it’s time to create your research plan.

A great research plan covers your project’s goals, scope, timing, and deliverables. It’s essential for keeping yourself organized but also for getting key stakeholder signoff.

Step 3: Prepare any research logistics

Every project plan requires attention to detail, and a user research project is no exception. The following checklist will help you make sense of the logistics:

  1. Method: Based on your questions, what is the best user research method to use? 
  2. Schedule: When will the research take place? How long will it go on for? If this is ongoing research, plan how it will be implemented and how often.
  3. Location: Where will the research take place? 
  4. Resources: What resources do you need? This could be technical support or team members.
  5. Participants: Define who you want to research. Who is eligible to take part in this research? How will you find the right people?
  6. Data: How will you capture the research data? Where will it be stored? How will you analyze the data and create insights and reports that can be used?
  7. Deliverables: What is the ultimate goal for your research project?

Step 4: Decide which method will be used

Many user research methods benefit from an observational style of testing, particularly if you are looking into why users undertake a specific task or where they struggle.

Typically, there are two approaches to testing:

  1. Moderated testing is when a moderator is present during the test to answer questions, guide the participant, or dig deeper with further questions.
  2. Unmoderated testing is when a participant is left on their own to carry out the task. Often this is done remotely and with very specific instructions.

Your key questions will determine which approach works best for your research. Find out more about the differences.

Step 5: Run your research session

It’s time to gather insights and data. The questions you are asking and the methods you’ve chosen will influence how you run your research sessions.

If you are running surveys, you will invite users through a banner or invitation to fill them out: unmoderated, with very specific questions, gathering quantitative data you can analyze for patterns.

If you’re using something qualitative like interviews or heat mapping, you’ll want to implement software and gather as much information as possible.

Step 6: Prepare a research findings report and share with stakeholders

Analyze your findings, interrogate your data, and find the insights that reveal how your users think. Where do they love your product? And where do they struggle?

Pull together your findings and insights into an easy-to-understand report, and get socializing: bring your key stakeholders together and share your findings. Taking everyone through the findings brings them along on the journey, and means development decisions can be user-driven.

Wrap Up 🥙

UX research should be an essential part of any project that aims to develop a user-driven product. Integrating user research into your development process can be challenging, but with planning and strategy it can save significant time and money in the long run.


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits to introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can shape your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. They would seem almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple sketch using pen and paper to simulate an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

[Figure: Different ways of designing paper prototypes, using OptimalSort as an example]

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any distinct judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast – Pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it – Paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity – Both from the product teams participating in their design and from the users. They require the user to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure – Paper prototypes and user-centered design go hand in hand. Introducing real people into your design process as early as possible can help verify whether you are on the right track, and generate feedback that gives you a good idea of whether your idea is likely to succeed.

Disadvantages 😬

  • They’re not as polished as interactive prototypes – If executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited – Digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable the findings will be.
  • They require facilitation – With an interactive prototype you can assign your user tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator to communicate next steps and ensure participants understand the task at hand.
  • Their results have to be interpreted carefully – Paper prototypes can’t emulate the final experience entirely, so it is important to interpret findings with their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives: first, to benchmark the current experience on laptops and tablets and identify ways we can improve the current interface; and second, to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices. Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 per cent of our users utilize the app.

We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set.

The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with using paper prototypes was determined by the specific setup of our testing sessions, or simply by their pure usefulness as a research technique is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of what exactly a card sort required. Therefore there is no guarantee that we would have achieved the same level of success in testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.


The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.
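To make the distinction more tangible, here is a minimal sketch of how a synthetic user might be stood up in practice: a persona described in a prompt, which a large language model then role-plays in response to a research question. The SDK, model name, and persona details are illustrative assumptions, and the article's caveats about bias and validation apply in full; a digital twin, by contrast, would be conditioned on real behavioral data from a specific user or segment rather than a written description.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An entirely fabricated persona; every detail here is an illustrative
# assumption, not a real participant or a validated segment.
persona = (
    "You are a 34-year-old online shopper who uses a mid-range Android phone, "
    "shops mostly in the evenings, and abandons checkouts that force account creation. "
    "Answer research questions in the first person, honestly and briefly."
)

question = (
    "You land on a checkout page that requires signing up before you can pay. "
    "What do you do next, and why?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```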

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 
