March 21, 2025

The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 

Author: Optimal Workshop

Related articles

Moderated vs unmoderated research: which approach is best?

Knowing and understanding why and how your users use your product is invaluable for getting to the nitty gritty of usability. Delving deep into motivation with probing questions, or skimming the surface to look for issues, can be equally informative.

Put super simply, usability testing is testing how usable your product is for your users. If your product isn’t usable, users often won’t complete their task, let alone come back for more. No one wants to lose users before they even get started. Usability testing gets under their skin and really into the how, why and what they want (and equally what they don’t).

As we have become used to regular video calling and online interactions, usability testing has followed suit. Being able to access participants remotely has diversified the participant pool, since testing is no longer restricted to those close enough to attend in person. It has also allowed an increase in the number of participants per test, as remote usability testing is more cost-effective.

But if we’re remote, does that mean testing can’t be moderated? No: modern technology means remote testing can still be facilitated and moderated. So which method is best, moderated or unmoderated?

What is moderated remote research testing? 🙋🏻

In traditional usability testing, moderated research is done in person, with the moderator and the participant in the same physical space. This, of course, allows for conversation and observational behavioral monitoring: the moderator can note not only what the participant answers but how, and even take note of body language, surroundings, and other influencing factors.

This has also meant that, traditionally, the participant pool has been limited to those available (and close enough) to make it into a facility for testing. And being in person means these tests take time (and money) to run.

As technology has advanced and the speed of internet connections and video calling has improved, a world of opportunities has opened up, allowing usability testing to be done remotely. Moderators can now set up testing remotely and ‘dial in’ to observe participants wherever they are, and potentially even run focus groups or other group-format testing across the internet.

Pros:

- In-depth insights – gather rich insights through back-and-forth conversation and observation of the participants.

- Follow-up questions – don’t underestimate the value of being available to ask questions throughout the test and follow up in the moment.

- Observational monitoring – noticing and noting the environment and how participants behave can give more insight into how or why they make a decision.

- Quick – remote testing can be quicker to set up, recruit for, and complete than in-person testing, because you only need to arrange a time to connect online rather than coordinate travel.

- Location (local and/or international) – testing online removes the reliance on participants being physically present, broadening your pool to participants within your country or around the globe.

Cons:

- Time-consuming – having to be present at each test takes time, as does analyzing the data and insights generated. But remember, this is quality data.

- Limited interaction – with any remote testing there is only so much you can observe or understand through a computer screen, and it can be difficult to grasp all the factors that might be influencing your participants.

What is unmoderated remote research testing? 😵💫

In its most simple sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of having a facilitator guide participants through the test, participants are left to complete the testing by themselves and in their own time. For the most part, everything else stays the same. 

Removing the moderator means there isn’t anyone to respond to queries or issues in the moment. This can delay the session, influence results, or leave participants disengaged or unable to complete the test. Unmoderated research suits very simple, direct tests, with clear instructions and no room for inference.

Pros:

- Speed and turnaround – as there is no need to schedule meetings with each and every participant, unmoderated usability testing is usually much faster to initiate and complete.

- Study size (participant numbers) – unmoderated usability testing allows you to collect feedback from dozens or even hundreds of users at the same time.

- Location (local and/or international) – testing online removes the reliance on participants being physically present, which broadens your participant pool; participants can be anywhere, completing the test in their own time.

Cons:

- Follow-up questions – as your participants work on their own and in their own time, you can’t facilitate or ask questions in the moment; you may only be able to ask limited follow-up questions.

- Products need to be simple to use – unmoderated testing does not suit prototypes or any product or site that needs guidance.

- Low participant support – without a moderator, any issues with the test or the product can’t be picked up immediately and could influence the output of the test.

When should you do which? 🤔

Moderated and unmoderated remote usability testing each have their use and place in user research. It really depends on the question you are asking and what you want to know.

Moderated testing allows you to gather in-depth insights, follow up with questions, and engage participants in the moment. The facilitator can guide participants toward what they want to know, dig deeper, or ask why at certain points. This method doesn’t need as much careful setup, as the participants aren’t on their own. While it is all done online, it still allows connection and conversation. It lends itself to more investigative research: looking at why users might prefer one prototype to another, or tree testing a new website navigation to understand where users get lost and asking why they made certain choices.

Unmoderated testing, on the other hand, is literally leaving the participants to it. This method needs very careful planning and explanation upfront, as the test needs to run without a moderator. It lends itself to direct answers to direct questions: a card sort on a website to understand how your users might sort information, or a first-click test to see how and where users will click on a new website.

Wrap Up 🌯

With all the advances in (and acceptance of) technology and video calling, our ability to expand our pool of participants across the globe, and with it our understanding of users’ experiences, is growing. Remote usability testing is a great option when you want to gather information from users in the real world. Depending on your question, either moderated or unmoderated usability testing will suit your study. As with all user testing, being prepared and planning ahead will allow you to make the most of your test.


Making the Complex Simple: Clarity as a UX Superpower in Financial Services

In the realm of financial services, complexity isn't just a challenge, it's the default state. From intricate investment products to multi-layered insurance policies to complex fee structures, financial services are inherently complicated. But your users don't want complexity; they want confidence, clarity, and control over their financial lives.

How to keep things simple with good UX research 

Understanding how users perceive and navigate complexity requires systematic research. Optimal's platform offers specialized tools to identify complexity pain points and validate simplification strategies:

Uncover Navigation Challenges with Tree Testing

Complex financial products often create equally complex navigation structures:

How can you solve this? 

  • Test how easily users can find key information within your financial platform
  • Identify terminology and organizational structures that confuse users
  • Compare different information architectures to find the most intuitive organization

Identify Confusion Points with First-Click Testing

Understanding where users instinctively look for information reveals valuable insights about mental models:

How can you solve this? 

  • Test where users click when trying to accomplish common financial tasks
  • Compare multiple interface designs for complex financial tools
  • Identify misalignments between expected and actual user behavior

Understand User Mental Models with Card Sorting

Financial terminology and categorization often don't align with how customers think:

How can you solve this? 

  • Use open card sorts to understand how users naturally group financial concepts
  • Test comprehension of financial terminology
  • Identify intuitive labels for complex financial products

Practical Strategies for Simplifying Financial UX

1. Progressive Information Disclosure

Rather than bombarding users with all information at once, layer information from essential to detailed:

  • Start with core concepts and benefits
  • Provide expandable sections for those who want deeper dives
  • Use tooltips and contextual help for terminology
  • Create information hierarchies that guide users from basic to advanced understanding

2. Visual Representation of Numerical Concepts

Financial services are inherently numerical, but humans don't naturally think in numbers—we think in pictures and comparisons.

What could this look like? 

  • Use visual scales and comparisons instead of just presenting raw numbers
  • Implement interactive calculators that show real-time impact of choices
  • Create visual hierarchies that guide attention to most relevant figures
  • Design comparative visualizations that put numbers in context

3. Contextual Decision Support

Users don't just need information; they need guidance relevant to their specific situation.

How do you solve for this? 

  • Design contextual recommendations based on user data
  • Provide comparison tools that highlight differences relevant to the user
  • Offer scenario modeling that shows outcomes of different choices
  • Implement guided decision flows for complex choices

4. Language Simplification and Standardization

Financial jargon is perhaps the most visible form of unnecessary complexity. So, what can you do? 

  • Develop and enforce a simplified language style guide
  • Create a financial glossary integrated contextually into the experience
  • Test copy with actual users, measuring comprehension, not just preference
  • Replace industry terms with everyday language when possible

Measuring Simplification Success

To determine whether your simplification efforts are working, establish a continuous measurement program:

1. Establish Complexity Baselines

Use Optimal's tools to create baseline measurements:

  • Success rates for completing complex tasks
  • Time required to find critical information
  • Comprehension scores for key financial concepts
  • User confidence ratings for financial decisions
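
As a sketch of what establishing a baseline might look like in practice, the snippet below computes these four indicators from hypothetical session records. The field names and data are purely illustrative, not an export format from Optimal or any other tool:

```python
from statistics import median

# Hypothetical usability-session records; in practice these would come
# from your research platform's exports or your own logging.
sessions = [
    {"task_success": True,  "time_to_find_s": 48,  "comprehension": 4, "confidence": 3},
    {"task_success": False, "time_to_find_s": 112, "comprehension": 2, "confidence": 2},
    {"task_success": True,  "time_to_find_s": 35,  "comprehension": 5, "confidence": 4},
    {"task_success": True,  "time_to_find_s": 61,  "comprehension": 3, "confidence": 4},
]

def baseline(sessions):
    """Compute the four baseline indicators for a set of sessions."""
    n = len(sessions)
    return {
        "success_rate": sum(s["task_success"] for s in sessions) / n,
        "median_time_to_find_s": median(s["time_to_find_s"] for s in sessions),
        "mean_comprehension": sum(s["comprehension"] for s in sessions) / n,
        "mean_confidence": sum(s["confidence"] for s in sessions) / n,
    }

print(baseline(sessions))
```

Recording the output of a run like this before any redesign gives you concrete numbers to compare against after each simplification effort.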

2. Implement Iterative Testing

Before launching major simplification initiatives, validate improvements through:

  • A/B testing of alternative explanations and designs
  • Comparative testing of current vs. simplified interfaces
  • Comprehension testing of revised terminology and content
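
For the comparative-testing step, a standard two-proportion z-test is one way to check whether a difference in task success rates between the current and simplified interfaces is likely to be real rather than noise. The sketch below uses only the Python standard library, and the sample counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test on task success counts.

    Returns (z, p_value); a small p_value suggests the difference in
    success rates between variants A and B is unlikely to be chance.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled success rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical results: 52/100 participants succeeded on the current
# design, 68/100 on the simplified design.
z, p = two_proportion_z(52, 100, 68, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these hypothetical numbers the improvement is significant at the conventional 0.05 level, which would support rolling the simplified design out more widely or validating it with a larger sample.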

3. Track Simplification Metrics Over Time

Create a dashboard of key simplification indicators:

  • Task success rates for complex financial activities
  • Support call volume related to confusion
  • Feature adoption rates for previously underutilized tools
  • User-reported confidence in financial decisions

Where the rubber hits the road: Organizational Commitment to Clarity

True simplification goes beyond interface design. It requires organizational commitment at the most foundational level:

  • Product development: Are we creating inherently understandable products?
  • Legal and compliance: Can we satisfy requirements while maintaining clarity?
  • Marketing: Are we setting appropriate expectations about complexity?
  • Customer service: Are we gathering intelligence about confusion points?

When there is a deep commitment from the entire organization to simplification, it becomes part of the business’s UX DNA.

Conclusion: The Future Belongs to the Clear

As financial services become increasingly digital and self-directed, clarity becomes essential for business success. The financial brands that will thrive in the coming decade won’t necessarily be those with the most features or the lowest fees, but those that make the complex world of finance genuinely understandable to everyday users.

By embracing clarity as a core design principle and supporting it with systematic user research, you're not just improving user experience, you're democratizing financial success itself.


6 things to consider when setting up a research practice

With UX research so closely tied to product success, setting up a dedicated research practice is fast becoming important for many organizations. It’s not an easy process, especially for organizations that have had little to do with research, but the end goal is worth the effort.

But where exactly are you supposed to start? This article provides 6 key things to keep in mind when setting up a research practice, and should hopefully ensure you’ve considered all of the relevant factors.

1) Work out what your organization needs

The first and most simple step is to take stock of the current user research situation within the organization. How much research is currently being done? Which teams or individuals are talking to customers on an ongoing basis? Consider if there are any major pain points with the current way research is being carried out or bottlenecks in getting research insights to the people that need them. If research isn't being practiced, identify teams or individuals that don't currently have access to the resources they need, and consider ways to make insights available to the people that need them.

2) Consolidate your insights

UX research should be communicating with nearly every part of an organization, from design teams to customer support, engineering departments and C-level management. The insights that stem from user research are valuable everywhere. Of course, the opposite is also true: insights from support and sales are useful for understanding customers and how the current product is meeting people's needs.

When setting up a research practice, identify which teams you should align with, and then reach out. Sit down with these teams and explore how you can help each other. For your part, you’ll probably need to explain the what and why of user research within the context of your organization, and possibly even explain at a basic level some of the techniques you use and the data you can obtain.

Then, get in touch with other teams with the goal of learning from them. A good research practice needs a strong connection to other parts of the business with the express purpose of learning. For example, by working with your organization’s customer support team, you’ll have a direct line to some of the issues that customers deal with on a regular basis. A good working relationship here means they’ll likely feed these insights back to you, in order to help you frame your research projects.

By working with your sales team, they’ll be able to share issues prospective customers are dealing with. You can follow up on this information with research, the results of which can be fed into the development of your organization’s products.

It can also be fruitful to develop an insights repository, where researchers can store any useful insights and log research activities. This means that sales, customer support and other interested parties can access the results of your research whenever they need to.

When your research practice is tightly integrated with other key areas of the business, the organization is likely to see innumerable benefits from the insights-to-product loop.

3) Figure out which tools you will use

By now you’ve hopefully got an idea of how your research practice will fit into the wider organization – now it’s time to look at the ways in which you’ll do your research. We’re talking, of course, about research methods and testing tools.

We won’t get into every different type of method here (there are plenty of other articles and guides for that), but we will touch on the importance of qualitative and quantitative methods. If you haven’t come across these terms before, here’s a quick breakdown:

  • Qualitative research – Focused on exploration. It’s about discovering things we cannot measure with numbers, and often involves speaking with users through observation or user interviews.
  • Quantitative research – Focused on measurement. It’s all about gathering data and then turning this data into usable statistics.

All user research methods are designed to deliver either qualitative or quantitative data, and as part of your research practice, you should ensure that you always try to gather both types. By using this approach, you’re able to generate a clearer overall picture of whatever it is you’re researching.

Next comes the software. A solid stack of user research testing tools will help you to put research methods into practice, whether for the purposes of card sorting, carrying out more effective user interviews or running a tree test.

There are myriad tools available now, and it can be difficult to separate the useful software from the chaff. Here’s a list of research and productivity tools that we recommend.

Tools for research

Here’s a collection of research tools that can help you gather qualitative and quantitative data, using a number of methods.

  • Treejack – Tree testing can show you where people get lost on your website, and help you take the guesswork out of information architecture decisions. Treejack makes it easy to build and run tree tests, and pairs the results with in-depth analysis features.
  • dScout – Imagine being able to get video snippets of your users as they answer questions about your product. That’s dScout. It’s a video research platform that collects in-context “moments” from a network of global participants, who answer your questions either by video or through photos.
  • Ethnio – Like dScout, this is another tool designed to capture information directly from your users. It works by showing an intercept pop-up to people who land on your website; once they agree, they’re taken through a short piece of research.
  • OptimalSort – Card sorting allows you to get perspective on whatever it is you’re sorting and understand how people organize information. OptimalSort makes it easier and faster to sort through information, and you can access powerful analysis features.
  • Reframer – Taking notes during user interviews and usability tests can be quite time-consuming, especially when it comes to analyzing the data. Reframer gives individuals and teams a single tool to store all of their notes, along with a set of powerful analysis features to make sense of their data.
  • Chalkmark – First-click testing can show you what people click on first in a user interface when they’re asked to complete a task. This is useful, as when people get their first click correct, they’re much more likely to complete their task. Chalkmark makes the process of setting up and running a first-click test easy. What’s more, you’re given comprehensive analysis tools, including a click heatmap.

Tools for productivity

These tools aren’t necessarily designed for user research, but can provide vital links in the process.

  • Whimsical – A fantastic tool for user journeys, flow charts and any other sort of diagram. It also solves one of the biggest problems with online whiteboards – finicky object placement.
  • Descript – Easily transcribe your interview and usability test audio recordings into text.
  • Google Slides – When it inevitably comes time to present your research findings to stakeholders, use Google Slides to create readable, clear presentations.

4) Figure out how you’ll track findings over time

With some idea of the research methods and testing tools you’ll be using to collect data, now it’s time to think about how you’ll manage all of this information. A carefully ordered spreadsheet and folder system can work – but only to an extent. Dedicated software is a much better choice, especially given that you can scale these systems much more easily.

A dedicated home for your research data serves a few distinct purposes. There’s the obvious benefit of being able to access all of your findings whenever you need them, which means it’s much easier to create personas if the need arises. A dedicated home also means your findings will remain accessible and useful well into the future.

When it comes to software, Reframer stands as one of the better options for creating a detailed customer insights repository as you’re able to capture your sessions directly in the tool and then apply tags afterwards. You can then easily review all of your observations and findings using the filtering options. Oh, and there’s obviously the analysis side of the tool as well.

If you’re looking for a way to store high-level findings – perhaps if you’re intending to share this data with other parts of your organization – then a tool like Confluence or Notion is a good option. These tools are basically wikis, and include capable search and navigation options too.

5) Where will you get participants from?

A pool of participants you can draw from for your user research is another important part of setting up a research practice. Whenever you need to run a study, you’ll have real people you can call on to test, ask questions and get feedback from.

This is where you’ll need to partner with other teams, likely sales and customer support. They’ll have direct access to your customers, so make sure to build a strong relationship with these teams. If you haven’t made introductions yet, it can be helpful to put together a one-page sheet of information explaining what UX research is and the benefits of working with your team.

You may also want to consider getting in some external help. Participant recruitment services are a great way to offload the heavy lifting of sourcing quality participants – often one of the hardest parts of the research process.

6) Work out how you'll communicate your research

Perhaps one of the most important parts of being a user researcher is taking the findings you uncover and communicating them back to the wider organization. By feeding insights back to product, sales and customer support teams, you’ll form an effective link between your organization’s customers and your organization. The benefits here are obvious. Product teams can build products that actually address customer pain points, and sales and support teams will better understand the needs and expectations of customers.

Of course, it isn’t easy to communicate findings. Here are a few tips:

  • Document your research activities: With a clear record of your research, you’ll find it easier to pull out relevant findings and communicate these to the right teams.
  • Decide who needs what: You’ll probably find that certain roles (like managers) will be best served by a high-level overview of your research activities (think a one-page summary), while engineers, developers and designers will want more detailed research findings.
