March 21, 2025

The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.
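To make the distinction concrete, a synthetic user can be thought of as a profile sampled from aggregate behavioral distributions rather than copied from any one person. Here is a minimal Python sketch; the fields, weights, and distributions are invented for illustration and are not drawn from real research data:

```python
import random
from dataclasses import dataclass

@dataclass
class SyntheticUser:
    """A fabricated participant sampled from aggregate patterns."""
    age_band: str
    tech_confidence: float  # 0.0 (novice) to 1.0 (expert)
    primary_goal: str

def sample_synthetic_user(seed=None):
    # All weights and choices below are hypothetical placeholders.
    rng = random.Random(seed)
    return SyntheticUser(
        age_band=rng.choices(["18-24", "25-44", "45-64", "65+"],
                             weights=[0.15, 0.45, 0.30, 0.10])[0],
        tech_confidence=rng.betavariate(2, 2),
        primary_goal=rng.choice(["compare plans", "check a balance", "get support"]),
    )

# A "panel" of 100 simulated participants, generated in milliseconds.
panel = [sample_synthetic_user(seed=i) for i in range(100)]
print(panel[0])
```

Unlike a static persona document, each sampled profile can then be handed to a generative model or simulation to produce interactive feedback, which is what makes synthetic users dynamic rather than descriptive.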

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.
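On the validation point, one lightweight starting practice is to run the same tasks with synthetic and human participants in parallel and flag tasks where their success rates diverge. A rough sketch of that check, with all numbers and the threshold invented for illustration:

```python
# Per-task success rates from parallel synthetic and human studies
# (the figures below are made up for illustration).
human = {"find_pricing": 0.82, "change_plan": 0.55, "cancel_account": 0.40}
synthetic = {"find_pricing": 0.90, "change_plan": 0.60, "cancel_account": 0.15}

DIVERGENCE_THRESHOLD = 0.15  # flag tasks where the twin disagrees badly

for task in human:
    gap = abs(human[task] - synthetic[task])
    status = "FLAG for human follow-up" if gap > DIVERGENCE_THRESHOLD else "ok"
    print(f"{task}: human={human[task]:.2f} synthetic={synthetic[task]:.2f} "
          f"gap={gap:.2f} -> {status}")
```

Flagged tasks are exactly where synthetic feedback should not be trusted on its own, and where follow-up research with real participants is most valuable.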

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 

Author: Optimal Workshop

Related articles

Mixed methods research in 2021

User experience research is essential to developing a product that truly engages, compels and energises people. We all want a website that is easy to navigate and simple to follow, one that helps users finish their tasks. Or an app that supports and drives engagement.

We’ve talked a lot about the various types of research tools that help improve these outcomes. 

There is a rising research trend in 2021.

Mixed methods research: what is more compelling than quantitative user research tools? Combining them with great qualitative research. Asking the same questions in different ways can provide deeper insight into how your users think and operate, empowering you to develop products that truly speak to your users, answer their queries and even address their frustrations.

It isn’t enough to simply ‘do research’, though. As with anything, you need to approach it with strategy, focus and direction, funnelling your time, money and energy into the areas that will generate the best results.

Mixed methods UX research is the research trend of 2021

With the likes of Facebook, Amazon, Etsy, eBay, Ford and many other big organizations opening newly created roles for mixed methods researchers, it becomes obvious where the research trend is heading.

It’s no longer just good to have; it is becoming imperative to gather data, dive deeper and generate insights that tell us more about our users than ever before. And you don't need to be Facebook to reap the benefits. Mixed methods research can be implemented across the board, and can be as narrow as finding out how your homepage is performing or as broad as analysing your entire product design in depth.

And with all of these massive organizations moving to grow their data collection and research teams, why wouldn’t you?

The value in mixed methods research is profound. Imagine understanding what, where, how and why your customers would want to use your service, and catering directly to them. The more we understand our customers, the deeper the relationship and the more likely we are to keep them engaged.

Diving deep into the reasons our users like (or don’t like) how our products operate can also drive your organization to target and operate at a higher level: gearing your energy towards attracting and keeping the right type of customer, providing the right level of service and aftercare, and potentially reducing overheads by not over-delivering where it isn’t valued.

What is mixed methods research?

Mixed methods research isn’t overly complicated, and doesn’t take years to master. It is simply a term for using a combination of quantitative and qualitative data. This may mean using a research tool such as card sorting alongside interviews with users.

Quantitative research is the tangible numbers and metrics that can be gathered through user research such as card sorting or tree testing.

Qualitative research is research around users’ behaviour and experiences. This can be through usability tests, interviews or surveys.

For instance, you may be asking ‘how should I order the products on my site?’. With card sorting you can get the data insights that show how users would like to see the products sorted. Coupled with interviews, you will get the why.

Understanding the thinking behind the order matters: why one user likes to see gym shorts under shorts while another would look for them under activewear. A deeper understanding of how and why users decide content should be grouped will help you create a highly intuitive website.
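The quantitative half of that example can be summarized with a simple co-occurrence count: across participants, how often were two items sorted into the same group? A minimal sketch in Python, using invented card sort results:

```python
from collections import Counter
from itertools import combinations

# Each participant's open card sort: groups of item labels (invented data).
sorts = [
    [{"gym shorts", "shorts"}, {"hoodies", "jackets"}],
    [{"gym shorts", "hoodies"}, {"shorts", "jackets"}],
    [{"gym shorts", "shorts", "hoodies"}, {"jackets"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        # Count every pair of items that ended up in the same group.
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often suggest categories users expect.
for pair, count in pair_counts.most_common():
    print(f"{pair[0]} + {pair[1]}: grouped together by {count} of {len(sorts)}")
```

Pairs that co-occur frequently point to the categories users expect; the interviews then explain the reasoning behind those groupings.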

Another great reason for mixed methods research is backing up data insights for stakeholders. With a depth and breadth of qualitative and quantitative research informing decisions, it becomes clearer why changes may need to be made, or product designs challenged.

How to do mixed methods research

Take a look at our article for more examples of the uses of mixed methods research.

Simply put, mixed methods research means coupling quantitative research, such as tree testing, card sorting or first-click testing, with qualitative research such as surveys, interviews or diary studies.

Say, for instance, a product manager has identified an issue with keeping users engaged on your website’s homepage. We would start by asking where users get stuck and when they leave.

This can be done using a first-click tool, such as Chalkmark, which will map where users head when they land on your homepage and beyond. 

This will give you the initial quantitative data. However, it may only give you part of the picture. Couple it with qualitative data, such as observing (and reporting on) body language, or conducting interviews with users directly after their session, so you can understand why they found the process confusing or misleading.

A fuller picture means a better understanding.

The key is to identify your question and home in on it through both methods. Ultimately, you are answering your question from both sides of the coin.

Upcoming research trends to watch

Keeping an eye on the progression of the mixed methods research trend means watching these:

1. Integrated Surveys

Rather than thinking of user surveys as a one-time, in-person event, we’re seeing surveys implemented more and more often through social media, on websites and via email. This means data can be gathered frequently and across the board. This longitudinal data allows organizations to continuously analyse, interpret and improve products without ever really stopping.

Rather than relying on users’ memories of events and experiences, data can be gathered in the moment, at the time of purchase or interaction, increasing the reliability and quality of the data collected.

2. A return to social research

Customer research is rooted in the focus group: a collection of participants in one space, voicing their opinions and reaching insights collectively. This used to be an overwhelming task, with days or even weeks spent analysing unstructured forums and group discussions.

However, with the advent of online research tools, this too can be a way to round out mixed methods research.

3. Co-creation

Using your customers’ input to build better products has long been seen as a way to increase innovative development. Until recently, though, it was cumbersome and difficult to wrangle more than a few participants. But there are a number of resources in development that will make co-creation the buzzword of the decade.

4. Owned Panels & Community

Beyond community engagement in the social sphere, there is a massive opportunity to involve these engaged users in product development. Through a trusted forum, users are far more likely to actively and willingly participate in research, providing community insights that drive stronger product outcomes.

What does this all mean for me?

So, there is a lot to keep in mind when conducting effective user research, and there are a lot of very compelling reasons to do mixed methods research, and to do it regularly.

To remain innovative and ahead of the curve, it is vital to stay engaged with your users and their needs. Using quantitative and qualitative research to inform product decisions means you can operate with a fuller picture.

One of the biggest challenges with user research can be the coordination and participant recruitment. That’s where we come in.

We take the pain out of the process and streamline your research. Take a look at our qualitative research tool, Reframer, to see how we can help make your mixed methods research easier and analyse your data efficiently, in a format that is easy to understand.

User research doesn’t need to take weeks or months. With our participant recruitment, we can provide reliable, quality participants across the board, giving you data you can rely on.

Why not dive deeper into mixed methods research today?


Making the Complex Simple: Clarity as a UX Superpower in Financial Services

In the realm of financial services, complexity isn't just a challenge; it's the default state. From intricate investment products to multi-layered insurance policies to complex fee structures, financial services are inherently complicated. But your users don't want complexity; they want confidence, clarity, and control over their financial lives.

How to keep things simple with good UX research 

Understanding how users perceive and navigate complexity requires systematic research. Optimal's platform offers specialized tools to identify complexity pain points and validate simplification strategies:

Uncover Navigation Challenges with Tree Testing

Complex financial products often create equally complex navigation structures:

How can you solve this? 

  • Test how easily users can find key information within your financial platform
  • Identify terminology and organizational structures that confuse users
  • Compare different information architectures to find the most intuitive organization

Identify Confusion Points with First-Click Testing

Understanding where users instinctively look for information reveals valuable insights about mental models:

How can you solve this? 

  • Test where users click when trying to accomplish common financial tasks
  • Compare multiple interface designs for complex financial tools
  • Identify misalignments between expected and actual user behavior

Understand User Mental Models with Card Sorting

Financial terminology and categorization often don't align with how customers think:

How can you solve this? 

  • Use open card sorts to understand how users naturally group financial concepts
  • Test comprehension of financial terminology
  • Identify intuitive labels for complex financial products

Practical Strategies for Simplifying Financial UX

1. Progressive Information Disclosure

Rather than bombarding users with all information at once, layer information from essential to detailed:

  • Start with core concepts and benefits
  • Provide expandable sections for those who want deeper dives
  • Use tooltips and contextual help for terminology
  • Create information hierarchies that guide users from basic to advanced understanding

2. Visual Representation of Numerical Concepts

Financial services are inherently numerical, but humans don't naturally think in numbers—we think in pictures and comparisons.

What could this look like? 

  • Use visual scales and comparisons instead of just presenting raw numbers
  • Implement interactive calculators that show real-time impact of choices
  • Create visual hierarchies that guide attention to most relevant figures
  • Design comparative visualizations that put numbers in context

3. Contextual Decision Support

Users don't just need information; they need guidance relevant to their specific situation.

How do you solve for this? 

  • Design contextual recommendations based on user data
  • Provide comparison tools that highlight differences relevant to the user
  • Offer scenario modeling that shows outcomes of different choices
  • Implement guided decision flows for complex choices

4. Language Simplification and Standardization

Financial jargon is perhaps the most visible form of unnecessary complexity. So, what can you do? 

  • Develop and enforce a simplified language style guide
  • Create a financial glossary integrated contextually into the experience
  • Test copy with actual users, measuring comprehension, not just preference
  • Replace industry terms with everyday language when possible

Measuring Simplification Success

To determine whether your simplification efforts are working, establish a continuous measurement program:

1. Establish Complexity Baselines

Use Optimal's tools to create baseline measurements:

  • Success rates for completing complex tasks
  • Time required to find critical information
  • Comprehension scores for key financial concepts
  • User confidence ratings for financial decisions
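Because baseline studies often run with modest sample sizes, it helps to record each success rate with a confidence interval rather than a bare percentage, so later comparisons aren't fooled by noise. A sketch using only the Python standard library (the counts are invented for illustration):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a task-success proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Baseline: 14 of 20 participants found the fee schedule (invented counts).
low, high = wilson_interval(14, 20)
print(f"success = 70%, 95% CI = {low:.0%} to {high:.0%}")
```

With only 20 participants, the interval is wide; reporting it alongside the headline percentage keeps stakeholders honest about how much a later change must move the number before it means anything.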

2. Implement Iterative Testing

Before launching major simplification initiatives, validate improvements through:

  • A/B testing of alternative explanations and designs
  • Comparative testing of current vs. simplified interfaces
  • Comprehension testing of revised terminology and content
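For the A/B comparison above, a standard two-proportion z-test gives a quick read on whether a simplified design's higher task-success rate is likely real or just noise. A sketch using only the standard library; the sample sizes and success counts are invented:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: the two task-success rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Current interface: 52/100 completed the task; simplified: 68/100.
z = two_proportion_z(52, 100, 68, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 5% level
```

With these invented numbers z comes out around 2.3, above the 1.96 cutoff, so the improvement would be unlikely to be chance; with smaller samples, the same gap might not be.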

3. Track Simplification Metrics Over Time

Create a dashboard of key simplification indicators:

  • Task success rates for complex financial activities
  • Support call volume related to confusion
  • Feature adoption rates for previously underutilized tools
  • User-reported confidence in financial decisions

Where rubber hits the road: Organizational Commitment to Clarity

True simplification goes beyond interface design. It requires organizational commitment at the most foundational level:

  • Product development: Are we creating inherently understandable products?
  • Legal and compliance: Can we satisfy requirements while maintaining clarity?
  • Marketing: Are we setting appropriate expectations about complexity?
  • Customer service: Are we gathering intelligence about confusion points?

When there is a deep commitment from the entire organization to simplification, it becomes part of a business's UX DNA.

Conclusion: The Future Belongs to the Clear

As financial services become increasingly digital and self-directed, clarity becomes essential for business success. The financial brands that will thrive in the coming decade won't necessarily be those with the most features or the lowest fees, but those that make the complex world of finance genuinely understandable to everyday users.

By embracing clarity as a core design principle and supporting it with systematic user research, you're not just improving user experience, you're democratizing financial success itself.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.