October 1, 2025
4 minutes

Why Understanding Users Has Never Been Easier...or Harder

Product, design and research teams today are drowning in user data while starving for user understanding. Never before have teams had such access to user information: analytics dashboards, heatmaps, session recordings, survey responses, social media sentiment, support tickets, and endless behavioral data points. Yet despite this volume of data, teams consistently build features users don't want and miss needs hiding in plain sight.

It's a true paradox: more information has made genuine understanding more elusive.

With all this data, teams feel informed. They can say with confidence: "Users spend 3.2 minutes on this page," "42% abandon at this step," "Power users click here." But what the data doesn't tell you is why.

The Difference Between Data and Insight

Data tells you what happened. Understanding tells you why it matters.

Here's a good example: your analytics show that 60% of users abandon a new feature after first use. You know they're leaving. You can see where they click before they go. You have their demographic data and behavioral patterns.

But you don't know:

  • Were they confused or simply uninterested?
  • Did it solve their problem too slowly or not at all?
  • Would they return if one thing changed, or is the entire approach wrong?
  • Are they your target users or the wrong segment entirely?

One team sees "60% abandonment" and adds onboarding tooltips. Another talks to users and discovers the feature solves the wrong problem entirely. Same data, completely different understanding.
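To make that gap concrete, here's a minimal sketch of how the abandonment number gets computed. The event log and field names are hypothetical; the point is that the "what" falls straight out of the data while the "why" never appears in it:

```python
# Minimal sketch: computing feature abandonment from a hypothetical event log.
# The records and field names ("user_id", "returned") are illustrative only.

first_use_events = [
    {"user_id": 1, "returned": False},
    {"user_id": 2, "returned": True},
    {"user_id": 3, "returned": False},
    {"user_id": 4, "returned": False},
    {"user_id": 5, "returned": True},
]

abandoned = sum(1 for event in first_use_events if not event["returned"])
rate = abandoned / len(first_use_events)
print(f"Abandonment rate: {rate:.0%}")  # -> Abandonment rate: 60%

# The number is trivial to compute. Whether those users were confused,
# uninterested, or in the wrong segment is simply not in the log.
```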

Modern tools make it dangerously easy to mistake observation for comprehension, but some aspects of user experience exist beyond measurement:

  • Emotional context, like the frustration of trying to complete a task while handling a crying baby, or the anxiety of making a financial decision without confidence.
  • Unspoken needs that can only be demonstrated through real interactions: users develop workarounds without reporting bugs, and they live with friction because they don't know better solutions exist.
  • Cultural nuances that numbers don't capture, like how language choice resonates differently across cultures, or how trust signals vary by context.
  • Unexplored opportunities that sit outside your current product: data shows what users do today, not what they'd do if you solved their problems differently.

Why Human Empathy Is More Important Than Ever

The teams building truly user-centered products haven't abandoned data; they've learned to combine quantitative and qualitative insights:

  • Combine analytics (what happens), user interviews (why it happens), and observation (context in which it happens).
  • Understanding builds over time. A single study provides a snapshot; continuous engagement reveals the movie.
  • Use data to form theories, research to validate them, and real-world live testing to confirm understanding.
  • Different team members see different aspects. Engineers notice system issues, designers spot usability gaps, PMs identify market fit, researchers uncover needs.

Adding AI into the mix only heightens the need for human validation. While AI can significantly speed up workflows and augment human expertise, it still requires oversight and review from real people.

AI can spot trends humans miss, processing millions of data points instantly, but it can't understand human emotion, cultural context, or unspoken needs. It can summarize what users say; humans must interpret what they mean.

Understanding users has never been easier from a data perspective. We have tools our predecessors could only dream of. But understanding users has never been harder from an empathy perspective. The sheer volume of data available to us creates an illusion of knowledge that's more dangerous than ignorance.

The teams succeeding aren't choosing between data and empathy; they're investing equally in both. They use analytics to spot patterns and conversations to understand meaning. They measure behavior and observe context. They quantify outcomes and qualify experiences.

Because at the end of the day, you can track every click, measure every metric, and analyze every behavior, but until you understand why, you're just collecting data, not creating understanding.

Related articles

5 ways to measure UX return on investment

Return on investment (ROI) is often the term on everyone's lips when starting a big project, or even when reviewing a website. It's especially popular with those who hold the purse strings. As UX researchers, it's important to consider the ROI of the work we do and to understand how to measure it.

We’ve lined up 5 key ways to measure ROI for UX research to help you get the conversation underway with stakeholders so you can show real and tangible benefits to your organization. 

1. Meet and exceed user expectations

Put simply, a product that meets and exceeds user expectations leads to increased revenue. When potential buyers can easily find and purchase what they're looking for, they'll complete their purchase and are far more likely to come back. The simple fact that users can finish their task increases sales and improves overall customer satisfaction, which in turn influences loyalty. Repeat business means repeat sales, which means increased revenue.

Creating, developing, and maintaining a usable website is more important than you might think. And this is measurable! Tracking and analyzing website performance before and after UX research can be insightful, showing how directly performance is influenced by the changes you make based on that research.

Measurable: review the website (product) performance prior to UX research and after changes have been made. The increase in clicks, completed tasks and/or baskets will tell the story.
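As a worked illustration of that before-and-after comparison, here's a minimal sketch. Every figure in it (session counts, basket value, research cost) is an assumption to be swapped for your own analytics numbers:

```python
# Hedged sketch: benchmarking task completion before and after UX changes.
# All figures below are hypothetical placeholders.

sessions_before, completions_before = 10_000, 3_100
sessions_after, completions_after = 10_000, 4_200

rate_before = completions_before / sessions_before  # 31%
rate_after = completions_after / sessions_after     # 42%

avg_basket_value = 80.0   # assumed average basket value ($)
added_revenue = (completions_after - completions_before) * avg_basket_value

research_cost = 20_000.0  # assumed cost of the research program ($)
roi = (added_revenue - research_cost) / research_cost

print(f"Completion rate: {rate_before:.0%} -> {rate_after:.0%}")
print(f"Added revenue: ${added_revenue:,.0f}, ROI: {roi:.1f}x")
```

Even a rough calculation like this gives stakeholders a revenue figure to attach to the research, which is usually what gets the conversation started.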

2. Reduce development time

UX research done at the initial stages of a project can reduce development time by 33% to 50%! And less time developing means reduced costs (people and overheads) and a faster time to market. What's not to love?

Measurable: this one is a little trickier, as the time (and cost) savings happen up front, aiding speed to market before execution. Internal stakeholder research after the live date may be valuable for understanding how the project went.

3. Reduce ongoing development costs

And the double hitter? Creating a product with the user in mind up front reduces the need to rehash or revisit it as quickly, cutting ongoing costs. Early UX research can help detect errors early in the development process, and fixing errors after development costs a company up to 100 times more than dealing with the same error before development: a flaw that costs $1,000 to address at the design stage could cost up to $100,000 to fix in production.

Measurable: again, as UX research has saved time and money up front, this one can be difficult to track. Depending on your organization and previous projects, though, you could conduct internal research to understand how the project compares and to quantify the time and cost savings.

4. Meet user requirements

Did you know that 70% of projects fail due to a lack of user acceptance? This is often because project managers fail to understand user requirements properly. Early UX research gives you insight into users, so you only spend time developing the functions they actually want, saving time and reducing development costs. Make sure you confirm those requirements through iterative testing. As always: fail early, fail often. Robust testing up front means that, in the end, you'll have a product that meets the needs of the user.

Measurable: Where is the product currently? How does it perform? Set a benchmark up front and review post UX research. The deliverables should make the ROI obvious.

5. Gain an essential competitive advantage

Thanks to UX research, you can find out exactly what your customers want, need, and expect from you, giving you a competitive advantage over other companies in your market. But be aware: more and more companies are investing in UX, and customers are ever more demanding. Their expectations continue to grow, they don't tolerate bad experiences, and going elsewhere is an easy decision to make.

Measurable: a murkier one, but no less important. Knowing, understanding, and responding to competitors can help keep you in the lead, developing products that meet and exceed user expectations.

Wrap up

Showing the ROI of the work we do is an essential part of getting key stakeholders on board with our research. It can be challenging to talk the same language, but ultimately we all want the same outcome: a product that works well for our users and delivers additional revenue.

For some continued reading (or watching, in this case): Anna Bek, Product and Delivery Manager at Xplor, explored the same concept of "How to measure experience" in her UX New Zealand 2020 talk. Watch it here as she shares her perspective on UX ROI.


The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.
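To make the distinction concrete, here's a deliberately toy sketch of a synthetic user. Real implementations typically sit on top of large generative models; the class, fields, and probabilities below are invented purely for illustration:

```python
# Toy sketch of a synthetic user: a simulated participant whose behavior is
# sampled from aggregated patterns rather than copied from one real person.
# Every name and number here is hypothetical.

import random
from dataclasses import dataclass

@dataclass
class SyntheticUser:
    segment: str
    completion_rates: dict  # task -> historical completion probability

    def attempt_task(self, task: str) -> bool:
        """Simulate one task attempt using the segment's historical rate."""
        return random.random() < self.completion_rates.get(task, 0.5)

novice = SyntheticUser("first-time shopper", {"checkout": 0.55, "search": 0.80})
attempts = [novice.attempt_task("checkout") for _ in range(1_000)]
print(f"Simulated checkout completion: {sum(attempts) / len(attempts):.0%}")

# Instant and infinitely repeatable, but it can only replay past patterns;
# it will never surprise you the way a real participant can.
```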

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 


Optimal vs. Maze: Deep User Insights or Surface-Level Design Feedback?

Product teams face an important decision when selecting the right user research platform: do they prioritize speed and simplicity, or invest in a more comprehensive platform that offers real research depth and insights? This choice becomes even more critical as user research scales and those insights directly influence major product decisions.

Maze has gained popularity in recent years among design and product teams for its focus on rapid prototype testing and design workflow integration. However, as teams scale their research programs and require more sophisticated insights, many discover that Maze's limitations outweigh its simplicity. Platform stability issues, restricted tools and functionality, and a lack of enterprise features create friction that ends up compromising insight speed, quality, and overall business impact.

Why Choose Optimal instead of Maze?

Platform Depth

Test Design Limitations

  • Maze has Rigid Question Types: Maze's focus on speed comes with design inflexibility, including rigid question structures and limited customization options that reduce overall test effectiveness.
  • Optimal Offers Comprehensive Test Flexibility: Optimal has a Figma integration, image import capabilities, and fully customizable test flows designed for agile product teams.

Prototype Testing Capabilities

  • Maze has Limited Prototype Support: Users report difficulties with Maze's prototype testing capabilities, particularly with complex interactions and advanced design systems that modern products require.
  • Optimal has Advanced Prototype Testing: Optimal supports sophisticated prototype testing with full Figma integration, comprehensive interaction capture, and flexible testing methods that accommodate modern product design and development workflows.

Analysis and Reporting Quality

  • Maze Only Offers Surface-Level Reporting: Maze provides basic metrics and surface-level analysis without the depth required for strategic decision-making or comprehensive user insight.
  • Optimal has Rich, Actionable Insights: Optimal delivers AI-powered analysis with layered insights, export-ready reports, and sophisticated visualizations that transform data into actionable business intelligence.

Enterprise Features

  • Maze has a Reactive Support Model: Maze provides responsive support primarily for critical issues but lacks the proactive, dedicated support enterprise product teams require.
  • Optimal Provides Dedicated Enterprise Support: Optimal offers fast, personalized support with dedicated account teams and comprehensive training resources built by user experience experts that ensure your team is set up for success.

Enterprise Readiness

  • Maze is Built for Individuals: Maze was built primarily for individual designers and small teams, lacking the enterprise features, compliance capabilities, and scalability that large organizations need.
  • Optimal is an Enterprise-Built Platform: Optimal was designed for enterprise use with comprehensive security protocols, compliance certifications, and scalability features that support large research programs across multiple teams and business units. Optimal is currently trusted by some of the world’s biggest brands including Netflix, Lego and Nike. 

Enterprises Need Reliable, Scalable User Insights

While Maze's focus on speed appeals to design teams seeking rapid iteration, enterprise product teams need the stability and reliability that only mature platforms provide. Optimal delivers both speed and dependability, enabling teams to iterate quickly without compromising research quality or business impact. Platform reliability isn't just about uptime; it's about helping product teams make high-quality strategic decisions and building organizational confidence in user insights. Mature product, design, and UX teams need to choose platforms that enhance rather than undermine their research credibility.

Don't let platform limitations compromise your research potential.

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs.
