
Why User Interviews Haven't Evolved in 20 Years (And How We're Changing That)

Are we exaggerating when we say that the way researchers run and analyze user interviews hasn’t changed in 20 years? We don’t think so. When we talk to our customers to understand their current workflows, they look exactly the same as they did when we started this business 17 years ago: record, transcribe, analyze manually, create reports. See the problem?

Despite advances in technology across every industry, the fundamental process of conducting and analyzing user interviews has remained largely unchanged. While we've transformed how we design, develop, and deploy products, the way we understand our users is still trapped in workflows that would feel familiar to product, design, and research teams from decades ago.

The Same Old Interview Analysis Workflow 

For most researchers, even in the best-case scenario, interview analysis can take several hours spread over multiple days. Yet in that same timeframe, in part thanks to new and emerging AI tools, an engineering team can design, build, test, and deploy new features. That just doesn't make sense.

The problem isn't that researchers lack tools. It's that they haven’t had the right ones. Most tools focus on transcription and storage, treating interviews like static documents rather than dynamic sources of intelligence. Testing with just 5 users can uncover 85% of usability problems, yet most teams struggle to complete even basic analysis in time to influence product decisions. Luckily, things are finally starting to change.

When it comes to user research, three things are happening in the industry right now that are forcing a transformation:

  1. The rise of AI means UX research matters more than ever. With AI accelerating product development cycles, the cost of building the wrong thing has never been higher. Companies that invest in UX early cut development time by 33-50%, and with AI, that advantage compounds exponentially.
  2. We're drowning in data with fewer resources. The need for UX research is increasing while, simultaneously, UX research teams are more resource-constrained than ever. Tasks like analyzing hours of video content to gather insights just aren’t something teams have time for anymore.
  3. AI finally understands research. AI has evolved to a place where it can actually provide valuable insights. Not just transcription. Real research intelligence that recognizes patterns, emotions, and the gap between what users say and what they actually mean.

A Dirty Little Research Secret + A Solution 

We’re just going to say it: most user insights from interviews never make it past the recording stage. When it comes to talking to users, the vast majority of researchers in our audience cite recruiting pain, because the most commonly discussed challenge around interviews is finding enough participants who match their criteria. But on top of the challenge of finding the right people to talk to, there’s another challenge that’s even worse: finding time to analyze what users tell us. What if you had a tool where, the moment you uploaded an interview video, AI surfaced key themes, pain points, and opportunities automatically? What if you could ask your interview footage questions and get back evidence-based answers with video citations?

This isn't about replacing human expertise; it's about augmenting it. AI-powered tools can process and categorize data within hours or days, significantly reducing workload. But more importantly, they can surface patterns and connections that human analysts might miss when rushing through analysis under deadline pressure. Thanks to AI, we're witnessing the beginning of a research renaissance, and a big part of that is reimagining the way we do user interviews.

Why AI for User Interviews is a Game Changer 

When interview analysis accelerates from weeks to hours, everything changes.

Product teams can validate ideas before building them. Design teams can test concepts in real time. Engineering teams can prioritize features based on actual user needs, not assumptions. Product, design, and research teams who embrace AI for these workflows will surface insights, generate evidence-backed recommendations, and influence product decisions at the speed of thought.

We know that 32% of all customers would stop doing business with a brand they loved after one bad experience. Talking to your users more often makes it possible to prevent these experiences by acting on user feedback before problems become critical. When every user insight comes with video evidence, when every recommendation links to supporting clips, when every user story includes the actual user telling it, research stops being opinion and becomes impossible to ignore. When you can more easily gather, analyze and share the content from user interviews those real user voices start to get referenced in executive meetings. Product decisions begin to include user clips. Engineering sprints start to reference actual user needs. Marketing messages reflect real user voices and language.

The best product, design, and research teams are already looking for tools that can support this transformation. They know that when interviews become intelligent, the entire organization becomes more user-centric. At Optimal, we're focused on improving the traditional user interview workflow by incorporating revolutionary AI features into our tools. Stay tuned for exciting updates on how we're reimagining user interviews.


Optimal vs. Great Question: Why Enterprise Teams Need Comprehensive Research Platforms

The decision between interview-focused research tools and comprehensive user insight platforms fundamentally shapes how teams generate, analyze, and act on user feedback. This choice affects not only immediate research capabilities but also long-term strategic planning and organizational impact. While Great Question focuses primarily on customer interviews and basic panel management with streamlined functionality, Optimal provides more robust capabilities, global participant reach, and advanced analytics infrastructure that the world's biggest brands rely on to build products users genuinely love. Optimal's platform enables teams to conduct sophisticated research, integrate insights across departments, and deliver actionable recommendations that drive meaningful business outcomes.

Why Choose Optimal over Great Question?

Strategic Research Capabilities vs. Interview-Centric Tools

Optimal's Research Leadership: Optimal delivers complete research capabilities spanning information architecture testing, prototype validation, card sorting, tree testing, first-click analysis, live site testing, and qualitative insights, all powered by AI-driven analysis and backed by 17 years of specialized research expertise that transforms user feedback into actionable business intelligence. Optimal's live site testing allows you to test actual websites and web apps without code, enabling continuous optimization and real-time insights post-launch.

Great Question's Limited Research Scope: In contrast, Great Question operates primarily as an interview scheduling and panel management tool with basic survey capabilities, lacking the comprehensive research methodologies and specialized testing tools that enterprise research programs require for strategic impact across the full product development lifecycle.

Enterprise-Ready Research Suite: Optimal serves Fortune 500 clients including Lego, Nike, and Netflix with SOC 2 compliance, enterprise security protocols, and a comprehensive research toolkit that scales with organizational growth and research sophistication.

Workflow Limitations: Great Question's interview-focused approach restricts teams to primarily qualitative methods, requiring additional tools for quantitative validation and specialized testing scenarios that modern product teams demand for comprehensive user understanding.

Participant Quality and Global Reach

Global Research Network: Optimal's 10M+ verified participants across 150+ countries enable sophisticated audience targeting, international market research, and reliable recruitment for any demographic or geographic requirement, from enterprise software buyers in Germany to mobile gamers in Southeast Asia.

Limited Panel Access: Great Question provides access to 3M+ participants with basic recruitment capabilities focused primarily on existing customer panels, limiting research scope for complex audience requirements and international market validation.

Advanced Participant Targeting: Optimal includes sophisticated recruitment filters, managed recruitment services, and quality assurance protocols that ensure research validity and participant engagement across diverse study requirements.

Basic Recruitment Features: Great Question focuses on CRM integration and existing customer recruitment without advanced screening capabilities or specialized audience targeting that complex research studies require.

Research Methodology Depth and Platform Capabilities

Complete Research Methodology Suite: Optimal provides full-spectrum research capabilities including advanced card sorting, tree testing, prototype validation, first-click testing, surveys, and qualitative insights with integrated AI analysis across all methodologies and specialized tools designed for specific research challenges.

Interview-Focused Limitations: Great Question offers elementary research capabilities centered on customer interviews and basic surveys, lacking the specialized testing tools enterprise teams need for information architecture, prototype validation, and quantitative user behavior analysis.

AI-Powered Research Operations: Optimal streamlines research workflows with automated analysis, AI-powered insights, advanced statistical reporting, and seamless collaboration tools that accelerate insight delivery while maintaining analytical rigor. Our new Interviews tool revolutionizes qualitative research: upload interview videos and let AI automatically surface key themes, generate smart highlight reels with timestamped evidence, and produce actionable insights in hours instead of weeks, eliminating the manual synthesis bottleneck.

Manual Analysis Dependencies: Great Question requires significant manual effort for insight synthesis beyond interview transcription, creating workflow inefficiencies that slow research velocity and limit the depth of analysis possible across large datasets.

Where Great Question Falls Short

Great Question may be a good choice for teams who are looking for:

  • Simple customer interview management without complex research requirements
  • Basic panel recruitment focused on existing customers
  • Streamlined workflows for small-scale qualitative studies
  • Budget-conscious solutions prioritizing low cost over comprehensive capabilities
  • Teams primarily focused on customer development rather than strategic UX research

When Optimal Delivers Strategic Value

Optimal becomes essential for:

  • Strategic Research Programs: When user insights drive business strategy, product decisions, and require diverse research methodologies beyond interviews
  • Information Architecture Excellence: Teams requiring specialized testing for navigation, content organization, and user mental models that directly impact product usability
  • Global Organizations: Requiring international research capabilities, market validation, and diverse participant recruitment across multiple regions
  • Quality-Critical Studies: Where participant verification, advanced analytics, statistical rigor, and research validity matter for strategic decision-making
  • Enterprise Compliance: Organizations with security, privacy, and regulatory requirements demanding SOC 2 compliance and enterprise-grade infrastructure
  • Advanced Research Operations: Teams requiring AI-powered insights, comprehensive analytics, specialized testing methodologies, and scalable research capabilities
  • Prototype and Design Validation: Product teams needing early-stage testing, iterative validation, and quantitative feedback on design concepts and user flows

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs and research sophistication.


The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 


UXDX Dublin 2024: Where Chocolate Meets UX Innovation

What happens when you mix New Zealand's finest chocolate with 870 of Europe's brightest UX minds? Pure magic, as we discovered at UXDX Dublin 2024!

A sweet start

Our UXDX journey began with pre-event drinks (courtesy of yours truly, Optimal Workshop) and a special treat from down under - a truckload of Whittaker's chocolate that quickly became the talk of the conference. Our impromptu card sorting exercise with different Whittaker's flavors revealed some interesting preferences, with Coconut Slab emerging as the clear favorite among attendees!

Cross-Functional Collaboration: More Than Just a Buzzword

The conference's core theme of breaking down silos between design, product, and engineering teams resonated deeply with our mission at Optimal Workshop. Andrew Birgiolas from Sephora delivered what I call a "magical performance" on collaboration as a product, complete with an unforgettable moment where he used his shoe to demonstrate communication scenarios (now that's what we call thinking on your feet!).

Purpose-driven design

Frank Gaine's session on organizational purpose was a standout moment, emphasizing the importance of alignment at three crucial levels:

- Company purpose
- Team purpose
- Individual purpose

This multi-layered approach to purpose struck a chord with attendees, reminding us that effective UX research and design must be anchored in clear, meaningful objectives at every level.

The art of communication

One of the most practical takeaways came from Kelle Link's session on navigating enterprise ecosystems. Her candid discussion about the necessity of becoming proficient in deck creation sparked knowing laughter from the audience. As our CEO noted, it's a crucial skill for communicating with senior leadership, board members, and investors - even if it means becoming a "deck ninja" (to use a more family-friendly term).

Standardization meets innovation

Chris Grant's insights on standardization hit home: "You need to standardize everything so things are predictable for a team." This seemingly counterintuitive approach to fostering innovation resonated with our own experience at Optimal Workshop - when the basics are predictable, teams have more bandwidth for tackling the unpredictable challenges that drive real innovation.

Building impactful product teams

Matt Fenby-Taylor's discussion of the "pirate vs. worker bee" persona balance was particularly illuminating. Finding team members who can maintain that delicate equilibrium between creative disruption and methodical execution is crucial for building truly impactful product teams.

Research evolution

A key thread throughout the conference was the evolution of UX research methods. Nadine Piecha's "Beyond Interviews" session emphasized that research is truly a team sport, requiring involvement from designers, PMs, and other stakeholders. This aligns perfectly with our mission at Optimal Workshop to make research more accessible and actionable for everyone.

The AI conversation

The debate on AI's role in design and research between John Cleere and Kevin Hawkins sparked intense discussions. The consensus? AI will augment rather than replace human researchers, allowing us to focus more on strategic thinking and deeper insights - a perspective that aligns with our own approach to integrating AI capabilities.

Looking ahead

As we reflect on UXDX 2024, a few things are clear:

  1. The industry is evolving rapidly, but the fundamentals of human-centered design remain crucial
  2. Cross-functional collaboration isn't just nice to have - it's essential for delivering impactful products
  3. The future of UX research and design is bright, with teams becoming more integrated and methodologies more sophisticated

The power of community

Perhaps the most valuable aspect of UXDX wasn't just the formal sessions, but the connections made over coffee (which we were happy to provide!) and, yes, New Zealand chocolate. The mix of workshops, forums, and networking opportunities created an environment where ideas could flow freely and partnerships could form naturally.

What's next?

As we look forward to UXDX 2025, we're excited to see how these conversations evolve. Will AI transform how we approach UX research? How will cross-functional collaboration continue to develop? And most importantly, which Whittaker's chocolate flavor will reign supreme next year?

One thing's for certain - the UX community is more vibrant and collaborative than ever, and we're proud to be part of its evolution. I’ve said it before and I’ll say it again, the industry has a very bright future. 

See you next year! We’ll remember to bring more Coconut Slab chocolate next time - it seems we've created quite a demand!


67 ways to use Optimal for user research

User research and design doesn’t fail because teams don’t care – it fails because there’s rarely time to explore every option. When deadlines pile up, most teams default to the same familiar research patterns and miss opportunities to get more value from the tools they already have.

We’ve brought together practical, real-world ways to use Optimal – from tree testing and first-click testing to card sorting, surveys, prototype testing, and interviews. Some of these use cases are obvious, but many aren’t. All of them are designed to help teams move faster, reduce risk, and turn user insights into decisions stakeholders trust.

We’ve focused on quick wins and flexible examples you can adapt to your own context – whether you’re benchmarking navigation, validating early designs, improving conversion flows, prioritizing work, or proving the ROI of UX. You don’t need more tools or more processes. You just need smarter ways to use what you already have.

Let’s get into it.

Practical ways to use Optimal for user research and UX design

#1 Benchmark your information architecture (IA)

Without a baseline for your navigation or information architecture (IA), you can’t easily tell if any changes you make have a positive effect. If you haven’t done so, benchmark your existing website with tree testing now. Upload your site structure and get results the same day. Now you’ll have IA scores to beat each month. Easy.

#2 Find out precisely where people get lost

Watch video recordings of real people interacting with your site using live site testing. Combine this with surveys and user interviews to understand where users struggled. You can also use the tree testing pietree to find out exactly where people are getting lost in your website structure and where they go instead.

#3 Start with one screenshot

If you’re just not sure where to begin then take a screenshot of your homepage, or any page that you think might have some issues and get going with first-click testing. Write up a string of things that people might want to do when they find themselves on this page and use these as your tasks. Surprise all your colleagues with a maddening heatmap or video recordings showing where people actually clicked in response to your tasks or where they struggle. Now you’ll have a better idea of which area of your site to focus on for your next step.

#4 Test live sites during discovery

You can run live site testing as part of your discovery phase to baseline your live experiences and see how well your current site supports real user goals. Test competitors' sites to see how you stack up. You’ll quickly uncover opportunities to differentiate your site, all before a single wireframe is drawn. All that's required is a URL and then you're set to go. No code needed.

#5 A/B test your site structure

Tree testing is great for testing more than one content structure. It’s easy to run two separate tree testing studies, even more than two. It’ll help you decide which structure you and your team should run with, and it won’t take you long to set them up.

#6 Optimize sign-up flows

Discover how easy (or not) it is for users to navigate your sign up experience to ensure it works exactly as intended. Create a live site or prototype test to identify any confusion or points of friction. You could also use this test to understand users' first impressions of your home or landing page. Where do they click first and what information is valuable to them?

#7 Make collaborative design decisions‍

Use surveys, first-click tests, and card sorting to get your team involved and let their feedback feed your designs: logos, icons, banners, images, the list goes on... For example, by creating a closed image sort with categories, your team can group designs based on their preferences, giving you quick feedback to help you figure out where to focus your efforts.

#8 Do your (market) research

Get a better sense of your users and customers’ motivations with surveys and user interviews. You can also find out what people actually want to see on your website with a card sort, by conducting an image sort of potential products. By providing categories like ‘I would buy this’, ‘I wouldn’t buy this’ to indicate their preferences for each item, you can figure out what types of products appeal to your customers.

#9 Customer satisfaction research with surveys and interviews

The thoughts and feelings of your users are always important. A simple survey or user interview can help you take a deeper look at your checkout process, a recently launched product or service, or even the packaging your product arrives in. Your options are endless.

#10 Start testing prototypes

Companies that incorporate prototype testing in their design process can reduce development costs by 33%. Use prototype testing to ensure your designs hit the mark before you invest too heavily in the build. Build your own prototype with images in Optimal or import a Figma file. You can even test AI-generated prototypes from tools like Lovable or Magic Patterns by dropping the URL into live site testing.

#11 Crowdsource content ideas

Whether you’re running a blog or a UX conference, surveys can help you generate content ideas and understand any knowledge gaps that might be out there. Figure out what your users and attendees like to read on your blog, or what they want to hear about at your event, and let this feed into what you offer.

#12 Evaluate user flows

Sometimes a change in your product or service means you have to change how it’s presented to your existing customers.  Ensure your customers understand the changes to your product or service with prototype and live site testing. Identify issues with user flow, content, or layout that may confuse them. Discover which options they’re most likely to choose with the updates. Uncover what truly matters to your customers.

#13 Quantify the return on investment of UX

Some people, including UX Agony Aunt, define return on UX as time saved, money made, and people engaged. By attaching a value to the time spent completing tasks, or to successful completion of tasks, you can approximate an ROI or at least illustrate the difference between two options.
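As a minimal sketch of that framing, here is one way to put numbers on time saved (all figures and parameter names below are hypothetical placeholders for illustration, not Optimal benchmarks):

```python
# Rough sketch: value the time a redesign saves on a task and compare it
# with what the research and design work cost. Every number here is made up.

def ux_roi(old_task_minutes, new_task_minutes, tasks_per_month,
           hourly_cost, research_cost, months=12):
    """Return ROI as a ratio: (value of time saved - research cost) / research cost."""
    minutes_saved = (old_task_minutes - new_task_minutes) * tasks_per_month * months
    value_saved = (minutes_saved / 60) * hourly_cost
    return (value_saved - research_cost) / research_cost

# Example: a redesign cuts a task from 6 to 4 minutes, performed 2,000 times a month,
# at $40/hour of user time, after $15,000 of research and design work.
print(f"ROI over 12 months: {ux_roi(6, 4, 2000, 40, 15_000):.0%}")  # -> 113%
```

The same structure works for "money made" (attach a value to successful task completion instead of minutes saved), which is often the easier story to tell stakeholders.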

#14 Convince your stakeholders with highlight reels

User interviews are teeming with insights but can be time- and resource-intensive to analyze without automation. Use Optimal's Interviews tool to capture key moments, reactions, and pain points with automated highlight reels and clips. These are perfect for storytelling, stakeholder buy-in, and keeping teams connected to who they’re building for.

#15 Prioritize upcoming work 

Survey your organization to build a list of ideas for upcoming work. Understand your audience’s priorities with card sorting to inform your feature development. Categorize your upcoming work ideas to decide collectively what’s best to take on next. Great for clarifying what the team considers the most valuable or pressing work to be done.

#16 Reduce content on landing pages to what people access regularly

Before you run an open card sort to generate new category ideas, you can run a closed card sort to find out if you have any redundant content. Say you wanted to simplify the homepage of your intranet. You can ask participants to sort cards (containing homepage links) based on how often they use them. You could compare this card sort data with analytics from your intranet and see if people’s actual behavior and perception are well aligned.

#17 Create tests to fit in your onboarding process

Onboarding new customers is crucial to keeping them engaged with your product, especially if it involves your users learning how to use it. You can set up a quick study to help your users stay on track with onboarding. For example, say your company provided online email marketing software. You can set up a first-click testing study using a photo of your app, with a task asking your participants where they’d click to see the open rates for a particular email that went out.

#18 Input your learnings and observations from a UX conference with qualitative insights

If you're lucky enough to attend a UX conference, you can now share the experience with your colleagues. You can easily jot down ideas, quotes, and key takeaways in a Qualitative Insights project and keep your notes organized by using a new session for each presenter. Bonus: if you’re part of a team, they can watch the live feed rolling into Qualitative Insights!


#19 Multivariate testing

Tree testing and first-click testing allow you to compare multiple versions of content structures, designs, or flows. You can also compare how users engage with different live websites in one study. This helps decide the best-performing option without guessing.

#20 Do some sociological research

Using card sorting for sociological research is a great way to deepen your understanding of how different groups may categorize information. For example, by looking at how young people group popular social media platforms, you can understand the relationships between them, and identify where your product may fit in the mix. Then, follow up with surveys or moderated interviews for deeper insights. 

#21 Test your FAQs page with new users

Your support and knowledge base within your website can be just as important as any other core action on your website. If your support site is lacking in navigation and UX, this will no doubt increase support tickets and resources. Make sure your online support section is up to scratch. Here’s an article on how to do it quickly.

#22 Establish which tags or filters people consider to be the most important

Create a card sort with your search filters or tags as labels, and have participants rank them according to how important they consider them to be. Analytics can tell you half of the story (where people actually click), so the card sort can give another side: a better idea of what people actually think or want. Follow up with surveys or interviews to confirm insights.

#23 Figure out if your icons need labels

Figure out if your icons are doing their job by testing whether your users understand them as intended. Upload icons you currently use, or plan to use in your interface, to first-click testing, and ask your users to identify their meaning using post-task questions.

#24 Get straight to the aha! moments

Optimal Interviews gives you automated insights but you can also engage with AI Chat to dive deeper. Ask AI specific questions about a feature or process or request quotes or examples. Then, get highlight reels and clips to match.


#25 Improve website conversions

Make the marketing team’s day by doing a fast improvement on some core conversions on your website. There are loads of ways to improve conversions for a checkout cart or signup form, but using first-click testing to try out ideas before you launch a live A/B test takes mere minutes and gives your B version a confidence boost. For deeper insights, try a live site test.

#26 Test your mobile experience or web app

As more and more people use their smartphones for apps and to browse sites, you need to ensure your mobile design gives users a great experience. Test your mobile site to ensure people aren’t getting lost in the mobile version of your site. If you haven’t got a mobile-friendly design yet, now’s the time to start designing it!

#27 Get automated transcripts

Have a number of interviews you need to transcribe quickly? Upload up to 20 interviews at once in Optimal Interviews and get automated transcripts, so you can spend less time on admin and more time digging into insights.

#28 Reduce the bounce rates of certain sections of your website‍

People jumping off your website and not continuing their experience is something (depending on the landing page) everyone tries to improve. Metrics like ‘time on site’ and ‘average page views’ show the value your whole website has to offer. Again, there are many different ways to do this, but one big reason for people jumping off the website is not being able to find what they’re looking for. Use prototype testing or live site testing to watch users in action and understand where things break down.

#29 Test your website in different countries‍

No, you don’t have to spend thousands of dollars to travel to all these countries to test, although that’d be pretty sweet. You can recruit research participants from all over the world remotely, using our integrated recruitment panel. Start seeing how different cultures, languages, and countries interact with your website.

#30 Preference test

Whether you’re coming up with a new logo design, headline, featured image, or anything, you can preference test it with first-click testing. Create an image that shows the two designs side by side and upload it to first-click testing. From there, you can ask people to click whichever one they prefer!  If you want to track multiple clicks per task or watch recordings, use prototype testing instead.


#31 Test visual hierarchy with first-click testing

Use first-click testing to understand which elements draw users' attention first on your page. Upload your design and ask participants to click on the most important element, or what catches their eye first. The resulting heatmap will show you if your visual hierarchy is working as intended - are users clicking where you expect them to? This technique helps validate design decisions about sizing, color, positioning, and contrast without needing to build the actual page.


#32 Tame your blog or knowledge base

Get the tags and categories in your blog under control to make life easier for your readers. Set up a card sort and use all your tags and categories as card labels. Either use your existing ones or test a fresh set of new tags and categories.

#33 Use AI Chat for stakeholder-ready outputs

Use AI-powered chat to instantly reformat interview insights and fast-track deliverables for different audiences. Simply specify the details of the deliverable you would like. For example: “Turn this into a 3-sentence Slack summary (no citations).” or “Rewrite this as an exec-ready insight with a clear recommendation.”

‍#34 Validate the designs in your head

As designers, you’ve probably got umpteen designs floating around in your head at any one time. But which of these are really worth pursuing? Figure this out by using Optimal to test out wireframes of new designs before putting any more work into them.

#35 Optimize the support escalation flow

Understand how users navigate help resources, report issues, and conceptualize support categories, especially when they need to locate assistance quickly in time-sensitive situations.

#36 Improve your search engine optimization (SEO) with tree testing

Yes, a good IA improves your SEO. Tree testing helps you understand how people navigate throughout your site. It also helps search engines better understand and index your content, making it more discoverable and relevant in search results. Make sure people can easily find what they’re looking for, and you’ll start to see improvement in your search engine ranking.

#37 Prioritize features and get some help with your roadmap

Find out what people think are the most important next steps for your team. Set up a survey or card sort and ask people to categorize items and rank them in descending order of importance or impact on their work. This can also help you gauge their thoughts on potential new features for your site, and for bonus points compare team responses with customer responses.

#38 Define your brand tone of voice

Use a card sort to understand how people perceive your brand, so you can shape or refine your brand personality, tone of voice, and style guidelines. Run this with stakeholders or your audience to uncover current perceptions and where they’d like your brand to go next.

#39 Run an Easter egg hunt using the correct areas in first-click testing

Liven up the workday by creating a fun Easter egg hunt in first-click testing. Simply upload a photo (like those really hard “spot the X” photos), set the correct area of your target, then send out your study with participant identifiers enabled. You can also send these out as competitions and have closing rules based on time, number of participants, or both.

#40 Test your home button

Would an icon or text link work better for navigating to your home page? Before you go ahead and make changes to your site, you can find out by setting up a first-click test.

#41 Improve team structure and clarify role expectations

Run a card sort, survey, or internal interviews to understand how responsibilities are perceived across different roles. Work with team leaders and managers to clarify role definitions, reporting lines, and decision-making authority. This helps uncover overlapping responsibilities and opportunities to streamline management and support team workflows.

#42 ‘Buy now’ button and shopping cart visibility

If you’re running an e-commerce site, ease of use and a great user experience are crucial. To see if your shopping cart and checkout processes are as good as they can be, look into running a live site, prototype or first-click test.

#43 Website periodic health checks

Raise the visibility of good IA by running periodic IA health checks using tree testing and reporting the results. Proactively identifying structural issues early, and backing decisions with clear metrics, helps drive alignment and build confidence across stakeholders.

‍#44 Use heatmaps to get the first impressions of designs

Heatmaps in our first-click testing tool are a great way of getting first impressions of any design. You can see where people clicked (correctly and incorrectly), giving you insights on what works and doesn’t work with your designs. Because it’s so fast to test, you can iterate until your designs start singing.

#45 Focus groups with interviews

Thinking of launching a new product, app or website, or seeking opinions on an existing one? Remote focus groups can provide you with a lot of candid information that may help get your project off the ground. They’re also dangerous because they’re susceptible to groupthink, design by committee, and tunnel vision. Use with caution, but if you do then upload your recordings to Interviews for automated insights! Find patterns across sessions and use AI Chat to dig deeper. Pay attention to emotional triggers.

#46 Gather opinions with surveys

Whether you want the opinions of your users or from members of your team, you can set up a quick and simple survey. It’s super useful for getting opinions on new ideas (consider it almost like a mini-focus group), or even for brainstorming with teammates.

#47 Prioritise content

Use a card sort to understand what content matters most to people, so you can plan what to write first. Ask participants which information is most useful or which tasks they do most often. You can also run this after a top tasks survey to help shape your long list of content.

#48 Test a new concept

Got an idea you want to sanity-check before investing more time? Use surveys, first-click testing, or prototype testing to see if people understand the concept and find it valuable. A quick test now can save a lot of rework later.


#49 Run an image card sort to organize products into groups

You can add images to each card, allowing you to understand how your participants might organize and label particular items. Very useful if you want to organize some retail products and find out how other people would group them given visual cues such as shape, color, and other context.

#50 Guerrilla testing with first-click testing

For really quick first-click testing, take first-click testing on a tablet, mobile device or laptop to a local coffee shop. Ask people standing in line if they’d like to take part in your super quick test in exchange for a cup of joe. Easy!

#51 Test your search box

Case study by Viget: “One of the most heavily used features of the website is its keyword search, so we wanted to make absolutely certain that our redesigned search box didn’t make search harder for users to find and use.” Use first-click testing to test different variations. 

#52 Run a Net Promoter Score (NPS) survey

Optimal surveys give you plenty of question options, but one of the simplest ways to take the pulse of your product is an NPS survey to find out how likely customers are to recommend your product or brand. Use the out-of-the-box NPS question type to quickly understand customer sentiment and track it over time.
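For context, the arithmetic behind the score is simple: respondents scoring 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch, with made-up responses:

```python
# NPS from a list of 0-10 "how likely are you to recommend us?" responses.
# Scores here are invented for illustration only.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 10, 6, 9, 4, 10]
print(f"NPS: {nps(responses):.0f}")  # 6 promoters, 2 detractors, 10 responses -> 40
```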

#53 Run an empathy test

Empathy – the ability to understand and share the experience of another person – is central to the design process. An empathy test is another great tool to use in the design phase because it enables you to find out if you are creating the right kind of feelings with your user. Take your design and show it to users. Provide them with a variety of words that could represent the design – for example “minimalistic”, “dynamic”, or “professional” – and ask them to pick out which words they think best suit their experience.

#54 Compare and test email designs

Drop your email designs into first-click testing to see which version people prefer and where they click first. Use these insights to refine your layout, hierarchy, and calls to action to improve engagement and conversions.

#55 Source-specific data with an online survey

Online survey tools can complement your existing research by sourcing specific information from your participants. For example, if you need to find out more about how your participants use social media, which sites they use, and on which devices, you can do it all through a simple survey questionnaire. Additionally, if you need to identify usage patterns, device preferences or get information on what other products/websites your users are aware of/are using, a questionnaire is the ticket.

#56 Make sure you get the user's first-click right

Like most things, read a little, and then it’s all about practice. We’ve found that people who get the first click correct are almost three times as likely to complete a task successfully. Get your first clicks right in tree testing and first-click testing and you’ll start seeing your customers smile.

#57 Destroy evil attractors in your tree

Evil attractors are those labels in your IA that attract unjustified clicks across tasks. This usually means the chosen label is ambiguous, or possibly a catch-all phrase like ‘Resources’. Read how to quickly identify evil attractors in the Destinations table of tree test results and how to fix them.


#58 Ensure accessibility and inclusion

Check how people with different physical, visual, or cognitive needs move through your content, and spot any areas that might slow them down or cause confusion. Use what you uncover to remove friction and support all users.

#59 Add moderated card sort results to your card sort‍

An excellent way of gathering valuable qualitative insights alongside the results of your remote card sorts is to run a moderated version of the sorts with a smaller group of participants. When you can observe and interact with your participants as they complete the sort, you’ll be able to ask questions and learn more about their thought processes and the reasons why they have categorized things in a particular way.

#60 Test your customers' perceptions of different logo and brand image designs

Understand how customers perceive your brand by creating a closed card sort. Come up with a list of categories, and ask participants to sort images such as logos, and branded images.

#61 Run an open image card sort to classify images into groups based on the emotions they elicit

‍Are these pictures exhilarating, or terrifying? Are they humorous, or offensive? Relaxing, or boring? Productive, or frantic? Happy memories, or a deep sigh?

#62 Crowd-source the values you want your team/brand/product to represent

Card sorting is a well-established technique in the ‘company values’ realm, and there are some great resources to help you and your team brainstorm the values you represent. These ‘in-person’ brainstorm sessions are great, and you can run a remote closed card sort to support your findings. And if you want feedback from more than a small group of people (if your company has, say, more than 15 staff) you can run a remote closed card sort on its own. Use Microsoft’s Reaction Card Method as card inspiration.

#63 Test physical and digital experiences together

Use recorded videos and interviews to observe people interacting with physical products, kiosks, or mobile apps in real-world contexts. Record sessions, capture moments of friction, and bring those insights back into Optimal’s Interviews tool for automated insights.

#64 HR exercises to determine the motivations of your team

It’s simple to ask your team about their thoughts, feelings, and motivations with a survey. You can choose to leave participant identifiers blank (so responses are anonymous), or you can ask for a name/email address. As a bonus, you can set up a calendar reminder to send out a new survey in the next quarter. Duplicate the survey and send it out again!

#65 Designing physical environments

If your company has a physical environment that your customers visit, you can research new structures using a mixture of tools in Optimal. This especially comes in handy if your customers require certain information within the physical environment in order to make decisions. For example, picture a retail store. Are all the signs clear, and do they communicate the right information? Are people overwhelmed by the physical environment?

#66 Run an image card sort to organize your library

Whether it’s a physical library of books, or a digital drive full of ebooks, you can run a card sort to help organize them in a way that makes sense. Will it be by genre, author name, color or topic? Send out the study to your coworkers to get their input! You can also do this at home for your own personal library, and you can include music/CDs/vinyl records and movies!

#67 Use tree testing to refine an interactive phone menu system

Similar to how you’d design an IA, you can create a tree test to design an automated phone system. Whether you’re designing from the ground up, or improving your existing system, you will be able to find out if people are getting lost.

Practical ways to use Optimal for user research (and get value fast)

And that’s the list. This is not everything you can do with Optimal, but a solid reminder that meaningful user insights don’t have to be slow, heavy, or overcomplicated. Small, well-timed studies can uncover friction, validate decisions, and create momentum across teams.

Ready to get started?

Have a creative use case we missed? Let us know, we’re always learning from the ways our customers push research further, faster, and smarter.


Accelerate insights with transcripts in Qualitative Insights

The accuracy of your data collection is crucial in qualitative research. It is vital that nothing is lost in translation or simply missed from the point of collection to analysis, and our latest release makes this even easier to achieve. You can now directly import interview transcripts into Qualitative Insights (previously known as Reframer), allowing you and your team to capture and tag observations effortlessly while maintaining the integrity of the information. Get ready to experience a new level of efficiency in your qualitative research!

The importance of transcription ✍🏽

Whether you are conducting interviews alone or with the support of your team, it’s important to prioritize building connections with participants rather than struggling to take notes and ask the right questions. Transcripts ensure you avoid losing crucial insights and context as you move from data collection to analysis and reduce the likelihood of human errors and missed observations that sometimes occur during live note-taking sessions. 

It also enables smooth collaboration among team members by allowing them to review interviews and contribute to the analysis, even if they weren't present.

How to import a transcript to Qualitative Insights

Watch the video 📽️ 👀

You can add a transcript to a new or existing study in Qualitative Insights with just a few clicks. After recording an interview or user testing session, open your Qualitative Insights study and click ‘Sessions’ then ‘+ Transcript.’

Add a session title, any session information or a link to the video for future reference in the session information box. If you have created segments, choose which ones apply to this participant; you can update these later at any time. Then click ‘import transcript.’

Click ‘Select transcript’ and make sure you have made any edits before importing it. This feature supports .vtt, .srt, or .txt files. Now, click ‘Capture observations’ to complete the import and create and tag your observations.

You will see your transcript displayed. If you use a .vtt or .srt file, you will see the speaker names have been identified. You can update the speaker names by clicking ‘Configure speakers’.
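If you are preparing transcript files yourself, a minimal .vtt file with speakers marked looks something like the sketch below. It uses standard WebVTT voice tags; whether your recording tool exports speakers this way (or Optimal detects a different convention) is an assumption worth checking against one of your own files.

```
WEBVTT

00:00:01.000 --> 00:00:06.500
<v Interviewer>Thanks for joining today. Can you walk me through how you usually start this task?

00:00:07.000 --> 00:00:15.000
<v Participant>Sure. The first thing I do is search for the order number, because the menu labels never make sense to me.
```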

How to create observations

To create observations from your transcript, simply highlight text, enter a new tag or select an existing one, then click create an observation.

There is no limit to how many transcripts you can import. This means you can import all your past and future interviews, ensuring all your research data is in one place for easy access and analysis.

