Learn hub

Get expert-level resources on running research, discovery, and building
an insights-driven culture.


Live Site Testing Without Code: 5 Key Takeaways

Live site testing is now part of the Optimal platform and is designed to give you real insights from real user interactions without code, installs, or complicated setups. 

If you missed our recent Live Site Testing Training webinar or want a refresher, we’ll get you up to speed with this recap of all the key insights. 

What is Live Site Testing?


Optimal’s live site testing lets you watch users navigate any website or web app, including your own staging or production site, or even competitor experiences. It’s all about understanding how people behave in the environments they actually use, helping you identify friction points you might otherwise miss.

Key Takeaways From the Training


1. Context Is Everything


In usability research, the “real world” often looks very different from controlled prototype tests. People use their own devices, have distractions, and bring patterns and expectations shaped by real life. Foundational research shows the richest insights often come from observing users in these real contexts.

Live site testing is built to reflect that reality, helping you answer not just whether someone completes a task, but how they approach it and why they struggle.

2. Testing Is Fast and Friction‑Free


Historically, one of the biggest barriers to live site testing has been complexity: code snippets, browser extensions, and technical setup. Optimal’s tool removes all that friction so you can see natural behaviour without influence or disruption:

  • No code or installs required
  • Paste a URL and you’re ready to go
  • You can test as often as you want - during discovery, before launch, after launch, or anytime in between - and any site you want

3. Design Tests With Real‑Life Scenarios


When crafting tasks for live site testing, think about real user goals. Asking people to complete realistic tasks (e.g., find a product, book a flight, compare two pages) and encouraging them to think out loud leads to much richer insights than purely metric‑focused tests. You can also mix tasks with survey questions for quantitative data. 

4. Participant Experience Is Built for Natural Interaction


A big part of getting real behavior is ensuring participants feel comfortable and unencumbered. Optimal’s built-in task window is readily available when needed but otherwise minimizes to stay out of the way. This flow helps people stay focused and act naturally, which directly improves the quality of insights you collect.

5. Combine Live Site Testing with Optimal’s Interviews Tool


For even deeper insights, pair live site testing with Optimal Interviews. Once you upload live site testing recordings to Interviews, you get automated insights, transcripts, summaries, and highlight reels. You can also explore further with AI chat to quickly uncover quotes, compare experiences, and answer ad‑hoc questions.

This combination doesn’t just make analysis faster; it helps you convince stakeholders with clear, digestible, and compelling evidence from real user behaviour. Instead of long reports, you can present snackable, actionable insights that drive alignment and decisions.


Looking Ahead


We’re evolving live site testing at Optimal with solution testing, a multi-method approach that combines prototypes, live sites, and surveys in a single study. This will let teams capture even richer insights with think-aloud tasks, automated analysis, highlight reels, and AI chat, making it faster and easier to understand user behavior and share compelling findings.


FAQ Highlights


Can you test staging or test environments and sites behind a password or firewall?

Yes, Optimal's live site testing tool works with any URL, including staging and test environments as well as sites behind a password or firewall.

You can share specific instructions with participants before they start. For example, if participants need to create an account and you don’t want that recorded, you can ask them to do this in advance via the welcome screen. That way, when the study begins, they’re already logged in.

Will live site testing affect my live website or real data?
No. Participants in a live site test cannot make any changes to your website or its data.


What permissions are needed to test competitor websites?
With Optimal’s live site testing, you don't need special approval or permissions to evaluate public competitors' experiences.


Access the Training


If you want to experience the full walkthrough, demo, and Q&A from the session, we encourage you to watch the full webinar! You’ll learn how to start running your own live site tests and uncover real user behavior, plus pick up tips and best practices straight from the training.


👉 Watch the full training webinar here.


5 User Research Workflows That Drive Decisions

59% of teams say that without research, their decisions would be based on assumptions. Yet only 16% of organizations have user research fully embedded in how they work.

That gap explains why so much research never influences what gets built.

Teams do the work – they run studies, gather insights, document findings. But when research tools are too complex for most people to use, when getting insights takes weeks instead of days, when findings arrive after decisions are already made, research becomes documentation, not direction.

The problem isn't research quality. It's that most user research processes don't match how product teams actually make decisions. Research platforms are too complex, so only specialists can run studies. Analysis takes too long, so teams ship before insights are ready. Findings arrive as 40-slide decks, so they get filed away instead of acted on.

The teams getting research to influence decisions aren't running more studies. They're running connected workflows that answer the right questions at the right time, with insights stakeholders can't ignore.

Here are five workflows that make this happen.

1. Understand what competitors get right (and wrong)

Your team is redesigning checkout, and leadership wants competitive intelligence. But screenshot comparisons and assumptions won't cut it when you're trying to justify engineering time.

Here's the workflow:

Start with Live Site Testing to observe how real users navigate competitor experiences. Watch where they hesitate, what they click first, where they abandon the process entirely. You're not analyzing what competitors built; you're seeing how users actually respond to it.

Follow up with Interviews to understand the why behind the behavior. Upload your live site test recordings and use AI analysis to surface patterns across participants. That random dropout? It's actually a theme: users don't trust the security badge placement because it looks like an ad.

Validate your redesign with Prototype Testing before you commit to building it. Test your new flow against the competitor's current experience and measure the difference in task success, time on task, and user confidence.

What stakeholders see: Video evidence of where competitors fail users, quantitative data proving your concept performs better, and AI-generated insights connecting behavior to business impact. 

2. Ship features users will actually use

Product wants to launch a new feature. You need to make sure it won't just join the graveyard of functionality nobody touches.

Here's the workflow:

Use Surveys to understand current user priorities and pain points. Deploy an intercept survey on your live site to catch people in context, not just those who respond to email campaigns. Find out what problems they're actually trying to solve today.

Build it out in Prototype Testing to see whether users can find, understand, and successfully use the feature. Validate key interactions and task flows before engineering writes a line of code. Where do users expect to find this feature? Can they easily complete the task you're building it for? Do they move through the flow as intended?

Conduct Interviews to explore the edge cases and mental models you didn't anticipate. Use AI Chat to query across all your interview data: "What concerns did users raise about data privacy?" Get quotes, highlight reels, and themes that answer questions you didn't think to ask upfront.

What stakeholders see: Evidence that this feature solves a real user problem, proof that users can find it where you're planning to put it, and specific guidance on what could cause adoption to fail.

3. Fix navigation without rebuilding blindly

Your information architecture is a mess. Years of adding features mean nobody can find anything anymore. But reorganizing based on what makes sense to internal teams is how you end up with labels and structures that don’t resonate with users.

Here's the workflow:

Run Card Sorting to understand how users naturally categorize your content. What your team calls "Account Settings," users call "My Profile." What seems logical internally could be completely foreign to the people who actually use your product.

Validate your structure with Tree Testing before you commit to rebuilding. Test multiple organizational approaches and use the task comparison tool to see which structure helps users complete critical tasks. Can they find what they need, or are you just rearranging deck chairs?

Use Live Site Testing to see how people struggle with your current navigation in practice. Watch them hunt through menus, resort to search as a last-ditch effort, give up entirely. Then test your new structure the same way to measure actual improvement, not just theoretical better-ness.

Upload recordings to Interviews for AI-powered analysis. Get clear summaries of common pain points, highlight reels of critical issues, and stakeholder-friendly video clips that make the case for change.

What stakeholders see: Your redesign isn't based on internal preferences. It's based on how users think about your content, validated with task completion data, and backed by video proof of improvement.

4. Boost conversions with evidence from users

Leadership wants to know why conversion rates are stuck. You have theories about friction points, but theories don't justify engineering sprints.

Here's the workflow:

Deploy Surveys with intercept snippets on your live site. Ask people in the moment what they're trying to accomplish and what's stopping them. Surface objections and confusion you wouldn't discover through internal speculation. This solves two problems: you get feedback from actual users in context, and you avoid the participant recruitment challenge that 41% of researchers cite as a top obstacle.

Run Live Site Testing to watch users actually attempt to convert. See where they hesitate before clicking "Continue," what makes them abandon their cart, which form fields cause them to pause and reconsider.

Run First-Click Testing to identify navigation barriers to conversion. Test whether users can find the path that leads to conversion - like locating your pricing page, finding the upgrade plan button, identifying the right product category, or comparing different products to each other. Users who make a correct first click are 3X more likely to complete their task successfully, so this quickly reveals when poor navigation or unclear signage is killing conversion.

Test proposed fixes with Prototype Testing before rebuilding anything. If you think the problem is an unclear value proposition, test clearer messaging. If you think it's a trust issue, test different social proof placements. Compare task success rates between your current flow and proposed changes.

Use Interviews to understand the emotional and practical barriers underneath the behavior. AI analysis helps you spot patterns: it's not that your pricing is too high, it's that users don't understand what they're getting for the price, or why your option is better than competitors.

What stakeholders see: Exactly where users drop off, why they drop off, and measured improvement from your proposed solution, all before engineering builds anything.

5. Make research fast enough to actually matter

Your product team ships every two weeks. Research that takes three weeks to complete is documentation of what you already built, not input into decisions.

Here's the workflow:

Build research into your sprint cycles by eliminating the manual overhead. Use Surveys for quick pulse checks on assumptions. Deploy a tree test in hours to validate a navigation change before sprint planning, not after the feature ships. Invite your own participants, use Optimal's on-demand panel for fast turnaround, or leverage managed recruitment when you need specific, hard-to-reach audiences.

Leverage AI to handle the time-consuming work in Interviews. Upload recordings and get automated insights, themes, and highlight reels while you're planning your next study. What used to take days of manual review now takes minutes of focused analysis. AI also automatically surfaces patterns in survey responses and pre/post-task feedback across your studies, so you're finding insights faster regardless of method.

Test current and proposed experiences in parallel. Use Live Site Testing and Prototype Testing to baseline the problem with your current experience, while simultaneously testing your solution. Compare results side-by-side to show measurable improvement, not just directional feedback. Tree testing has built-in task comparison so you can directly measure navigation performance between your existing structure and proposed changes.

Share results in tools your team actually uses. Generate highlight reels for stand-ups, pull specific quotes for Slack threads with AI Chat, export data for deeper stakeholder analysis. Research findings that fit into existing workflows get used. Research that requires everyone to change how they work gets ignored.

What stakeholders see: Research isn't the thing slowing down product velocity. It's the thing making decisions faster and more confident. Teams do more research because research fits their workflow, not because they've been told they should.

The pattern: What actually makes user research influential

Most organizations struggle to embed user research into product development. Research happens in disconnected moments rather than integrated workflows, which is why it often feels like it's happening to teams rather than with them.

Closing that gap requires two shifts: building a user research process that connects insights across the entire product cycle, and making research accessible to everyone who makes product decisions.

That's the workflow advantage: card sorting reveals how people naturally categorize and label content, tree testing validates structure, surveys surface priorities, live site testing shows real behavior, prototype testing confirms solutions, interviews provide context, and AI analysis handles synthesis. Each method is designed for speed and simplicity, so product managers can validate assumptions, designers can test solutions, and researchers can scale their impact without becoming bottlenecks.

The workflows we covered - understanding competitors, shipping features users will use, fixing navigation, boosting conversions, and matching product velocity - all follow this same pattern: the right combination of UX research methods, deployed at the right moment, analyzed fast enough to matter, and accessible to the entire product team.

Ready to see how these user research workflows work for your team? Explore Optimal's platform or talk to our team about your specific research challenges.


Optimal 3.0: Built to Challenge the Status Quo

A year ago, we looked at the user research market and made a decision.

We saw product teams shipping faster than ever while research tools stayed stuck in time. We saw researchers drowning in manual work, waiting on vendor emails, stitching together fragmented tools. We heard "should we test this?" followed by "never mind, we already shipped."

The dominant platforms got comfortable. We didn't.

Today, we're excited to announce Optimal 3.0, the result of refusing to accept the status quo and building the fresh alternative teams have been asking for.

The Problem: Research Platforms Haven't Evolved

The gap between product velocity and research velocity has never been wider. The situation isn't sustainable. And it's not the researcher's fault. The tools are the problem. They’re: 

  • Built for specialists only - Complex interfaces that gatekeep research from the rest of the team
  • Fragmented ecosystems - Separate tools for recruitment, testing, and analysis that don't talk to each other
  • Data in silos - Insights trapped study-by-study with no way to search across everything
  • Zero integration - Platforms that force you to abandon your workflow instead of fitting into it

These platforms haven't changed because they don't have to, so we set out to challenge them.

Our Answer: A Complete Ecosystem for Research Velocity

Optimal 3.0 isn't an incremental update to the old way of doing things. It's a fundamental rethinking of what a research platform should be.

Research For All, Not Just Researchers.

For 18 years, we've believed research should be accessible to everyone, not just specialists. Optimal 3.0 takes that principle further.

Unlimited seats. Zero gatekeeping.

Designers can validate concepts without waiting for research bandwidth. PMs can test assumptions without learning specialist tools. Marketers can gather feedback without procurement nightmares. Research shouldn't be rationed by licenses or complexity. It should be a shared capability across your entire team.

A Complete Ecosystem in One Place.

Stop stitching together point solutions. Optimal 3.0 gives you everything you need in one platform:

Recruitment Built In

Access millions of verified participants worldwide without the vendor lag. Target by demographics, behaviors, and custom screeners. Launch studies in minutes, not days. No endless email chains. No procurement delays.

Learn more about Recruitment

Testing That Adapts to You

  • Live Site Testing: Test any URL (your production site, staging, or competitor experiences) without code or developer dependencies
  • Prototype Testing: Connect Figma and go from design to insights in minutes
  • Mobile Testing: Native screen recordings that capture the real user experience
  • Enhanced Traditional Methods: Card sorting, tree testing, first-click tests, the methodologically sound foundations we built our reputation on

Learn more about Live Site Testing

AI-Powered Analysis (With Control)

Interview analysis used to take weeks. We've reduced it to minutes.

Our AI automatically identifies themes, surfaces key quotes, and generates summaries, while you maintain full control over the analysis.

As one researcher told us: "What took me 4 weeks to manually analyze now took me 5 minutes."

This isn't about replacing researcher judgment. It's about amplifying it. The AI handles the busywork: tagging, organizing, timestamping. You handle the strategic thinking and judgment calls. That's where your value actually lives.

Learn more about Optimal Interviews

Chat Across All Your Data

Your research data is now conversational.

Ask questions and get answers instantly, backed by actual video evidence from your studies. Query across multiple Interview studies at once. Share findings with stakeholders, complete with supporting clips.

Every insight comes with the receipts. Because stakeholders don't just need insights, they need proof.

A Dashboard Built for Velocity

See all your studies, all your data, in one place. Track progress across your entire team. Jump from question to insight in seconds. Research velocity starts with knowing what you have.

Explore the new dashboard

Integration Layer

Optimal 3.0 fits your workflow. It doesn't dominate it. We integrate with the tools you already use (Figma, Slack, your existing tech stack) because research shouldn't force you to abandon how you work.

What Didn't Change: Methodological Rigor

Here's what we didn't do: abandon the foundations that made teams trust us.

Card sorting, tree testing, first-click tests, surveys: the methodologically sound tools that Amazon, Google, Netflix, and HSBC have relied on for years are all still here. Better than ever.

We didn't replace our roots. We built on them.

18 years of research methodology, amplified by modern AI and unified in a complete ecosystem.

Why This Matters Now

Product development isn't slowing down. AI is accelerating everything. Competitors are moving faster. Customer expectations are higher than ever.

Research can either be a bottleneck or an accelerator.

The difference is having a platform that:

  • Makes research accessible to everyone (not just specialists)
  • Provides a complete ecosystem (not fragmented point solutions)
  • Amplifies judgment with AI (instead of replacing it)
  • Integrates with workflows (instead of forcing new ones)
  • Lets you search across all your data (not trapped in silos)

Optimal 3.0 is built for research that arrives before the decision is made. Research that shapes products, not just documents them. Research that helps teams ship confidently because they asked users first.

A Fresh Alternative

We're not trying to be the biggest platform in the market.

We're trying to be the best alternative to the clunky tools that have dominated for years.

Amazon, Google, Netflix, Uber, Apple, Workday: they didn't choose us because we're the incumbent. They chose us because we make research accessible, fast, and actionable.

"Overall, each release feels like the platform is getting better." — Lead Product Designer at Flo

"The one research platform I keep coming back to." — G2 Review

What's Next

This launch represents our biggest transformation, but it's not the end. It's a new beginning.

We're continuing to invest in:

  • AI capabilities that amplify (not replace) researcher judgment
  • Platform integrations that fit your workflow
  • Methodological innovations that maintain rigor while increasing speed
  • Features that make research accessible to everyone

Our goal is simple: make user research so fast and accessible that it becomes impossible not to include users in every decision.

See What We've Built

If you're evaluating research platforms and tired of the same old clunky tools, we'd love to show you the alternative.

Book a demo or start a free trial

The platform that turns "should we?" into "we did."

Welcome to Optimal 3.0.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.