October 7, 2025
3 minutes

The AI Automation Breakthrough: Key Insights from Our Latest Community Event

Last night, Optimal brought together an incredible community of product leaders and innovators for "The Automation Breakthrough: Workflows for the AI Era" at Q-Branch in Austin, Texas. This two-hour in-person event featured expert perspectives on how AI and automation are transforming the way we work, create, and lead.

The evening opened with a lightning talk, "Designing for Interfaces," by Cindy Brummer, Founder of Standard Beagle Studio, followed by a dynamic panel discussion titled "The Automation Breakthrough" with industry leaders including Joe Meersman (Managing Partner, Gyroscope AI), Carmen Broomes (Head of UX, Handshake), Kasey Randall (Product Design Lead, Posh AI), and Prateek Khare (Head of Product, Amazon). We also hosted a fireside chat between our CEO, Alex Burke, and Stu Smith, Head of Design at Atlassian.

Here are the key themes and insights that emerged from these conversations:

Trust & Transparency: The Foundation of AI Adoption

Cindy emphasized that trust and transparency aren't just nice-to-haves in the AI era; they're essential. As AI tools become more integrated into our workflows, building systems that users can understand and rely on becomes paramount. This theme set the tone for the entire event, reminding us that technological advancement must go hand-in-hand with ethical considerations.

Automation Liberates Us from Grunt Work

One of the most resonant themes was how AI fundamentally changes what we spend our time on. As Carmen noted, AI reduces the grunt work and tasks we don't want to do, freeing us to focus on what matters most. This isn't about replacing human workers; it's about eliminating the tedious, repetitive tasks that drain our energy and creativity.

Enabling Creativity and Higher-Quality Decision-Making

When automation handles the mundane, something remarkable happens: we gain space for deeper thinking and creativity. The panelists shared powerful examples of this transformation:

Carmen described how AI and workflows help teams get to insights and execution on a much faster scale, rather than drowning in comments and documentation. Prateek encouraged the audience to use automation to get creative about their work, while Kasey shared how AI and automation have helped him develop different approaches to coaching, mentorship, and problem-solving, ultimately helping him grow as a leader.

The decision-making benefits were particularly striking. Prateek explained how AI and automation have helped him be more thoughtful about decisions and make higher-quality choices, while Kasey echoed that these tools have helped him be more creative and deliberate in his approach.

Democratizing Product Development

Perhaps the most exciting shift discussed was how AI is leveling the playing field across organizations. Carmen emphasized the importance of anyone, regardless of their role, being able to get close to their customers. This democratization means that everyone can get involved in UX, think through user needs, and consider the best experience.

The panel explored how roles are blurring in productive ways. Kasey noted that "we're all becoming product builders" and that product managers are becoming more central to conversations. Prateek predicted that teams are going to get smaller and achieve more with less as these tools become more accessible.

Automation also plays a crucial role in iteration, helping teams incorporate customer feedback more effectively, according to Prateek.

Practical Advice for Navigating the AI Era

The panelists didn't just share lofty visions; they offered concrete guidance for professionals navigating this transformation:

Stay perpetually curious. Prateek warned that no acquired knowledge will stay with you for long, so you need to be ready to learn anything at any time.

Embrace experimentation. "Allow your process to misbehave," Prateek advised, encouraging attendees to break from rigid workflows and explore new approaches.

Overcome fear. Carmen urged the audience not to be afraid of bringing in new tools or worrying that AI will take their jobs. The technology is here to augment, not replace.

Just start. Kasey's advice was refreshingly simple: "Just start and do it again." Whether you're experimenting with AI tools or trying "vibe coding," the key is to begin and iterate.

The energy in the room at Q-Branch reflected a community that's not just adapting to change but actively shaping it. The automation breakthrough isn't just about new tools; it's about reimagining how we work, who gets to participate in product development, and what becomes possible when we free ourselves from repetitive tasks.

As we continue to navigate the AI era, events like this remind us that the most valuable insights come from bringing diverse perspectives together. The conversation doesn't end here; it's just beginning.

Interested in joining future Optimal community events? Stay tuned for upcoming gatherings where we'll continue exploring the intersection of design, product, and emerging technologies.

Author: Optimal Workshop

Related articles

AI Is Only as Good as Its UX: Why User Experience Tops Everything

AI is transforming how businesses approach product development. From AI-powered chatbots and recommendation engines to predictive analytics and generative models, AI-first products are reshaping user interactions with technology, which in turn impacts every phase of the product development lifecycle.

Whether you're skeptical about AI or enthusiastic about its potential, the fundamental truth about product development in an AI-driven future remains unchanged: a product is only as good as its user experience.

No matter how powerful the underlying AI, if users don't trust it, can't understand it, or struggle to use it, the product will fail. Good UX isn't simply an add-on for AI-first products; it's a fundamental requirement.

Why UX Is More Critical Than Ever

Unlike traditional software, where users typically follow structured, planned workflows, AI-first products introduce dynamic, unpredictable experiences. This creates several unique UX challenges:

  • Users struggle to understand AI's decisions – Why did the AI generate this particular response? Can they trust it?
  • AI doesn't always get it right – How does the product handle mistakes, errors, or bias?
  • Users expect AI to "just work" like magic – If interactions feel confusing, people will abandon the product.

AI only succeeds when it's intuitive, accessible, and easy to use: the fundamental components of good user experience. That's why product teams need to embed strong UX research and design into AI development, right from the start.

Key UX Focus Areas for AI-First Products

To Trust Your AI, Users Need to Understand It

AI can feel like a black box: users often don't know how it works or why it's making certain decisions or recommendations. If people don't understand or trust your AI, they simply won't use it. The user experience you build for an AI-first product must be grounded in transparency.

What does a transparent experience look like?

  • Show users why AI makes certain decisions (e.g., "Recommended for you because…")
  • Allow users to adjust AI settings to customize their experience
  • Enable users to provide feedback when AI gets something wrong—and offer ways to correct it

A strong example: Spotify's AI recommendations explain why a song was suggested, helping users understand the reasoning behind each suggestion.
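To make the pattern concrete, here is a minimal sketch of a recommendation that carries its own plain-language explanation. All names are hypothetical and the tag-overlap scoring is a toy stand-in; real recommenders (Spotify's included) are far more sophisticated, but the UX principle, surfacing the "why" alongside the "what," is the same:

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    item: str
    score: float
    reason: str  # surfaced to the user, not buried in logs


def recommend(history: list[str], catalog: dict[str, set[str]]) -> list[Recommendation]:
    """Rank unheard catalog items by tag overlap with the listening history,
    attaching the overlapping tags as a human-readable explanation."""
    listened_tags: set[str] = set()
    for item in history:
        listened_tags |= catalog.get(item, set())

    recs = []
    for item, tags in catalog.items():
        if item in history:
            continue  # don't re-recommend what the user already knows
        overlap = tags & listened_tags
        if overlap:
            recs.append(Recommendation(
                item=item,
                score=len(overlap) / len(tags),
                reason=f"Recommended because you listen to {', '.join(sorted(overlap))}",
            ))
    return sorted(recs, key=lambda r: r.score, reverse=True)
```

The design choice worth noting: the explanation is computed from the same signal that produced the ranking, so it can never drift out of sync with the recommendation itself.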

AI Should Augment Human Expertise, Not Replace It

AI often goes hand-in-hand with full automation, but that approach ignores one of AI's biggest limitations: it struggles to bring human nuance and intuition into its recommendations or answers. While AI products strive to feel seamless and automated, users need clarity about what's happening when AI makes mistakes.

How can you address this? Design for AI-Human Collaboration:

  • Guide users on the best ways to interact with and extract value from your AI
  • Provide the ability to refine results so users feel in control of the end output
  • Offer a hybrid approach: allow users to combine AI-driven automation with manual/human inputs

Consider Google's Gemini AI, which lets users edit generated responses rather than forcing them to accept the AI's output as final; it's a thoughtful approach to human-AI collaboration.
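One lightweight way to build that collaboration into a product is to treat every AI response as a draft that a human must revise or explicitly accept before it ships. The sketch below uses entirely hypothetical names and is one possible shape for the pattern, not a description of any particular product:

```python
from dataclasses import dataclass, field


@dataclass
class AiDraft:
    text: str
    accepted: bool = False
    edits: list[str] = field(default_factory=list)

    def revise(self, new_text: str) -> None:
        # Keep prior versions so users can see (and undo) how the output evolved.
        self.edits.append(self.text)
        self.text = new_text

    def accept(self) -> None:
        self.accepted = True


def finalize(draft: AiDraft) -> str:
    # Nothing ships until a human has explicitly signed off on the output.
    if not draft.accepted:
        raise ValueError("AI output must be reviewed before use")
    return draft.text
```

The key property is that the happy path and the review path are the same path: there is no way to publish AI output without a human decision in the loop.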

Validate and Test AI UX Early and Often

Because AI-first products offer dynamic experiences that can behave unpredictably, traditional usability testing isn't sufficient. Product teams need to test AI interactions across multiple real-world scenarios before launch to ensure their product functions properly.

Run UX Research to Validate AI Models Throughout Development:

  • Implement First Click Testing to verify users understand where to interact with AI
  • Use Tree Testing to refine chatbot flows and decision trees
  • Conduct longitudinal studies to observe how users interact with AI over time
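As a concrete example of the kind of metric the first method yields, here is a small sketch (hypothetical names, not Optimal's implementation) that scores the share of participants whose first click landed on a correct target:

```python
def first_click_success(clicks: list[str], correct_targets: set[str]) -> float:
    """Fraction of participants whose first click hit a correct target.

    `clicks` holds one element id per participant: the first thing they
    clicked when asked to complete the task (e.g. "find the AI assistant").
    """
    if not clicks:
        return 0.0  # no sessions recorded yet
    hits = sum(1 for c in clicks if c in correct_targets)
    return hits / len(clicks)
```

A low score on a task like this is an early, cheap signal that users don't know where to interact with the AI, long before it shows up as churn in production analytics.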

One notable example: A leading tech company used Optimal to test their new AI product with 2,400 global participants, helping them refine navigation and conversion points, ultimately leading to improved engagement and retention.

The Future of AI Products Relies on UX

The bottom line is that AI isn't replacing UX; it's making good UX even more essential. The more sophisticated the product, the more product teams need to invest in regular research, transparency, and usability testing to ensure they're building products people genuinely value and enjoy using.

Want to improve your AI product's UX? Start testing with Optimal today.


Top User Research Platforms 2025

User research software isn't what it used to be. The days of insights being locked away in specialist UX research teams are fading fast, replaced by a world where product managers, designers, and even marketers are running their own usability testing, prototype validation, and user interviews. The best UX research platforms powering this shift have evolved from complex enterprise software into tools that genuinely enable teams to test with users, analyze results, and share insights faster.

This isn't just about better software; it's about a fundamental transformation in how organizations make decisions. Let's explore the top user research tools in 2025, what makes each one worth considering, and how they're changing the research landscape.


What Makes a UX Research Platform All-in-One?


The shift toward all-in-one UX research platforms reflects a deeper need: teams want to move from idea to insight without juggling multiple tools, logins, or data silos. A truly comprehensive research platform combines several key capabilities within a unified workflow.

The best all-in-one platforms integrate study design, participant recruitment, multiple research methods (from usability testing to surveys to interviews to navigation testing to prototype testing), AI-powered analysis, and insight management in one cohesive experience. This isn't just about feature breadth; it's about eliminating the friction that prevents research from influencing decisions. When your entire research workflow lives in one platform, insights move faster from discovery to action.

What separates genuine all-in-one solutions from feature-heavy tools is thoughtful integration. The best platforms ensure that data flows seamlessly between methods, participants can be recruited consistently across study types, and insights build upon each other rather than existing in isolation. This integrated approach enables both quick validation studies and comprehensive strategic research within the same environment.

1. Optimal: Best End-to-End UX Research Platform


Optimal has carved out a unique position in the UX research landscape: it's powerful enough for enterprise teams at Netflix, HSBC, Lego, and Toyota, yet intuitive enough that anyone (product managers, designers, even marketers) can confidently run usability studies. That balance between depth and accessibility is hard to achieve, and it's where Optimal shines.

Unlike fragmented tool stacks, Optimal is a complete User Insights Platform that supports the full research workflow. It covers everything from study design and participant recruitment to usability testing, prototype validation, AI-assisted interviews, and a research repository. You don't need multiple logins or to wonder where your data lives; it's all in one place.

Two recent features push the platform even further:

  • Live Site Testing: Run usability studies on your actual live product, capturing real user behavior in production environments.

  • Interviews: AI-assisted analysis dramatically cuts down time-to-insight from moderated sessions, without losing the nuance that makes qualitative research valuable.



One of Optimal's biggest advantages is its pricing model. There are no per-seat fees, no participant caps, and no limits on the number of users. Pricing is usage-based, so anyone on your team can run a study without needing a separate license or blowing your budget. It's a model built to support research at scale, not gate it behind permissioning.

Reviews on G2 reflect this balance between power and ease. Users consistently highlight Optimal's intuitive interface, responsive customer support, and fast turnaround from study to insight. Many reviewers also call out its AI-powered features, which help teams synthesize findings and communicate insights more effectively. These reviews reinforce Optimal's position as an all-in-one platform that supports research from everyday usability checks to strategic deep dives.

The bottom line? Optimal isn't just a suite of user research tools. It's a system that enables anyone in your organization to participate in user-centered decision-making, while giving researchers the advanced features they need to go deeper.

2. UserTesting: Remote Usability Testing


UserTesting built its reputation on one thing: remote usability testing with real-time video feedback. Watch people interact with your product, hear them think aloud, see where they get confused. It's immediate and visceral in a way that heat maps and analytics can't match.

The platform excels at both moderated and unmoderated usability testing, with strong user panel access that enables quick turnaround. Large teams particularly appreciate how fast they can gather sentiment data across UX research studies, marketing campaigns, and product launches. If you need authentic user reactions captured on video, UserTesting delivers consistently.

That said, reviews on G2 and Capterra note that while video feedback is excellent, teams often need to supplement UserTesting with additional tools for deeper analysis and insight management. The platform's strength is capturing reactions, though some users mention the analysis capabilities and data export features could be more robust for teams running comprehensive research programs.

A significant consideration: UserTesting operates on a high-cost model with per-user annual fees plus additional session-based charges. This pricing structure can create unpredictable costs that escalate as your research volume grows; teams often report budget surprises when conducting longer studies or more frequent research. For organizations scaling their research practice, transparent and predictable pricing becomes increasingly important.

3. Maze: Rapid Prototype Testing


Maze understands that speed matters. Design teams working in agile environments don't have weeks to wait for findings; they need answers now. The platform leans into this reality with rapid prototype testing and continuous discovery research, making it particularly appealing to individual designers and small product teams.

Its Figma integration is convenient for quick prototype tests. However, the platform's focus on speed involves trade-offs in flexibility: users note rigid question structures and limited test customization options compared to more comprehensive platforms. For straightforward usability tests, this works fine. For complex research requiring custom flows or advanced interactions, the constraints become more apparent.

User feedback suggests Maze excels at directional insights and quick design validation. However, researchers looking for deep qualitative analysis or longitudinal studies may find the platform limited. As one G2 reviewer noted, "perfect for quick design validation, less so for strategic research." The reporting tends toward surface-level metrics rather than the layered, strategic insights enterprise teams often need for major product decisions.

For teams scaling their research practice, some considerations emerge. Lower-tier plans limit the number of studies you can run per month, and full access to card sorting, tree testing, and advanced prototype testing requires higher-tier plans. For teams running continuous research or multiple studies weekly, these study caps and feature gates can become restrictive. Users also report prototype stability issues, particularly on mobile devices and with complex design systems, which can disrupt testing sessions. Originally built for individual designers, Maze works well for smaller teams but may lack the enterprise features, security protocols, and dedicated support that large organizations require for comprehensive research programs.

4. Dovetail: Research Centralization Hub

Dovetail has positioned itself as the research repository and analysis platform that helps teams make sense of their growing body of insights. Rather than conducting tests directly, Dovetail shines as a centralization hub where research from various sources can be tagged, analyzed, and shared across the organization. Its collaboration features ensure that insights don't get buried in individual files but become organizational knowledge.

Many teams use Dovetail alongside testing platforms like Optimal, creating a powerful combination where studies are conducted in dedicated research tools and then synthesized in Dovetail's collaborative environment. For organizations struggling with insight fragmentation or research accessibility, Dovetail offers a compelling solution to ensure research actually influences decisions.

5. Lookback: Moderated User Interviews


Lookback specializes in moderated user interviews and remote testing, offering a clean, focused interface that stays out of the way of genuine human conversation. The platform is designed specifically for qualitative UX work, where the goal is deep understanding rather than statistical significance. Its streamlined approach to session recording and collaboration makes it easy for teams to conduct and share interview findings.

For researchers who prioritize depth over breadth and want a tool that facilitates genuine conversation without overwhelming complexity, Lookback delivers a refined experience. It's particularly popular among UX researchers who spend significant time in one-on-one sessions and value tools that respect the craft of qualitative inquiry.

6. Lyssna: Quick, Lightweight Design Feedback


Lyssna (formerly UsabilityHub) positions itself as a straightforward, budget-friendly option for teams needing quick feedback on designs. The platform emphasizes simplicity and fast turnaround, making it accessible for smaller teams or those just starting their research practice.

The interface is deliberately simple, which reduces the learning curve for new users. For basic preference tests, first-click tests, and simple prototype validation, Lyssna's streamlined approach gets you answers quickly without overwhelming complexity.

However, this simplicity involves significant trade-offs. The platform operates primarily as a self-service testing tool rather than a comprehensive research platform. Teams report that Lyssna lacks AI-powered analysis; you're working with raw data and manual interpretation rather than automated insight generation. The participant panel is notably smaller (around 530,000 participants) with limited geographic reach compared to enterprise platforms, and users mention quality control issues where participants don't consistently match requested criteria.

For organizations scaling beyond basic validation, the limitations become more apparent. There's no managed recruitment service for complex targeting needs, no enterprise security certifications, and limited support infrastructure. The reporting stays at a basic metrics level without the layered analysis or strategic insights that inform major product decisions. Lyssna works well for simple, low-stakes testing on limited budgets, but teams with strategic research needs, global requirements, or quality-critical studies typically require more robust capabilities.

Emerging Trends in User Research for 2025


The UX and user research industry is shifting in important ways:

Live environment usability testing is growing. Insights from real users on live sites are proving more reliable than artificial prototype studies. Optimal is leading this shift with dedicated Live Site Testing capabilities that capture authentic behavior where it matters most.

AI-powered research tools are finally delivering on their promise, speeding up analysis while preserving depth. The best implementations, like Optimal's Interviews, handle time-consuming synthesis without losing the nuanced context that makes qualitative research valuable.

Research democratization means UX research is no longer locked in specialist teams. Product managers, designers, and marketers are now empowered to run studies. This doesn't replace research expertise; it amplifies it by letting specialists focus on complex strategic questions while teams self-serve for straightforward validation.

Inclusive, global recruitment is now non-negotiable. Platforms that support accessibility testing and global participant diversity are gaining serious traction. Understanding users across geographies, abilities, and contexts has moved from nice-to-have to essential for building products that truly serve everyone.

How to Choose the Right Platform for Your Team


Forget feature checklists. Instead, ask:

Do you need qualitative research, quantitative research, or both? Some platforms excel at one, while others like Optimal provide robust capabilities for both within a single workflow.

Will non-researchers be running studies (making ease of use critical)? If this is your goal, prioritize intuitive interfaces that don't require extensive training.

Do you need global user panels, compliance features, or AI-powered analysis? Consider whether your industry requires specific certifications or if AI-assisted synthesis would meaningfully accelerate your workflow.

How important is integration with Figma, Slack, Jira, or Notion? The best platform fits naturally into your existing stack, reducing friction and increasing adoption across teams.


Evaluating All-in-One Research Capabilities

When assessing comprehensive research platforms, look beyond the feature list to understand how well different capabilities work together. The best all-in-one solutions excel at data continuity: participants recruited for one study can seamlessly participate in follow-up research, and insights from usability tests can inform survey design or interview discussion guides.

Consider your team's research maturity and growth trajectory. Platforms like Optimal that combine ease of use with advanced capabilities allow teams to start simple and scale sophisticated research methods as their needs evolve, all within the same environment. This approach prevents the costly platform migrations that often occur when teams outgrow point solutions.

Pay particular attention to analysis and reporting integration. All-in-one platforms should synthesize findings across research methods, not just collect them. The ability to compare prototype testing results with interview insights, or track user sentiment across multiple touchpoints, transforms isolated data points into strategic intelligence.

Most importantly, the best platform is the one your team will actually use. Trial multiple options, involve stakeholders from different disciplines, and evaluate not just features but how well each tool fits your team's natural workflow.

The Bottom Line: Powering Better Decisions Through Research


Each of these platforms brings strengths. But Optimal stands out for a rare combination: end-to-end research capabilities, AI-powered insights, and usability testing at scale in an all-in-one interface designed for all teams, not just specialists.

With the additions of Live Site Testing capturing authentic user behavior in production environments, and Interviews delivering rapid qualitative synthesis, Optimal helps teams make faster, better product decisions. The platform removes the friction that typically prevents research from influencing decisions, whether you're running quick usability tests or comprehensive mixed-methods studies.

The right UX research platform doesn't just collect data. It ensures user insights shape every product decision your team makes, building experiences that genuinely serve the people using them. That's the transformation happening right now: research is becoming central to how we build, not an afterthought.


How AI is Augmenting, Not Replacing, UX Researchers

Despite AI being the buzzword in UX right now, there are still lots of concerns about how it’s going to impact research roles. One of the biggest concerns we hear is: is AI just going to replace UX researchers altogether?

The answer, in our opinion, is no. The longer, more interesting answer is that AI is fundamentally transforming what it means to be a UX researcher, and in ways that make the role more strategic, more impactful, and more interesting than ever before.

What AI Actually Does for Research 

A 2024 survey by the UX Research Collective found that 68% of UX researchers are concerned about AI's impact on their roles. The fear makes sense; we've all seen how automation has transformed other industries. But what's actually happening is that rather than replacing researchers, AI is eliminating the parts of research that researchers hate most.

According to Gartner's 2024 Market Guide for User Research, AI tools can reduce analysis time by 60-70%, but not by replacing human insight. Instead, they handle:

  • Pattern Recognition at Scale: AI can process hundreds of user interviews and identify recurring themes in hours; for a human researcher, that same work would take weeks. But those patterns still need human validation, because AI doesn't understand why they matter. That's where researchers will continue to add value and, we would argue, become more important than ever.
  • Synthesis Acceleration: According to research by the Nielsen Norman Group, AI can generate first-draft insight summaries 10x faster than humans. But these summaries still need researcher oversight to ensure context, accuracy, and actionable insights aren't lost.
  • Multi-language Analysis: AI can analyze feedback in 50+ languages simultaneously, democratizing global research. But cultural context and nuanced interpretation still require human understanding.
  • Always-On Insights: Traditional research is limited by human availability. Tools like AI interviewers can run 24/7 while your team sleeps, giving you continuous, high-quality user insights.

AI is Elevating the Role of Researchers 

We think that what AI is actually doing is making UX researchers more important, not less. By automating the less sophisticated aspects of research, AI is pushing researchers toward the strategic work that only humans can do.

From Operators to Strategists: McKinsey's 2024 research shows that teams using AI research tools spend 45% more time on strategic planning and only 20% on execution, compared to 30% strategy and 60% execution for traditional teams.

From Reporters to Storytellers: With AI handling data processing, researchers can focus on crafting compelling narratives.

From Analysts to Advisors: When freed from manual analysis, researchers become embedded strategic partners. 

Human + AI Collaboration 

The most effective research teams aren't choosing between human or AI; they're creating collaborative workflows that use AI to augment researchers' roles, not replace them:

  • AI-Powered Data Collection: Automated transcription, sentiment analysis, and preliminary coding happen in real-time during user sessions.
  • Human-Led Interpretation: Researchers review AI-generated insights, add context, challenge assumptions, and identify what AI might have missed.
  • Collaborative Synthesis: AI suggests patterns and themes; researchers validate, refine, and connect to business context.
  • Human Storytelling: Researchers craft narratives, implications, and recommendations that AI cannot generate.
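The propose-then-validate loop above can be sketched in a few lines. Here a naive keyword matcher stands in for the AI's first-pass coding of an interview quote, and a researcher confirms or extends the suggested themes; all names are hypothetical, and a real system would use a language model rather than keywords:

```python
# Hypothetical keyword -> theme map standing in for an ML coding model.
KEYWORD_THEMES = {
    "slow": "performance",
    "crash": "reliability",
    "confusing": "usability",
}


def ai_suggest_themes(quote: str) -> set[str]:
    """First pass: the 'AI' proposes codes for a raw interview quote."""
    words = quote.lower().split()
    return {theme for kw, theme in KEYWORD_THEMES.items() if kw in words}


def human_review(suggested: set[str], confirmed: set[str], added: set[str]) -> set[str]:
    """Second pass: a researcher keeps only the codes they confirm and adds
    anything the AI missed. The AI proposes; the human decides."""
    return (suggested & confirmed) | added
```

Note that the final code set is never the raw AI output: unconfirmed suggestions are dropped, and researcher-added themes flow in, which is exactly the division of labor the bullets describe.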

Is it likely that more and more research tasks will become automated with AI? Absolutely. Basic transcription, preliminary coding, and simple pattern recognition are already AI's bread and butter. But research has never been about these tasks; it's about understanding users and driving better decisions, and that work should always be led by humans.

The researchers thriving in 2025 and beyond aren't fighting AI; they're embracing it. They're using AI to handle the tedious 40% of their job so they can focus on the strategic 60% that creates real business value. You have a choice: adopt AI as a tool to elevate your role, or view it as a threat and get left behind. Our customers tell us that the researchers choosing elevation are finding their roles more strategic, more impactful, and more essential to product success than ever before.

AI isn't replacing UX researchers. It's freeing them to do what they've always done best: understand humans and help build better products. And in a world drowning in data but starving for insight, that human expertise has never been more valuable.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.