Optimal Blog
Articles and Podcasts on Customer Service, AI and Automation, Product, and more

At Optimal, we know the reality of user research: you've just wrapped up a fantastic interview session, your head is buzzing with insights, and then... you're staring at hours of video footage that somehow needs to become actionable recommendations for your team.
User interviews and usability sessions are treasure troves of insight, but reviewing hours of raw footage is time-consuming and tedious, and it's easy to overlook important details. Too often, valuable user stories never make it past the recording stage.
That's why we’re excited to announce the launch of early access for Interviews, a brand-new tool that saves you time with AI and automation, turns real user moments into actionable recommendations, and provides the evidence you need to shape decisions, bring stakeholders on board, and inspire action.
Interviews, Reimagined
What once took hours of video review now takes minutes. With Interviews, you get:
- Instant clarity: Upload your interviews and let AI automatically surface themes, pain points, opportunities, and other key insights.
- Deeper exploration: Ask follow-up questions, or anything else, with AI chat. Every insight comes with supporting video evidence, so you can back up recommendations with real user feedback.
- Automatic highlight reels: Generate clips and compilations that spotlight the takeaways that matter.
- Real user voices: Turn insight into impact with user feedback clips and videos. Share insights and download clips to drive product and stakeholder decisions.

Groundbreaking AI at Your Service
This tool is powered by AI designed for researchers, product owners, and designers. This isn’t just transcription or summarization, it’s intelligence tailored to surface the insights that matter most. It’s like having a personal AI research assistant, accelerating analysis and automating your workflow without compromising quality. No more endless footage scrolling.
The AI used for Interviews, like all AI across Optimal, is backed by Amazon Bedrock on AWS, ensuring that your AI insights are supported with industry-leading protection and compliance.

What’s Next: The Future of Moderated Interviews in Optimal
This new tool is just the beginning. Soon, you’ll be able to manage the entire moderated interview process inside Optimal, from recruitment to scheduling to analysis and sharing.
Here’s what’s coming:
- Recruit users through Optimal's managed recruitment services.
- View your scheduled sessions directly within Optimal. Link up with your own calendar.
- Connect seamlessly with Zoom, Google Meet, or Teams.
Imagine running your full end-to-end interview workflow, all in one platform. That’s where we’re heading, and Interviews is our first step.
Ready to Explore?
Interviews is available now on our latest Optimal plans with study limits. Start transforming hours of footage into minutes of clarity and bring your users' voices to the center of every decision. We can't wait to see what you uncover.
Want to learn more and see it in action? Join us for our upcoming webinar on Oct 21st at 12 PM PST.

5 Alternatives to Askable for User Research and Participant Recruitment
When evaluating tools for user testing and participant recruitment, Askable often appears on the shortlist, especially for teams based in Australia and New Zealand. But in 2025, many researchers are finding Askable’s limitations increasingly difficult to work around: restricted study volume, inconsistent participant quality, and new pricing that limits flexibility.
If you’re exploring Askable alternatives that offer more scalability, higher data quality, and global reach, here are five strong options.
1. Optimal: Best Overall Alternative for Scalable, AI-Powered Research
Optimal is a comprehensive user insights platform supporting the full research lifecycle, from participant recruitment to analysis and reporting. Unlike Askable, which has historically focused on recruitment, Optimal unifies multiple research methods in one platform, including prototype testing, card sorting, tree testing, and AI-assisted interviews.
Why teams switch from Askable to Optimal
1. You can only run one study at a time in Askable
Optimal removes that bottleneck, letting you launch multiple concurrent studies across teams and research methods.
2. Askable’s new pricing limits flexibility
Optimal offers scalable plans with unlimited seats, so teams only pay for what they need.
3. Askable’s participant quality has dropped
Optimal provides access to over 100 million verified participants worldwide, with strong fraud-prevention and screening systems that eliminate low-effort or AI-assisted responses.
Additional advantages
- End-to-end research tools in one workspace
- AI-powered insight generation that tags and summarizes automatically
- Enterprise-grade reliability with decade-long market trust
- Dedicated onboarding and SLA-backed support
Best for: Teams seeking an enterprise-ready, scalable research platform that eliminates the operational constraints of Askable.
2. UserTesting: Best for Video-Based Moderated Studies
UserTesting remains one of the most established platforms for moderated and unmoderated usability testing. It excels at gathering video feedback from participants in real time.
Pros:
- Large participant pool with strong demographic filters
- Supports moderated sessions and live interviews
- Integrations with design tools like Figma and Miro
Cons:
- Higher cost at enterprise scale
- Less flexible for survey-driven or unmoderated studies compared with Optimal
- The UI has become increasingly complex and buggy as UserTesting has expanded its platform through acquisitions such as UserZoom and Validately.
Best for: Companies prioritizing live, moderated usability sessions.
3. Maze: Best for Product Teams Using Figma Prototypes
Maze offers seamless Figma integration and focuses on automating prototype-testing workflows for product and design teams.
Pros:
- Excellent Figma and Adobe XD integration
- Automated reporting
- Good fit for early-stage design validation
Cons:
- Limited depth for qualitative research
- Smaller participant pool
Best for: Design-first teams validating prototypes and navigation flows.
4. Lyssna (formerly UsabilityHub): Best for Fast Design Feedback
Lyssna focuses on quick-turn, unmoderated studies such as preference tests, first-click tests, and five-second tests.
Pros:
- Fast turnaround
- Simple, intuitive interface
- Affordable for smaller teams
Cons:
- Limited participant targeting options
- Narrower study types than Askable
Best for: Designers and researchers running lightweight validation tests.
5. Dovetail: Best for Research Repository and Analysis
Dovetail is primarily a qualitative data repository rather than a testing platform. It’s useful for centralizing and analyzing insights from research studies conducted elsewhere.
Pros:
- Strong tagging and note-taking features
- Centralized research hub for large teams
Cons:
- Doesn’t recruit participants or run studies
- Requires manual uploads from other tools like Askable or UserTesting
Best for: Research teams centralizing insights from multiple sources.
Final Thoughts on Alternatives to Askable
If your goal is simply to recruit local participants, Askable can still meet basic needs. But if you’re looking to scale research in your organization, integrate testing and analysis, and automate insights, Optimal stands out as the best long-term investment. Its blend of global reach, AI-powered analysis, and proven enterprise support makes it the natural next step for growing research teams.

A beginner’s guide to qualitative and quantitative research
In the field of user research, every method is either qualitative, quantitative – or both. Understandably, there’s some confusion around these 2 approaches and where the different methods are applicable. This article provides a handy breakdown of the different terms and where and why you’d want to use qualitative or quantitative research methods.
Qualitative research
Let’s start with qualitative research, an approach that’s all about the ‘why’. It’s exploratory and not about numbers, instead focusing on reasons, motivations, behaviors and opinions – it’s best at helping you gain insight and delve deep into a particular problem. This type of data typically comes from conversations, interviews and responses to open questions. The real value of qualitative research is in its ability to give you a human perspective on a research question. Unlike quantitative research, this approach will help you understand some of the more intangible factors – things like behaviors, habits and past experiences – whose effects may not always be readily apparent when you’re conducting quantitative research. A qualitative research question could be investigating why people switch between different banks, for example.
When to use qualitative research
Qualitative research is best suited to identifying how people think about problems, how they interact with products and services, and what encourages them to behave a certain way. For example, you could run a study to better understand how people feel about a product they use, or why people have trouble filling out your sign up form. Qualitative research can be very exploratory (e.g., user interviews) as well as more closely tied to evaluating designs (e.g., usability testing). Good qualitative research questions to ask include:
- Why do customers never add items to their wishlist on our website?
- How do new customers find out about our services?
- What are the main reasons people don’t sign up for our newsletter?
How to gather qualitative data
There’s no shortage of methods to gather qualitative data, which commonly takes the form of interview transcripts, notes and audio and video recordings. Here are some of the most widely used qualitative research methods:
- Usability test – Test a product with people by observing them as they attempt to complete various tasks.
- User interview – Sit down with a user to learn more about their background, motivations and pain points.
- Contextual inquiry – Learn more about your users in their own environment by asking them questions before moving on to an observation activity.
- Focus group – Gather 6 to 10 people for a forum-like session to get feedback on a product.
How many participants will you need?
You don’t often need large numbers of participants for qualitative research, with the average range usually somewhere between 5 and 10 people. You’ll likely require more if you're focusing your work on specific personas, in which case you may need to study 5-10 people for each persona. While this may seem quite low, consider the research methods you’ll be using. Carrying out large numbers of in-person research sessions requires a significant time investment in terms of planning, actually hosting the sessions and analyzing your findings.
Quantitative research
On the other side of the coin you’ve got quantitative research. This type of research is focused on numbers and measurement, gathering data and transforming this information into statistics. Given that quantitative research is all about generating data that can be expressed in numbers, there are multiple ways you can make use of it. Statistical analysis means you can pull useful facts from your quantitative data, for example trends, demographic information and differences between groups. It’s an excellent way to get a snapshot of your users. A quantitative research question could involve investigating the number of people that upgrade from a free plan to a paid plan.
When to use quantitative research
Quantitative research is ideal for understanding behaviors and usage. In many cases it's a lot less resource-heavy than qualitative research, because you don't need to pay incentives or spend time scheduling sessions. With that in mind, you might do some quantitative research early on to better understand the problem space, for example by running a survey with your users. Here are some examples of good quantitative research questions to ask:
- How many customers view our pricing page before making a purchase decision?
- How many customers search versus navigate to find products on our website?
- How often do visitors on our website change their password?
How to gather quantitative data
Commonly, quantitative data takes the form of numbers and statistics.
Here are some of the most popular quantitative research methods:
- Card sorts – Find out how people categorize and sort information on your website.
- First-click tests – See where people click first when tasked with completing an action.
- A/B tests – Compare 2 versions of a design in order to work out which is more effective (a quick significance-test sketch follows this list).
- Clickstream analysis – Analyze aggregate data about website visits.
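To make "which is more effective" concrete, here's a minimal sketch of the two-proportion z-test commonly used to judge an A/B test. It uses only Python's standard library; the function name and traffic figures are our own invented examples, purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical traffic: version A converts 180/2000, version B converts 220/2000
p = two_proportion_z_test(conv_a=180, n_a=2000, conv_b=220, n_b=2000)
print(f"p-value: {p:.3f}")  # roughly 0.035; below 0.05, so B likely really is better
```

A p-value this small suggests the difference isn't just noise, which is exactly why A/B tests need the participant volumes discussed next.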
How many participants will you need?
While you only need a small number of participants for qualitative research, you need significantly more for quantitative research. Quantitative research is all about quantity: with more participants, you can generate more useful and reliable data to analyze, and in turn you’ll have a clearer understanding of your research problem. This means quantitative research can involve gathering data from thousands of participants through an A/B test, or from around 30 through a card sort. Read more about the right number of participants to gather for your research.
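For a sense of where those numbers come from, here's a small sketch of the standard sample-size formula for estimating a proportion, n = z^2 * p(1-p) / e^2. The helper below and its figures are illustrative assumptions, not a prescription:

```python
from math import ceil

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Participants needed to estimate a proportion within +/- margin_of_error.

    Uses n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the worst case
    (maximum variance) and z = 1.96 for 95% confidence.
    """
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.05))  # 385 participants for a +/-5% margin at 95% confidence
print(sample_size(0.10))  # 97 for +/-10%, which is why small studies stay directional
```

Tighter margins drive the participant count up quickly, which is why large-scale methods like A/B tests lean on thousands of users while a card sort can get directional results from far fewer.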
Mixed methods research
While there are certainly times when you’d only want to focus on qualitative or quantitative data to get answers, there’s significant value in utilizing both methods on the same research project. Interestingly, there are a number of research methods that will generate both quantitative and qualitative data. Take surveys as an example: a survey could include questions that require written answers from participants as well as questions that require participants to select from multiple choices.
Looking back at the earlier example of how people move from a free plan to a paid plan, applying both research approaches to the question will yield a more robust or holistic answer. You’ll know why people upgrade to the paid plan in addition to how many. You can read more about mixed methods research in this article.
Where to from here?
Now that you know the difference between qualitative and quantitative research, the best way to build confidence is to start testing. Hands-on experience is the fastest path to deeper insight. At Optimal, we make it easy to run your first study, no matter your role or research experience.
- Explore our 101 guides to user research
- Start a free trial and discover insights that drive real business impact
- How to encourage people to participate in your study – Finding willing participants can seem like one of the hardest parts of conducting research. It’s actually not that difficult.

Top User Research Platforms 2025
User research software isn’t what it used to be. The days of insights being locked away in specialist UX research teams are fading fast, replaced by a world where product managers, designers, and even marketers are running their own usability testing, prototype validation, and user interviews. The best UX research platforms powering this shift have evolved from complex enterprise software into tools that genuinely enable teams to test with users, analyze results, and share insights faster.
This isn’t just about better software, it’s about a fundamental transformation in how organizations make decisions. Let’s explore the top user research tools in 2025, what makes each one worth considering, and how they’re changing the research landscape.
1. Optimal: Best End-to-End UX Research Platform
Optimal has carved out a unique position in the UX research landscape: it’s powerful enough for enterprise teams at Netflix, HSBC, Lego, and Toyota, yet intuitive enough that anyone (product managers, designers, even marketers) can confidently run usability studies. That balance between depth and accessibility is hard to achieve, and it’s where Optimal shines.
Unlike fragmented tool stacks, Optimal is a complete User Insights Platform that supports the full research workflow. It covers everything from study design and participant recruitment to usability testing, prototype validation, AI-assisted interviews, and a research repository. You don’t need multiple logins or have to wonder where your data lives; it’s all in one place.
Two recent features push the platform even further:
- Live Site Testing: Run usability studies on your actual live product, capturing real user behavior in production environments.
- Interviews: AI-assisted analysis dramatically cuts down time-to-insight from moderated sessions, without losing the nuance that makes qualitative research valuable.
One of Optimal’s biggest advantages is its pricing model. There are no per-seat fees, no participant caps, and no limits on the number of users. Pricing is usage-based, so anyone on your team can run a study without needing a separate license or blowing your budget. It’s a model built to support research at scale, not gate it behind permissioning.
Reviews on G2 reflect this balance between power and ease. Users consistently highlight Optimal’s intuitive interface, responsive customer support, and fast turnaround from study to insight. Many reviewers also call out its AI-powered features, which help teams synthesize findings and communicate insights more effectively. These reviews reinforce Optimal’s position as an all-in-one platform that supports research from everyday usability checks to strategic deep dives.
The bottom line? Optimal isn’t just a suite of user research tools. It’s a system that enables anyone in your organization to participate in user-centered decision-making, while giving researchers the advanced features they need to go deeper.
2. UserTesting: Remote Usability Testing
UserTesting built its reputation on one thing: remote usability testing with real-time video feedback. Watch people interact with your product, hear them think aloud, see where they get confused. It's immediate and visceral in a way that heat maps and analytics can't match.
The platform excels at both moderated and unmoderated usability testing, with strong user panel access that enables quick turnaround. Large teams particularly appreciate how fast they can gather sentiment data across UX research studies, marketing campaigns, and product launches. If you need authentic user reactions captured on video, UserTesting delivers consistently.
That said, reviews on G2 and Capterra note that while video feedback is excellent, teams often need to supplement UserTesting with additional tools for deeper analysis and insight management. The platform's strength is capturing reactions, though some users mention the analysis capabilities and data export features could be more robust for teams running comprehensive research programs.
A significant consideration: UserTesting operates on a high-cost model with per-user annual fees plus additional session-based charges. This pricing structure can create unpredictable costs that escalate as your research volume grows; teams often report budget surprises when conducting longer studies or more frequent research. For organizations scaling their research practice, transparent and predictable pricing becomes increasingly important.
3. Maze: Rapid Prototype Testing
Maze understands that speed matters. Design teams working in agile environments don't have weeks to wait for findings, they need answers now. The platform leans into this reality with rapid prototype testing and continuous discovery research, making it particularly appealing to individual designers and small product teams.
Its Figma integration is convenient for quick prototype tests. However, the platform's focus on speed involves trade-offs in flexibility as users note rigid question structures and limited test customization options compared to more comprehensive platforms. For straightforward usability tests, this works fine. For complex research requiring custom flows or advanced interactions, the constraints become more apparent.
User feedback suggests Maze excels at directional insights and quick design validation. However, researchers looking for deep qualitative analysis or longitudinal studies may find the platform limited. As one G2 reviewer noted, "perfect for quick design validation, less so for strategic research." The reporting tends toward surface-level metrics rather than the layered, strategic insights enterprise teams often need for major product decisions.
For teams scaling their research practice, some considerations emerge. Lower-tier plans limit the number of studies you can run per month, and full access to card sorting, tree testing, and advanced prototype testing requires higher-tier plans. For teams running continuous research or multiple studies weekly, these study caps and feature gates can become restrictive. Users also report prototype stability issues, particularly on mobile devices and with complex design systems, which can disrupt testing sessions. Originally built for individual designers, Maze works well for smaller teams but may lack the enterprise features, security protocols, and dedicated support that large organizations require for comprehensive research programs.
4. Dovetail: Research Centralization Hub
Dovetail has positioned itself as the research repository and analysis platform that helps teams make sense of their growing body of insights. Rather than conducting tests directly, Dovetail shines as a centralization hub where research from various sources can be tagged, analyzed, and shared across the organization. Its collaboration features ensure that insights don't get buried in individual files but become organizational knowledge.
Many teams use Dovetail alongside testing platforms like Optimal, creating a powerful combination where studies are conducted in dedicated research tools and then synthesized in Dovetail's collaborative environment. For organizations struggling with insight fragmentation or research accessibility, Dovetail offers a compelling solution to ensure research actually influences decisions.
5. Lookback: Moderated User Interviews
Lookback specializes in moderated user interviews and remote testing, offering a clean, focused interface that stays out of the way of genuine human conversation. The platform is designed specifically for qualitative UX work, where the goal is deep understanding rather than statistical significance. Its streamlined approach to session recording and collaboration makes it easy for teams to conduct and share interview findings.
For researchers who prioritize depth over breadth and want a tool that facilitates genuine conversation without overwhelming complexity, Lookback delivers a refined experience. It's particularly popular among UX researchers who spend significant time in one-on-one sessions and value tools that respect the craft of qualitative inquiry.
6. Lyssna: Quick and Lite Design Feedback
Lyssna (formerly UsabilityHub) positions itself as a straightforward, budget-friendly option for teams needing quick feedback on designs. The platform emphasizes simplicity and fast turnaround, making it accessible for smaller teams or those just starting their research practice.
The interface is deliberately simple, which reduces the learning curve for new users. For basic preference tests, first-click tests, and simple prototype validation, Lyssna's streamlined approach gets you answers quickly without overwhelming complexity.
However, this simplicity involves significant trade-offs. The platform operates primarily as a self-service testing tool rather than a comprehensive research platform. Teams report that Lyssna lacks AI-powered analysis: you're working with raw data and manual interpretation rather than automated insight generation. The participant panel is notably smaller (around 530,000 participants) with limited geographic reach compared to enterprise platforms, and users mention quality control issues where participants don't consistently match requested criteria.
For organizations scaling beyond basic validation, the limitations become more apparent. There's no managed recruitment service for complex targeting needs, no enterprise security certifications, and limited support infrastructure. The reporting stays at a basic metrics level without the layered analysis or strategic insights that inform major product decisions. Lyssna works well for simple, low-stakes testing on limited budgets, but teams with strategic research needs, global requirements, or quality-critical studies typically require more robust capabilities.
Emerging Trends in User Research for 2025
The UX and user research industry is shifting in important ways:
Live environment usability testing is growing. Insights from real users on live sites are proving more reliable than artificial prototype studies. Optimal is leading this shift with dedicated Live Site Testing capabilities that capture authentic behavior where it matters most.
AI-powered research tools are finally delivering on their promise, speeding up analysis while preserving depth. The best implementations, like Optimal's Interviews, handle time-consuming synthesis without losing the nuanced context that makes qualitative research valuable.
Research democratization means UX research is no longer locked in specialist teams. Product managers, designers, and marketers are now empowered to run studies. This doesn't replace research expertise; it amplifies it by letting specialists focus on complex strategic questions while teams self-serve for straightforward validation.
Inclusive, global recruitment is now non-negotiable. Platforms that support accessibility testing and global participant diversity are gaining serious traction. Understanding users across geographies, abilities, and contexts has moved from nice-to-have to essential for building products that truly serve everyone.
How to Choose the Right Platform for Your Team
Forget feature checklists. Instead, ask:
Do you need qualitative vs. quantitative UX research? Some platforms excel at one, while others like Optimal provide robust capabilities for both within a single workflow.
Will non-researchers be running studies (making ease of use critical)? If this is your goal, prioritize intuitive interfaces that don't require extensive training.
Do you need global user panels, compliance features, or AI-powered analysis? Consider whether your industry requires specific certifications or if AI-assisted synthesis would meaningfully accelerate your workflow.
How important is integration with Figma, Slack, Jira, or Notion? The best platform fits naturally into your existing stack, reducing friction and increasing adoption across teams.
Most importantly, the best platform is the one your team will actually use. Trial multiple options, involve stakeholders from different disciplines, and evaluate not just features but how well each tool fits your team's natural workflow.
The Bottom Line: Powering Better Decisions Through Research
Each of these platforms brings strengths. But Optimal stands out for a rare combination: end-to-end research capabilities, AI-powered insights, and usability testing at scale in an all-in-one interface designed for all teams, not just specialists.
With the additions of Live Site Testing capturing authentic user behavior in production environments and Interviews delivering rapid qualitative synthesis, Optimal helps teams make faster, better product decisions. The platform removes the friction that typically prevents research from influencing decisions, whether you're running quick usability tests or comprehensive mixed-methods studies.
The right UX research platform doesn't just collect data. It ensures user insights shape every product decision your team makes, building experiences that genuinely serve the people using them. That's the transformation happening at the moment: research is becoming central to how we build, not an afterthought.

The Insight to Roadmap Gap
Why Your Best Insights Never Make It Into Products
Does this sound familiar? Your research teams spend weeks uncovering user insights. Your product teams spend months building features users don't want. Between these two realities lies one of the most expensive problems in product development.
According to a 2024 Forrester study, 73% of product decisions are made without any customer insight, despite 89% of companies investing in user research. This is not because of a lack of research, but instead because of a broken translation process between discovery and delivery.
This gap isn't just about communication, it's structural. Researchers speak in themes, patterns, and user needs. Product managers speak in features, priorities, and business outcomes. Designers speak in experiences and interfaces. Each discipline has its own language, timelines, and success metrics.
The biggest challenge isn't conducting research, it's making sure that research actually influences what gets built.
Why Good Research Dies in Translation:
- Research operates in 2-4 week cycles. Product decisions happen in real-time. By the time findings are synthesized and presented, the moment for influence has passed.
- A 40-slide research report is nobody's idea of actionable. According to Nielsen Norman Group research, product managers spend an average of 12 minutes reviewing research findings, yet the average research report takes 2 hours to fully digest.
- Individual insights lack context. Was this problem mentioned by 1 user or 20? Is it a dealbreaker or a minor annoyance? Without this context, teams can't prioritize effectively.
The most successful product teams don't just conduct research, they create processes and systems that bridge the gap between research and product, including doing more continuous discovery and connecting research insights to actual product updates.
- Teams doing continuous discovery make 3x more research-informed decisions than those doing quarterly research sprints. This becomes more achievable when the entire product trio (PM, designer, researcher) is involved in ongoing discovery.
- Product and research teams need to work together to connect research insights directly to potential features: mapping each insight to product opportunities, which map to experiments, which feed directly into the roadmap.
Recent research from Stanford's Human-Centered AI Institute revealed that AI-driven interfaces created 2.6 times more usability issues for older adults and 3.2 times more issues for users with disabilities compared to general populations, a gap that often goes undetected without specific testing for these groups.
The Optimal Approach: Design with Evidence, Not Assumptions
The future of product development isn't just about doing more continuous research, it's about making research integral to how decisions happen:
- Start with Questions, Not Studies. Before launching research, collaborate with product teams to identify specific decisions that need informing. What will change based on what you learn?
- Embed Researchers in Roadmap Planning. Research findings should be part of sprint planning, roadmap reviews, and OKR setting.
- Measure Research Impact. Track not just what research you do, but what decisions it influences. Amplitude found that teams measuring "research-informed feature success rate" show 35% higher user satisfaction scores.
The question you need to ask your organization isn't whether your research is good enough. It's whether your research to product collaboration process is strong enough to ensure those insights actually shape what gets built.
The AI Automation Breakthrough: Key Insights from Our Latest Community Event
Last night, Optimal brought together an incredible community of product leaders and innovators for "The Automation Breakthrough: Workflows for the AI Era" at Q-Branch in Austin, Texas. This two-hour in-person event featured expert perspectives on how AI and automation are transforming the way we work, create, and lead.
The event opened with a lightning talk on "Designing for Interfaces" by Cindy Brummer, Founder of Standard Beagle Studio, followed by a dynamic panel discussion titled "The Automation Breakthrough" with industry leaders including Joe Meersman (Managing Partner, Gyroscope AI), Carmen Broomes (Head of UX, Handshake), Kasey Randall (Product Design Lead, Posh AI), and Prateek Khare (Head of Product, Amazon). We also had a fireside chat with our CEO, Alex Burke, and Stu Smith, Head of Design at Atlassian.
Here are the key themes and insights that emerged from these conversations:
Trust & Transparency: The Foundation of AI Adoption
Cindy emphasized that trust and transparency aren't just nice-to-haves in the AI era, they're essential. As AI tools become more integrated into our workflows, building systems that users can understand and rely on becomes paramount. This theme set the tone for the entire event, reminding us that technological advancement must go hand-in-hand with ethical considerations.
Automation Liberates Us from Grunt Work
One of the most resonant themes was how AI fundamentally changes what we spend our time on. As Carmen noted, AI reduces the grunt work and tasks we don't want to do, freeing us to focus on what matters most. This isn't about replacing human workers, it's about eliminating the tedious, repetitive tasks that drain our energy and creativity.
Enabling Creativity and Higher-Quality Decision-Making
When automation handles the mundane, something remarkable happens: we gain space for deeper thinking and creativity. The panelists shared powerful examples of this transformation:
Carmen described how AI and workflows help teams get to insights and execution on a much faster scale, rather than drowning in comments and documentation. Prateek encouraged the audience to use automation to get creative about their work, while Kasey shared how AI and automation have helped him develop different approaches to coaching, mentorship, and problem-solving, ultimately helping him grow as a leader.
The decision-making benefits were particularly striking. Prateek explained how AI and automation have helped him be more thoughtful about decisions and make higher-quality choices, while Kasey echoed that these tools have helped him be more creative and deliberate in his approach.
Democratizing Product Development
Perhaps the most exciting shift discussed was how AI is leveling the playing field across organizations. Carmen emphasized the importance of anyone, regardless of their role, being able to get close to their customers. This democratization means that everyone can get involved in UX, think through user needs, and consider the best experience.
The panel explored how roles are blurring in productive ways. Kasey noted that "we're all becoming product builders" and that product managers are becoming more central to conversations. Prateek predicted that teams are going to get smaller and achieve more with less as these tools become more accessible.
Automation also plays a crucial role in iteration, helping teams incorporate customer feedback more effectively, according to Prateek.
Practical Advice for Navigating the AI Era
The panelists didn't just share lofty visions, they offered concrete guidance for professionals navigating this transformation:
Stay perpetually curious. Prateek warned that no acquired knowledge will stay with you for long, so you need to be ready to learn anything at any time.
Embrace experimentation. "Allow your process to misbehave," Prateek advised, encouraging attendees to break from rigid workflows and explore new approaches.
Overcome fear. Carmen urged the audience not to be afraid of bringing in new tools or worrying that AI will take their jobs. The technology is here to augment, not replace.
Just start. Kasey's advice was refreshingly simple: "Just start and do it again." Whether you're experimenting with AI tools or trying "vibe coding," the key is to begin and iterate.
The energy in the room at Q-Branch reflected a community that's not just adapting to change but actively shaping it. The automation breakthrough isn't just about new tools, it's about reimagining how we work, who gets to participate in product development, and what becomes possible when we free ourselves from repetitive tasks.
As we continue to navigate the AI era, events like this remind us that the most valuable insights come from bringing diverse perspectives together. The conversation doesn't end here, it's just beginning.
Interested in joining future Optimal community events? Stay tuned for upcoming gatherings where we'll continue exploring the intersection of design, product, and emerging technologies.

Reimagining User Interviews for the Modern Product Workflow
When we planned our product roadmap for 2025, we talked to our users to understand their biggest pain points, and one thing came up time and time again: conducting and analyzing user interviews, while one of the most important aspects of user research, was still incredibly painful and time-consuming.
So we went away and tried to envision the perfect workflow for user interviews for product, design and research teams, and what we came up with looked a little something like this:
- Upload a video, and within minutes, key insights surface automatically
- Ask questions and get back evidence with video citations
- Create video highlight reels faster than ever for shareable insights
- User voices reach product teams and executives in time to influence product decisions
Then we went and built it.
Interviews, Reimagined
Traditional interviews are passive. They sit in folders, waiting for someone to have time to review them. But what if interviews could speak for themselves? What if they could surface their own insights, highlight critical moments, and answer follow-up questions?
This isn't science fiction, it's the natural evolution of user research, powered by AI (and built by Optimal).
Most research teams have folders full of unanalyzed video content, with hours of valuable insights buried in the footage, and unfortunately, talking to your users doesn't matter if insights never surface. Many are already trying to leverage AI to solve some of these challenges, but generic AI tools miss the nuance of user research. They can transcribe words but can't identify pain points. They can find keywords but can't surface behavioral patterns. They understand language but not user psychology. The next generation of user interview tools requires research-grade AI. AI trained on user research methodologies. Algorithms that understand the difference between stated preferences and actual behavior. Technology that recognizes emotional cues, identifies friction points, and connects user needs to product opportunities.
Traditional analysis creates static reports. Product, design and research teams need tools for user interviews that create dynamic intelligence. Instead of documents that get filed away, imagine insights that flow directly into product decisions:
- Automatic highlight reels that bring user voices to stakeholder meetings
- Evidence-backed recommendations with supporting video clips
- Searchable repositories where any team member can ask questions and get answers
- Real-time insight sharing that influences decisions while they're being made
Manual analysis can take weeks or even months, especially for large datasets. AI-powered tools can speed this process up significantly, but time savings is just the beginning. The real transformation happens when researchers stop spending time on manual tasks and start spending time on strategic thinking. When analysis happens automatically, human intelligence can focus on synthesis, strategy, and storytelling.
We are reimagining user interviews from the ground up. Instead of weeks of manual analysis we want you to be able to surface insights in hours. Instead of static reports, we want you to have dynamic, searchable intelligence. Instead of user voices lost in transcripts, we want to help you get video evidence that influences every product decision.
This isn't a distant future, it's happening now. We can’t wait for you to see it.