Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we're working on a product with only 90 internal users, our changes will have flow-on effects for tens of thousands of our members at any given time (out of around 3.6 million members in total).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high-volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job a client’s CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools and reports and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and also how to interact with the different tools and reports, all of which may have been created independently (i.e. they work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas quickly into systems. But expert users almost always end up regurgitating the system they're familiar with, as they've been trained by repeated use of systems to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn’t – most staff stick around for years) then you’ll be spending a lot of money.

…and the one I’ve added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code or even the interaction for most of the reports, as this will be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggle six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms with arbitrary link names, have to edit the URL by hand to reach hidden reports, and generally expend more effort on finding the answer than on comprehending the answer.

Groundwork

The first thing that we did was to sit with CS and watch them work and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things as green (use heaps), orange (use sometimes) and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links but that everyone used a core set. Initially focussing on the core set, we set about understanding the tasks under those links.
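If you wanted to tally the results of a traffic-light exercise like this one, a minimal sketch in Python might look like the following. The file name, column layout, and keep/drop thresholds here are invented for illustration; they're not our actual tooling.

```python
import csv
from collections import Counter, defaultdict

# Hypothetical input: one row per (team member, link, colour) from the
# highlighter exercise, e.g. "alice,SQL Lookup,green".
ratings = defaultdict(Counter)
with open("link_ratings.csv", newline="") as f:
    for person, link, colour in csv.reader(f):
        ratings[link][colour] += 1

for link, counts in sorted(ratings.items()):
    total = sum(counts.values())
    if counts["red"] == total:            # nobody uses it
        verdict = "remove from the new IA"
    elif counts["green"] / total >= 0.5:  # most people use it heaps
        verdict = "core set"
    else:
        verdict = "team-specific; review"
    print(f"{link}: {dict(counts)} -> {verdict}")
```

Even a crude cut like this surfaces the dead links immediately; tracking ratings per team rather than in aggregate would also show which links only certain teams rely on.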

The complexity of the job soon became apparent – with a complex system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are complex and detailed enough for there to be more than one way to achieve the same end, and often it’s not possible to get a definitive answer, only to ‘build a picture’. There’s no one-to-one mapping of task to link. Links were also often arbitrarily named: ‘SQL Lookup’ being an example. The highly trained user base depends on muscle memory to find these links. This meant that when asked something like “What and where is the policing enquiry function?”, many couldn’t tell us what or where it was, but when they needed the report it contained they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We renamed the links to better describe their associated actions, and of course, they weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.
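For readers unfamiliar with how clusters like these fall out of a card sort, the standard approach is to count how often each pair of items lands in the same pile: frequently co-grouped pairs are candidates to sit under the same heading. A minimal sketch, with two invented sorts standing in for real participant data:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort is a list of piles; each pile is a set of link
# labels. These two sorts are invented purely for illustration.
sorts = [
    [{"Emails", "Tickets", "Notes"}, {"Member info", "Listing info", "Reports"}],
    [{"Emails", "Notes"}, {"Tickets"}, {"Member info", "Listing info", "Reports"}],
]

pair_counts = Counter()
for piles in sorts:
    for pile in piles:
        # Count every pair of links placed in the same pile.
        for pair in combinations(sorted(pile), 2):
            pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together in {n} of {len(sorts)} sorts")
```

Run over real data, a co-occurrence matrix like this is what reveals splits such as the emails/tickets/notes versus member info/listing info/reports divide we saw.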

We built and tested two IAs

[Screenshot: pietree of the tree-testing results]

After card sorting, we created two new IAs, and then customized one of the IAs for each of the three CS teams, giving us IAs to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree test were okay — around 61% — but 'Could try harder'. We saw very little overall difference between the success of the two structures, but definitely some differences in task success. And we also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some ‘wrong’ answers would give part of the picture required. In some cases so much so that I reclassified them as ‘correct’, since they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguity is hard to replicate in a tree test, which wants definitive yes or no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see screenshot above), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had assumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.

What’s clear from analysis is that although it’s possible to provide definitive answers for a typical site’s IAs, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM has proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as ‘correct’ one of the two trees was a clear winner — it had gone from 61% to 69%. The other tree had only improved slightly, from 61% to 63%.
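The arithmetic behind that reclassification is simple, but worth spelling out. A sketch with made-up tallies, chosen only so the numbers match the percentages quoted above:

```python
# Hypothetical tallies for the winning tree: 100 scored answers in total.
answers = {"correct": 61, "partially_correct": 8, "wrong": 31}
total = sum(answers.values())

strict = answers["correct"] / total
# Reclassify the 'more right than wrong' answers as correct, as we did
# after reviewing the pietrees with a Trade Me Admin expert.
lenient = (answers["correct"] + answers["partially_correct"]) / total

print(f"strict success rate:  {strict:.0%}")   # 61%
print(f"lenient success rate: {lenient:.0%}")  # 69%
```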

There were still elements of our winning structure that performed sub-optimally, though. Generally, the problems were to do with labelling, where, in some cases, we had attempted to disambiguate those ‘SQL lookup’-type labels but in the process, confused the team. We were left with a dilemma: go with the new labels and make the system initially harder for existing staff but easier for new staff to learn, or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make it better.

This testing highlighted the importance of carefully structuring questions in a tree test, particularly in light of the ‘start anywhere/go anywhere’ nature of a CRM. The diffuse but powerful nature of a CRM means that tree test answer options need careful consideration, in order to decide how close to a ‘100% correct’ answer you want to get.

Development work has begun, so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages from Trade Me Admin, and continuing to conduct user research, including first click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.


Optimal vs Dovetail: Why Smart Product Teams Choose Unified Research Workflows

UX, product and design teams face growing challenges with tool proliferation, relying on different options for surveys, usability testing, and participant recruitment before transferring data into analysis tools like Dovetail. This fragmented workflow creates significant data integration issues and reporting bottlenecks that slow down teams trying to conduct smart, fast UX research. The constant switching between platforms not only wastes time but also increases the risk of data loss and inconsistencies across research projects. Optimal addresses these operational challenges by unifying the entire research workflow within a single platform, enabling teams to recruit participants, run tests and studies, and perform analysis without the complexity of managing multiple tools.

Why Choose Optimal over Dovetail? 

Unified Research Operations vs. Fragmented Workflow

Optimal's Streamlined Workflow: Optimal eliminates tool chain management by providing recruitment, testing, and analysis in one platform, enabling researchers to move seamlessly from study design to actionable insights.

Dovetail's Tool Chain Complexity: In contrast, Dovetail requires teams to coordinate multiple platforms: one for recruitment, another for surveys, a third for usability testing. Everything must then be imported for analysis, creating workflow bottlenecks and coordination overhead.

Optimal's Focused Research Flow: Optimal's unified interface keeps researchers in flow state, moving efficiently through research phases without context switching or tool coordination.

Context Switching Inefficiency: Dovetail users constantly switch between different tools with different interfaces, learning curves, and data formats, fragmenting focus and slowing research velocity.

Integrated Intelligence vs. Data Silos

Consistent Data Standards: Optimal's unified platform ensures consistent data collection standards, formatting, and quality controls across all research methods, delivering reliable insights from integrated data sources.

Fragmented Data Sources: Dovetail aggregates data from multiple external sources, but this fragmentation can create inconsistencies, data quality issues, and gaps in analysis that compromise insight reliability.

Automated Data Integration: Optimal automatically captures and integrates data across all research activities, enabling real-time analysis and immediate insight generation without manual data management.

Manual Data Coordination: Dovetail teams spend significant time importing, formatting, and reconciling data from different tools before analysis can begin, delaying insight delivery and increasing error risk.

Comprehensive Research Capabilities vs. Analysis-Only Focus

Complete End-to-End Research Platform: Optimal provides a full suite of native research capabilities including live site testing, prototype testing, card sorting, tree testing, surveys, and more, all within a single platform. Optimal's live site testing allows you to test actual websites and web apps with real users without any code requirements, enabling continuous optimization post-launch.

Dovetail Requires External Tools: Dovetail focuses primarily on analysis and requires teams to use separate tools for data collection, adding complexity and cost to the research workflow.

AI-Powered Interview Analysis: Optimal's new Interviews tool transforms how teams extract insights from user research. Upload interview videos and let AI automatically surface key themes, generate smart highlight reels, create timestamped transcripts, and produce actionable insights in hours instead of weeks. Every insight comes with supporting video evidence, making it easy to back up recommendations with real user feedback.

Dovetail's Manual Analysis Process: While Dovetail offers analysis features, teams must still coordinate external interview tools and manually import data before analysis can begin, creating additional workflow steps.

Global Research Capabilities vs. Limited Data Collection

Global Participant Network: Optimal's 10+ million verified participants across 150+ countries provide comprehensive recruitment capabilities with advanced targeting and quality assurance for any research requirement.

No Native Recruitment: Dovetail's beta participant recruitment add-on lacks the scale and reliability enterprise teams need, forcing dependence on external recruitment services with additional costs and complexity.

Complete Research ROI: Optimal delivers immediate value through integrated data collection and analysis capabilities, ensuring consistent ROI regardless of external research dependencies.

Analysis-Only Value: Dovetail's value depends entirely on research volume from external sources, making ROI uncertain for teams with moderate research needs or budget constraints.

Dovetail Challenges: 

Dovetail may slow teams because of challenges with: 

  • Multi-tool coordination requiring significant project management overhead
  • Data fragmentation creating inconsistencies and quality control challenges
  • Context switching between platforms disrupting research flow and focus
  • Manual data import and formatting delaying insight delivery
  • Complex tool chain management requiring specialized technical knowledge

When Optimal is the Right Choice

Optimal becomes essential for:

  • Streamlined Workflows: Teams needing efficient research operations without tool coordination overhead
  • Research Velocity: Projects requiring rapid iteration from hypothesis to validated insights
  • Data Consistency: Studies where integrated data standards ensure reliable analysis and conclusions
  • Focus and Flow: Researchers who need to maintain deep focus without platform switching
  • Immediate Insights: Teams requiring real-time analysis and instant insight generation
  • Resource Efficiency: Organizations wanting to maximize researcher productivity and minimize tool management

Ready to move beyond fragmented workflows to unified research operations? Experience how Optimal's end-to-end platform accelerates insight delivery and drives product decisions that create competitive advantage.


Optimal vs Ballpark: Why Research Depth Matters More Than Surface-Level Simplicity

Many smaller product teams find newer research tools like Ballpark attractive because they promise simple, quick user feedback. However, larger teams conducting UX research that drives product strategy need platforms capable of delivering actionable insights rather than just surface-level metrics. While Ballpark provides basic testing functionality that works for simple validation, Optimal offers the research depth, comprehensive analysis capabilities, and strategic intelligence that teams require when making critical product decisions.

Why Choose Optimal over Ballpark?

Surface-Level Feedback vs. Strategic Research Intelligence

  • Ballpark's Shallow Analysis: Ballpark focuses on collecting quick feedback through basic surveys and simple preference tests, but lacks the analytical depth needed to understand why users behave as they do or what actions to take based on findings.
  • Optimal's Strategic Insights: Optimal transforms user feedback into strategic intelligence through advanced analytics, behavioral analysis, and AI-powered insights that reveal not just what happened, but why it happened and what to do about it.
  • Limited Research Methodology: Ballpark's toolset centers on simple feedback collection without comprehensive research methods like advanced card sorting, tree testing, or sophisticated user journey analysis.
  • Complete Research Arsenal: Optimal provides the full spectrum of research methodologies needed to understand complex user behaviors, validate design decisions, and guide strategic product development.

Quick Metrics vs. Actionable Intelligence

  • Basic Data Collection: Ballpark provides simple metrics and basic reporting that tell you what happened but leave teams to figure out the 'why' and 'what next' on their own.
  • Intelligent Analysis: Optimal's AI-powered analysis doesn't just collect data—it identifies patterns, predicts user behavior, and provides specific recommendations that guide product decisions.
  • Limited Participant Insights: Ballpark's 3 million participant panel provides basic demographic targeting but lacks the sophisticated segmentation and behavioral profiling needed for nuanced research.
  • Deep User Understanding: Optimal's 100+ million verified participants across 150+ countries enable precise targeting and comprehensive user profiling that reveals deep behavioral insights and cultural nuances.

Startup Risk vs. Enterprise Reliability

  • Unproven Stability: As a recently founded startup with limited funding transparency, Ballpark presents platform stability risks and uncertain long-term viability for enterprise research investments.
  • Proven Enterprise Reliability: Optimal has successfully launched over 100,000 studies with 99.9% uptime guarantee, providing the reliability and stability enterprise organizations require.
  • Limited Support Infrastructure: Ballpark's small team and basic support options cannot match the dedicated account management and enterprise support that strategic research programs demand.
  • Enterprise Support Excellence: Optimal provides dedicated account managers, 24/7 enterprise support, and comprehensive onboarding that ensures research program success.

When to Choose Optimal

Optimal is the best choice for teams looking for: 

  • Actionable Intelligence: When teams need insights that directly inform product strategy and design decisions
  • Behavioral Understanding: Projects requiring deep analysis of why users behave as they do
  • Complex Research Questions: Studies that demand sophisticated methodologies and advanced analytics
  • Strategic Product Decisions: When research insights drive major feature development and business direction
  • Comprehensive User Insights: Teams needing complete user understanding beyond basic preference testing
  • Competitive Advantage: Organizations using research intelligence to outperform competitors

Ready to move beyond basic feedback to strategic research intelligence? Experience how Optimal's analytical depth and comprehensive insights drive product decisions that create competitive advantage.


Optimal vs Useberry: Why Strategic Research Requires More Than Basic Prototype Testing

Smaller research teams frequently gravitate toward lightweight tools like Useberry when they need quick user feedback. However, as product teams scale and tackle more complex challenges, they require platforms that can deliver both rapid insights and strategic depth. While Useberry offers basic prototype testing capabilities that work well for simple user feedback collection, Optimal provides the comprehensive feature set and flexible participant recruitment options that leading organizations depend on to make informed product and design decisions.

Why Choose Optimal over Useberry?

Rapid Feedback vs. Comprehensive Research Intelligence

  • Useberry's Basic Approach: Useberry focuses on simple prototype testing with basic click tracking and minimal analysis capabilities, lacking the sophisticated insights and enterprise features required for strategic research programs.
  • Optimal's Research Excellence: Optimal combines rapid study deployment with comprehensive research methodologies, AI-powered analysis, and enterprise-grade insights that transform user feedback into strategic business intelligence.
  • Limited Research Depth: Useberry provides surface-level metrics without advanced statistical analysis, AI-powered insights, or comprehensive reporting capabilities that enterprise teams require for strategic decision-making.
  • Strategic Intelligence Platform: Optimal delivers deep research capabilities with advanced analytics, predictive modeling, and AI-powered insights that enable data-driven strategy and competitive advantage.

Enterprise Scalability

  • Constrained Participant Options: Useberry offers limited participant recruitment with basic demographic targeting, restricting research scope and limiting access to specialized audiences required for enterprise research.
  • Global Research Network: Optimal's 100+ million verified participants across 150+ countries enable sophisticated targeting, international market validation, and reliable recruitment for any audience requirement.
  • Basic Quality Controls: Useberry lacks comprehensive participant verification and fraud prevention measures, potentially compromising data quality and research validity for mission-critical studies.
  • Enterprise-Grade Quality: Optimal implements advanced fraud prevention, multi-layer verification, and quality assurance protocols trusted by Fortune 500 companies for reliable research results.

Key Platform Differentiators for Enterprise

  • Limited Methodology Support: Useberry focuses primarily on prototype testing with basic surveys, lacking the comprehensive research methodology suite enterprise teams need for diverse research requirements.
  • Complete Research Platform: Optimal provides full-spectrum research capabilities including advanced card sorting, tree testing, surveys, prototype validation, and qualitative insights with integrated analysis across all methods.
  • Basic Security and Support: Useberry operates with standard security measures and basic support options, insufficient for enterprise organizations with compliance requirements and mission-critical research needs.
  • Enterprise Security and Support: Optimal delivers SOC 2 compliance, enterprise security protocols, dedicated account management, and 24/7 support that meets Fortune 500 requirements.

When to Choose Optimal vs. Useberry

Useberry may be a good choice for teams who are happy with:

  • Basic prototype testing needs without comprehensive research requirements
  • Limited participant targeting without sophisticated segmentation
  • Simple metrics without advanced analytics and AI-powered insights
  • Standard security needs without enterprise compliance requirements
  • Small-scale projects without global research demands

When Optimal Enables Research Excellence

Optimal becomes essential for:

  • Strategic Research Programs: When insights drive product strategy and business decisions
  • Enterprise Organizations: Requiring comprehensive security, compliance, and support infrastructure
  • Global Market Research: Needing international participant access and cultural localization
  • Advanced Analytics: Teams requiring AI-powered insights, statistical modeling, and predictive analysis
  • Quality-Critical Studies: Where participant verification and data integrity are paramount
  • Scalable Operations: Growing research programs needing enterprise-grade platform capabilities

Ready to transform research from basic feedback to strategic intelligence? Experience how Optimal's enterprise platform delivers the comprehensive capabilities and global reach your research program demands.


Optimal vs UXtweak: Why Enterprise Teams Need Comprehensive Research Platforms

The decision between specialized UX testing tools and comprehensive user insight platforms fundamentally shapes how teams generate, analyze, and act on user feedback. This choice affects not only immediate research capabilities but also long-term strategic planning and organizational impact. While UXtweak focuses primarily on basic usability testing with straightforward functionality, Optimal provides the robust capabilities, global participant reach, and advanced analytics infrastructure that the world's biggest brands rely on to build products users genuinely love. Optimal's platform enables teams to conduct sophisticated research, integrate insights across departments, and deliver actionable recommendations that drive meaningful business outcomes.

Why Choose Optimal over UXtweak?

Strategic User Research vs. Basic Testing

Optimal's Research Leadership: Optimal delivers complete research capabilities combining rapid study deployment with AI-powered insights, advanced participant targeting, and enterprise-grade analytics that transform user feedback into actionable business intelligence. This includes comprehensive live site testing that allows you to test actual websites and web apps without code, enabling continuous optimization and real-time user insights post-launch.

UXtweak's Limited Scope: In contrast, UXtweak operates primarily as a basic usability testing tool with simple click tracking and heat maps, lacking the sophisticated AI-powered analysis and comprehensive insights enterprise research programs demand for strategic impact.

Enterprise-Ready Platform: Optimal serves Fortune 500 clients including Lego, Nike, and Amazon with SOC 2 compliance, enterprise security protocols, and dedicated support infrastructure that scales with organizational growth.

Scalability Constraints: UXtweak's basic infrastructure and limited feature set restrict growth potential, making it unsuitable for enterprise teams requiring sophisticated research operations and global deployment capabilities.

Participant Quality and Advanced Analytics

Global Research Network: Optimal's 10+ million verified participants across 150+ countries enable sophisticated audience targeting, international market research, and reliable recruitment for any demographic or geographic requirement.

Limited Panel Access: UXtweak provides minimal participant recruitment options with basic targeting capabilities, restricting teams to simple demographic filters and limiting research scope for complex audience requirements.

AI-Powered Intelligence: Optimal includes sophisticated AI analysis tools that automatically generate insights, identify patterns, create statistical models, and deliver actionable recommendations that drive strategic decisions. Our new Interviews tool transforms research analysis: upload interview videos and let AI automatically surface key themes, generate smart highlight reels with timestamped evidence, and produce actionable insights in hours instead of weeks, eliminating manual analysis bottlenecks.

Surface-Level Analysis: UXtweak delivers basic click tracking and simple metrics without integrated AI tools or advanced statistical analysis, requiring teams to manually interpret raw data for insights.

Feature Depth and Platform Capabilities

Complete Research Suite: Optimal provides full-spectrum research capabilities including advanced card sorting, tree testing, prototype validation, surveys, and qualitative insights with integrated AI analysis across all methodologies.

Basic Tool Limitations: UXtweak offers elementary testing capabilities focused on simple click tracking and basic surveys, lacking the comprehensive research tools enterprise teams need for strategic product decisions.

Automated Research Operations: Optimal streamlines research workflows with automated participant matching, AI-powered analysis, integrated reporting, and seamless collaboration tools that accelerate insight delivery.

Manual Workflow Dependencies: UXtweak requires significant manual effort for study setup, participant management, and data analysis, creating workflow inefficiencies that slow research velocity and impact delivery timelines.

Where UXtweak Falls Short

UXtweak may suffice for teams with:

  • Basic testing needs without strategic research requirements
  • Simple demographic targeting without sophisticated segmentation
  • Manual analysis workflows without AI-powered insights
  • Limited budget prioritizing low cost over comprehensive capabilities
  • Small-scale projects without enterprise compliance needs

When Optimal Delivers Strategic Value

Optimal becomes essential for:

  • Strategic Research Programs: When user insights drive business strategy and product decisions
  • Global Organizations: Requiring international research capabilities and market validation
  • Quality-Critical Studies: Where participant verification, advanced analytics, and data integrity matter
  • Enterprise Compliance: Organizations with security, privacy, and regulatory requirements
  • Advanced Research Needs: Teams requiring AI-powered insights, statistical analysis, and comprehensive reporting
  • Scalable Operations: Growing programs needing enterprise-grade platform capabilities and support

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs.


Optimal vs Lyssna: Why Enterprise Teams Need Enterprise-Ready Platforms

The choice between comprehensive research platforms and tools designed for smaller teams becomes increasingly critical as research and product teams work to scale their user insight capabilities. This decision impacts not only immediate research outcomes but also long-term strategic planning and organizational growth. While platforms like Lyssna focus on rapid feedback collection and quick turnaround times, which are valuable for teams needing fast validation, Optimal delivers the depth, reliability, and enterprise features that the world's biggest brands require to make strategic product decisions.

Why do teams choose Optimal instead of Lyssna?

Comprehensive Insights vs. Speed-Only Focus

Optimal's Comprehensive Approach: Optimal combines speed with depth, delivering rapid study launch alongside AI-powered analysis, detailed reporting, and enterprise-grade insights that transform user feedback into actionable business intelligence. This includes live site testing capabilities that let you test actual websites and web apps without code, enabling continuous optimization post-launch.

Lyssna's Speed Focus: In contrast, Lyssna optimizes for quick feedback collection with simple testing workflows, but lacks AI-powered analysis, advanced reporting, and the sophisticated insights enterprise research programs require for strategic decision-making.

Trusted by Global Brands: Optimal serves enterprise clients including Lego, Nike, and Amazon with SOC 2 compliance, global security protocols, and dedicated enterprise support that meets Fortune 500 requirements.

Limited Enterprise Features: Lyssna operates as a testing tool rather than an enterprise platform, lacking the compliance, security, and support infrastructure global brands require for mission-critical research programs.

Participant Quality and Global Reach

Global Participant Network: Optimal's 10+ million verified participants across 150+ countries enable sophisticated audience targeting, global market research, and reliable recruitment for any demographic or geographic requirement.

Limited Panel Reach: Lyssna's small participant panel restricts targeting options and geographic coverage, particularly for niche audiences or international research requirements.

Verified Participant Quality: Optimal implements comprehensive fraud prevention, advanced screening protocols, and quality assurance processes that ensure participant authenticity and criteria matching for reliable research results.

Quality Control Issues: Users report that Lyssna participants often don't match requested criteria, compromising study validity and requiring additional screening overhead.

Advanced Features and Platform Capabilities

AI-Powered Insights: Optimal includes sophisticated AI analysis tools that automatically generate insights, identify patterns, and create actionable recommendations from research data. Our new Interviews tool exemplifies this innovation: upload interview videos and let AI automatically surface key themes, generate smart highlight reels with timestamped evidence, and produce actionable insights in hours instead of weeks.

Manual Analysis Required: Lyssna provides basic reporting without integrated AI tools, requiring teams to manually analyze results and generate insights from raw data.

Full-Service Flexibility: Optimal provides both self-service and white-glove managed recruitment services, accommodating varying team resources and research complexity with dedicated support for challenging recruitment scenarios.

Self-Service Only: Lyssna operates exclusively as a self-service platform without managed recruitment options for teams requiring specialized audience targeting or complex demographic requirements.

Sophisticated Yet Accessible: Optimal balances powerful functionality with intuitive design, providing guided templates and automation features that enable complex research without overwhelming users.

Simple but Limited: While Lyssna offers a straightforward interface, this simplicity comes with functional limitations that restrict test design flexibility and advanced research capabilities.

When to Choose Lyssna

Lyssna may suffice for teams with:

  • Basic testing needs without strategic implications
  • Limited budgets prioritizing low cost over comprehensive features
  • Simple research requirements without compliance needs
  • Acceptance of limited participant quality and geographic reach

When to Choose Optimal

Optimal becomes essential for:

  • Strategic Research Programs: When user insights drive business strategy
  • Global Organizations: Requiring international research capabilities
  • Quality-Critical Studies: Where participant verification and data integrity matter
  • Enterprise Compliance: Organizations with security and compliance requirements
  • Advanced Analysis Needs: Teams requiring AI-powered insights and sophisticated reporting
  • Scalable Research Operations: Growing programs needing comprehensive platform capabilities

Why Enterprise Teams Need to Prioritize Research Excellence

While Lyssna serves basic testing needs, enterprise research requires the depth, reliability, and global reach that only comprehensive platforms provide. Optimal delivers speed without sacrificing the sophisticated capabilities enterprise teams need for strategic decision-making. Don't compromise research quality for simple, quick tools.

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs.

