Optimal Blog

Articles and Podcasts on Customer Service, AI and Automation, Product, and more


Latest


4 options for running a card sort

This morning I eavesdropped on a conversation between Amy Worley (@worleygirl) and The SemanticWill™ (@semanticwill) on "the twitters". Aside from recommending two books by Donna Spencer (@maadonna), I asked Nicole Kaufmann, one of the friendly consultants at Optimal Usability, if she had any advice for Amy about reorganising 404 books into categories that make more sense. I don't know Amy's email address, and this is much too long for a tweet. In any case, I thought it might be helpful for someone else too, so here's what Nicole had to say:

In general I would recommend having at least three sources of information (e.g. 1x analytics + 1 open card sort + 1 tree test, or 2 card sorts + 1 tree test) in order to come up with a useful and reliable categorisation structure. Here are four options for how you could approach it, starting with my most preferred and ending with my least preferred:

Option A

  • Pick the 20-25 cards you think will be the most difficult to sort and the 20-25 you think will be the easiest, and test those in one open card sort.
  • Based on the results, create one or two sets of category structures which you can test in one or two closed card sorts. Consider replacing about half of the tested cards with new ones.
  • Based on the results of those two rounds of card sorting, create a categorisation structure and pick a set of difficult cards which you can turn into tasks for a tree test.
  • Plus: The categorisation is revised between studies. Relatively easy analysis.
  • Minus: Not all cards are tested. Depending on the number of studies, it needs about 80-110 participants. Time intensive.

Option B

  • Pick the 20-25 cards you think will be the most difficult to sort and the 20-25 you think will be the easiest, and test those in one open card sort.
  • Based on the results, run one or more closed card sorts, excluding the easiest cards and adding some new cards that haven't been tested before.
  • Plus: Card sorts with a reasonable number of cards; only 40-60 participants needed; quick to analyse.
  • Minus: Potential bias and misleading results if the wrong cards are picked.

Option C

  • Create your own top-level categories (5-8), possibly based on a card sort, and assign cards to these categories. Then pick random cards within each category and set up a card sort for each (5-8 sorts in total).
  • Based on the results, create a categorisation structure and a set of tasks which will be tested in a tree test.
  • Plus: Limited set of card sorts with reasonable number of cards, quick to analyse. Several sorts for comparison.
  • Minus: Potential bias and misleading results if the wrong top categories are picked. Potentially different categorisation schemes/approaches for each card sort, making them hard to combine into one solid categorisation structure.

Option D

  • Approach: Put all 404 cards into one open card sort, showing each participant only 40-50 cards.
  • Do a follow-up card sort with the most difficult and easiest cards (similar to Option B).
  • Plus: All cards will have been tested.
  • Minus: You need at least 200-300 completed responses to get reasonable results. Depending on your participant sources, it may take ages to recruit that many participants.
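Option D's coverage numbers are easy to sanity-check with a short script. The sketch below is hypothetical (card-sorting tools typically handle subset rotation for you): it greedily gives each participant the least-shown cards, so with 250 participants each sorting 45 of the 404 cards, every card ends up sorted 27 or 28 times (250 × 45 ÷ 404 ≈ 28).

```python
import random

def assign_card_subsets(num_cards=404, cards_per_participant=45,
                        num_participants=250, seed=42):
    """Assign each participant a subset of cards, balancing coverage.

    Greedy strategy: always show the cards that have been shown the
    fewest times so far, breaking ties randomly. This keeps per-card
    exposure counts within 1 of each other.
    """
    rng = random.Random(seed)
    cards = list(range(num_cards))
    counts = [0] * num_cards  # times each card has been shown
    assignments = []
    for _ in range(num_participants):
        rng.shuffle(cards)                    # random tie-breaking
        cards.sort(key=lambda c: counts[c])   # least-shown first (stable sort)
        subset = cards[:cards_per_participant]
        for c in subset:
            counts[c] += 1
        assignments.append(sorted(subset))
    return assignments, counts

assignments, counts = assign_card_subsets()
print(min(counts), max(counts))  # → 27 28
```

The same arithmetic explains the "200-300 responses" estimate: with fewer participants, each card is sorted too few times for stable groupings to emerge from the data.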

Optimal vs Dovetail: Why Smart Product Teams Choose Unified Research Workflows

UX, product and design teams face growing challenges with tool proliferation, relying on different options for surveys, usability testing, and participant recruitment before transferring data into analysis tools like Dovetail. This fragmented workflow creates significant data integration issues and reporting bottlenecks that slow down teams trying to conduct smart, fast UX research. The constant switching between platforms not only wastes time but also increases the risk of data loss and inconsistencies across research projects. Optimal addresses these operational challenges by unifying the entire research workflow within a single platform, enabling teams to recruit participants, run tests and studies, and perform analysis without the complexity of managing multiple tools.

Why Choose Optimal over Dovetail? 

Fragmented Workflow vs. Unified Research Operations

  • Dovetail's Tool Chain Complexity: Dovetail requires teams to coordinate multiple platforms—one for recruitment, another for surveys, a third for usability testing—then import everything for analysis, creating workflow bottlenecks and coordination overhead.
  • Optimal's Streamlined Workflow: Optimal eliminates tool chain management by providing recruitment, testing, and analysis in one platform, enabling researchers to move seamlessly from study design to actionable insights.
  • Context Switching Inefficiency: Dovetail users constantly switch between different tools with different interfaces, learning curves, and data formats, fragmenting focus and slowing research velocity.
  • Focused Research Flow: Optimal's unified interface keeps researchers in flow state, moving efficiently through research phases without context switching or tool coordination.

Data Silos vs. Integrated Intelligence

  • Fragmented Data Sources: Dovetail aggregates data from multiple external sources, but this fragmentation can create inconsistencies, data quality issues, and gaps in analysis that compromise insight reliability.
  • Consistent Data Standards: Optimal's unified platform ensures consistent data collection standards, formatting, and quality controls across all research methods, delivering reliable insights from integrated data sources.
  • Manual Data Coordination: Dovetail teams spend significant time importing, formatting, and reconciling data from different tools before analysis can begin, delaying insight delivery and increasing error risk.
  • Automated Data Integration: Optimal automatically captures and integrates data across all research activities, enabling real-time analysis and immediate insight generation without manual data management.

Limited Data Collection vs. Global Research Capabilities

  • No Native Recruitment: Dovetail's beta participant recruitment add-on lacks the scale and reliability enterprise teams need, forcing dependence on external recruitment services with additional costs and complexity.
  • Global Participant Network: Optimal's 100+ million verified participants across 150+ countries provide comprehensive recruitment capabilities with advanced targeting and quality assurance for any research requirement.
  • Analysis-Only Value: Dovetail's value depends entirely on research volume from external sources, making ROI uncertain for teams with moderate research needs or budget constraints.
  • Complete Research ROI: Optimal delivers immediate value through integrated data collection and analysis capabilities, ensuring consistent ROI regardless of external research dependencies.

Dovetail Challenges

Dovetail may slow teams because of challenges with: 

  • Multi-tool coordination requiring significant project management overhead
  • Data fragmentation creating inconsistencies and quality control challenges
  • Context switching between platforms disrupting research flow and focus
  • Manual data import and formatting delaying insight delivery
  • Complex tool chain management requiring specialized technical knowledge

When Optimal is the Right Choice

Optimal becomes essential for:

  • Streamlined Workflows: Teams needing efficient research operations without tool coordination overhead
  • Research Velocity: Projects requiring rapid iteration from hypothesis to validated insights
  • Data Consistency: Studies where integrated data standards ensure reliable analysis and conclusions
  • Focus and Flow: Researchers who need to maintain deep focus without platform switching
  • Immediate Insights: Teams requiring real-time analysis and instant insight generation
  • Resource Efficiency: Organizations wanting to maximize researcher productivity and minimize tool management

Ready to move beyond fragmented workflows to unified research operations? Experience how Optimal's integrated platform streamlines every phase of research, from recruitment to actionable insight.


Optimal vs Ballpark: Why Research Depth Matters More Than Surface-Level Simplicity

Many smaller product teams find newer research tools like Ballpark attractive due to their promises of being able to provide simple and quick user feedback tools. However, larger teams conducting UX research that drives product strategy need platforms capable of delivering actionable insights rather than just surface-level metrics. While Ballpark provides basic testing functionality that works for simple validation, Optimal offers the research depth, comprehensive analysis capabilities, and strategic intelligence that teams require when making critical product decisions.

Why Choose Optimal over Ballpark?

Surface-Level Feedback vs. Strategic Research Intelligence

  • Ballpark's Shallow Analysis: Ballpark focuses on collecting quick feedback through basic surveys and simple preference tests, but lacks the analytical depth needed to understand why users behave as they do or what actions to take based on findings.
  • Optimal's Strategic Insights: Optimal transforms user feedback into strategic intelligence through advanced analytics, behavioral analysis, and AI-powered insights that reveal not just what happened, but why it happened and what to do about it.
  • Limited Research Methodology: Ballpark's toolset centers on simple feedback collection without comprehensive research methods like advanced card sorting, tree testing, or sophisticated user journey analysis.
  • Complete Research Arsenal: Optimal provides the full spectrum of research methodologies needed to understand complex user behaviors, validate design decisions, and guide strategic product development.

Quick Metrics vs. Actionable Intelligence

  • Basic Data Collection: Ballpark provides simple metrics and basic reporting that tell you what happened but leave teams to figure out the 'why' and 'what next' on their own.
  • Intelligent Analysis: Optimal's AI-powered analysis doesn't just collect data—it identifies patterns, predicts user behavior, and provides specific recommendations that guide product decisions.
  • Limited Participant Insights: Ballpark's 3 million participant panel provides basic demographic targeting but lacks the sophisticated segmentation and behavioral profiling needed for nuanced research.
  • Deep User Understanding: Optimal's 100+ million verified participants across 150+ countries enable precise targeting and comprehensive user profiling that reveals deep behavioral insights and cultural nuances.

Startup Risk vs. Enterprise Reliability

  • Unproven Stability: As a recently founded startup with limited funding transparency, Ballpark presents platform stability risks and uncertain long-term viability for enterprise research investments.
  • Proven Enterprise Reliability: Optimal has successfully launched over 100,000 studies with 99.9% uptime guarantee, providing the reliability and stability enterprise organizations require.
  • Limited Support Infrastructure: Ballpark's small team and basic support options cannot match the dedicated account management and enterprise support that strategic research programs demand.
  • Enterprise Support Excellence: Optimal provides dedicated account managers, 24/7 enterprise support, and comprehensive onboarding that ensures research program success.

When to Choose Optimal

Optimal is the best choice for teams looking for: 

  • Actionable Intelligence: When teams need insights that directly inform product strategy and design decisions
  • Behavioral Understanding: Projects requiring deep analysis of why users behave as they do
  • Complex Research Questions: Studies that demand sophisticated methodologies and advanced analytics
  • Strategic Product Decisions: When research insights drive major feature development and business direction
  • Comprehensive User Insights: Teams needing complete user understanding beyond basic preference testing
  • Competitive Advantage: Organizations using research intelligence to outperform competitors

Ready to move beyond basic feedback to strategic research intelligence? Experience how Optimal's analytical depth and comprehensive insights drive product decisions that create competitive advantage.


Optimal vs Useberry: Why Strategic Research Requires More Than Basic Prototype Testing

Smaller research teams frequently gravitate toward lightweight tools like Useberry when they need quick user feedback. However, as product teams scale and tackle more complex challenges, they require platforms that can deliver both rapid insights and strategic depth. While Useberry offers basic prototype testing capabilities that work well for simple user feedback collection, Optimal provides the comprehensive feature set and flexible participant recruitment options that leading organizations depend on to make informed product and design decisions.

Why Choose Optimal over Useberry?

Rapid Feedback vs. Comprehensive Research Intelligence

  • Useberry's Basic Approach: Useberry focuses on simple prototype testing with basic click tracking and minimal analysis capabilities, lacking the sophisticated insights and enterprise features required for strategic research programs.
  • Optimal's Research Excellence: Optimal combines rapid study deployment with comprehensive research methodologies, AI-powered analysis, and enterprise-grade insights that transform user feedback into strategic business intelligence.
  • Limited Research Depth: Useberry provides surface-level metrics without advanced statistical analysis, AI-powered insights, or comprehensive reporting capabilities that enterprise teams require for strategic decision-making.
  • Strategic Intelligence Platform: Optimal delivers deep research capabilities with advanced analytics, predictive modeling, and AI-powered insights that enable data-driven strategy and competitive advantage.

Enterprise Scalability

  • Constrained Participant Options: Useberry offers limited participant recruitment with basic demographic targeting, restricting research scope and limiting access to specialized audiences required for enterprise research.
  • Global Research Network: Optimal's 100+ million verified participants across 150+ countries enable sophisticated targeting, international market validation, and reliable recruitment for any audience requirement.
  • Basic Quality Controls: Useberry lacks comprehensive participant verification and fraud prevention measures, potentially compromising data quality and research validity for mission-critical studies.
  • Enterprise-Grade Quality: Optimal implements advanced fraud prevention, multi-layer verification, and quality assurance protocols trusted by Fortune 500 companies for reliable research results.

Key Platform Differentiators for Enterprise

  • Limited Methodology Support: Useberry focuses primarily on prototype testing with basic surveys, lacking the comprehensive research methodology suite enterprise teams need for diverse research requirements.
  • Complete Research Platform: Optimal provides full-spectrum research capabilities including advanced card sorting, tree testing, surveys, prototype validation, and qualitative insights with integrated analysis across all methods.
  • Basic Security and Support: Useberry operates with standard security measures and basic support options, insufficient for enterprise organizations with compliance requirements and mission-critical research needs.
  • Enterprise Security and Support: Optimal delivers SOC 2 compliance, enterprise security protocols, dedicated account management, and 24/7 support that meets Fortune 500 requirements.

When to Choose Optimal vs. Useberry

Useberry may be a good choice for teams who are happy with:

  • Basic prototype testing needs without comprehensive research requirements
  • Limited participant targeting without sophisticated segmentation
  • Simple metrics without advanced analytics and AI-powered insights
  • Standard security needs without enterprise compliance requirements
  • Small-scale projects without global research demands

When Optimal Enables Research Excellence

Optimal becomes essential for:

  • Strategic Research Programs: When insights drive product strategy and business decisions
  • Enterprise Organizations: Requiring comprehensive security, compliance, and support infrastructure
  • Global Market Research: Needing international participant access and cultural localization
  • Advanced Analytics: Teams requiring AI-powered insights, statistical modeling, and predictive analysis
  • Quality-Critical Studies: Where participant verification and data integrity are paramount
  • Scalable Operations: Growing research programs needing enterprise-grade platform capabilities

Ready to transform research from basic feedback to strategic intelligence? Experience how Optimal's enterprise platform delivers the comprehensive capabilities and global reach your research program demands.


Optimal vs UXtweak: Why Enterprise Teams Need Comprehensive Research Platforms

The decision between specialized UX testing tools and comprehensive user insight platforms fundamentally shapes how teams generate, analyze, and act on user feedback. This choice affects not only immediate research capabilities but also long-term strategic planning and organizational impact. While UXtweak focuses primarily on basic usability testing with straightforward functionality, Optimal provides the robust capabilities, global participant reach, and advanced analytics infrastructure that the world's biggest brands rely on to build products users genuinely love. Optimal's platform enables teams to conduct sophisticated research, integrate insights across departments, and deliver actionable recommendations that drive meaningful business outcomes.

Why Choose Optimal over UXtweak?

Basic Testing vs. Strategic User Research

  • UXtweak's Limited Scope: UXtweak operates primarily as a basic usability testing tool with simple click tracking and heat maps, lacking the sophisticated AI-powered analysis and comprehensive insights enterprise research programs demand for strategic impact.
  • Optimal's Research Leadership: Optimal delivers complete research capabilities combining rapid study deployment with AI-powered insights, advanced participant targeting, and enterprise-grade analytics that transform user feedback into actionable business intelligence.
  • Scalability Constraints: UXtweak's basic infrastructure and limited feature set restrict growth potential, making it unsuitable for enterprise teams requiring sophisticated research operations and global deployment capabilities.
  • Enterprise-Ready Platform: Optimal serves Fortune 500 clients including Lego, Nike, and Amazon with SOC 2 compliance, enterprise security protocols, and dedicated support infrastructure that scales with organizational growth.

Participant Quality

  • Limited Panel Access: UXtweak provides minimal participant recruitment options with basic targeting capabilities, restricting teams to simple demographic filters and limiting research scope for complex audience requirements.
  • Global Research Network: Optimal's 100+ million verified participants across 150+ countries enable sophisticated audience targeting, international market research, and reliable recruitment for any demographic or geographic requirement.
  • Surface-Level Analysis: UXtweak delivers basic click tracking and simple metrics without integrated AI tools or advanced statistical analysis, requiring teams to manually interpret raw data for insights.
  • AI-Powered Intelligence: Optimal includes sophisticated AI analysis tools that automatically generate insights, identify patterns, create statistical models, and deliver actionable recommendations that drive strategic decisions.

Feature Depth and Platform Capabilities

  • Basic Tool Limitations: UXtweak offers elementary testing capabilities focused on simple click tracking and basic surveys, lacking the comprehensive research tools enterprise teams need for strategic product decisions.
  • Complete Research Suite: Optimal provides full-spectrum research capabilities including advanced card sorting, tree testing, prototype validation, surveys, and qualitative insights with integrated AI analysis across all methodologies.
  • Manual Workflow Dependencies: UXtweak requires significant manual effort for study setup, participant management, and data analysis, creating workflow inefficiencies that slow research velocity and impact delivery timelines.
  • Automated Research Operations: Optimal streamlines research workflows with automated participant matching, AI-powered analysis, integrated reporting, and seamless collaboration tools that accelerate insight delivery.

Where UXtweak Falls Short

UXtweak may still fit teams that only need:

  • Basic testing needs without strategic research requirements
  • Simple demographic targeting without sophisticated segmentation
  • Manual analysis workflows without AI-powered insights
  • Limited budget prioritizing low cost over comprehensive capabilities
  • Small-scale projects without enterprise compliance needs

When Optimal Delivers Strategic Value

Optimal becomes essential for:

  • Strategic Research Programs: When user insights drive business strategy and product decisions
  • Global Organizations: Requiring international research capabilities and market validation
  • Quality-Critical Studies: Where participant verification, advanced analytics, and data integrity matter
  • Enterprise Compliance: Organizations with security, privacy, and regulatory requirements
  • Advanced Research Needs: Teams requiring AI-powered insights, statistical analysis, and comprehensive reporting
  • Scalable Operations: Growing programs needing enterprise-grade platform capabilities and support

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs.


Optimal vs Lyssna: Why Enterprise Teams Need Enterprise-Ready Platforms

The choice between comprehensive research platforms and tools designed for smaller teams becomes increasingly critical as research and product teams work to scale their user insight capabilities. This decision impacts not only immediate research outcomes but also long-term strategic planning and organizational growth. While platforms like Lyssna focus on rapid feedback collection and quick turnaround times which are valuable for teams needing fast validation, Optimal delivers the depth, reliability, and enterprise features that the world's biggest brands require to make strategic product decisions.

Why do teams choose Optimal instead of Lyssna?

Speed vs. Comprehensive Insights

  • Lyssna's Speed Focus: Lyssna optimizes for quick feedback collection with simple testing workflows, but lacks AI-powered analysis, advanced reporting, and the sophisticated insights enterprise research programs require for strategic decision-making.
  • Optimal's Comprehensive Approach: Optimal combines speed with depth, delivering rapid study launch alongside AI-powered analysis, detailed reporting, and enterprise-grade insights that transform user feedback into actionable business intelligence.
  • Limited Enterprise Features: Lyssna operates as a testing tool rather than an enterprise platform, lacking the compliance, security, and support infrastructure global brands require for mission-critical research programs.
  • Trusted by Global Brands: Optimal serves enterprise clients including Lego, Nike, and Amazon with SOC 2 compliance, global security protocols, and dedicated enterprise support that meets Fortune 500 requirements.

Participant Quality and Global Reach

  • Limited Panel Reach: Lyssna's approximately 700,000 participant panel restricts targeting options and geographic coverage, particularly for niche audiences or international research requirements.
  • Global Participant Network: Optimal's 100+ million verified participants across 150+ countries enable sophisticated audience targeting, global market research, and reliable recruitment for any demographic or geographic requirement.
  • Quality Control Issues: Users report that Lyssna participants often don't match requested criteria, compromising study validity and requiring additional screening overhead.
  • Verified Participant Quality: Optimal implements comprehensive fraud prevention, advanced screening protocols, and quality assurance processes that ensure participant authenticity and criteria matching for reliable research results.

Advanced Features and Platform Capabilities

  • Manual Analysis Required: Lyssna provides basic reporting without integrated AI tools, requiring teams to manually analyze results and generate insights from raw data.
  • AI-Powered Insights: Optimal includes sophisticated AI analysis tools that automatically generate insights, identify patterns, and create actionable recommendations from research data.
  • Self-Service Only: Lyssna operates exclusively as a self-service platform without managed recruitment options for teams requiring specialized audience targeting or complex demographic requirements.
  • Full-Service Flexibility: Optimal provides both self-service and white-glove managed recruitment services, accommodating varying team resources and research complexity with dedicated support for challenging recruitment scenarios.
  • Simple but Limited: While Lyssna offers a straightforward interface, this simplicity comes with functional limitations that restrict test design flexibility and advanced research capabilities.
  • Sophisticated Yet Accessible: Optimal balances powerful functionality with intuitive design, providing guided templates and automation features that enable complex research without overwhelming users.

When to Choose Lyssna

Lyssna may suffice for teams with:

  • Basic testing needs without strategic implications
  • Limited budgets prioritizing low cost over comprehensive features
  • Simple research requirements without compliance needs
  • Acceptance of limited participant quality and geographic reach

When to Choose Optimal

Optimal becomes essential for:

  • Strategic Research Programs: When user insights drive business strategy
  • Global Organizations: Requiring international research capabilities
  • Quality-Critical Studies: Where participant verification and data integrity matter
  • Enterprise Compliance: Organizations with security and compliance requirements
  • Advanced Analysis Needs: Teams requiring AI-powered insights and sophisticated reporting
  • Scalable Research Operations: Growing programs needing comprehensive platform capabilities

Why Enterprises Need to Prioritize Enterprise Research Excellence

While Lyssna serves basic testing needs, enterprise research requires the depth, reliability, and global reach that only comprehensive platforms provide. Optimal delivers speed without sacrificing the sophisticated capabilities enterprise teams need for strategic decision-making. Don't compromise research quality for simple, quick tools.

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs.


Subscribe to the OW blog for an instantly better inbox


Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.