Blog

Optimal Blog

Articles and Podcasts on Customer Service, AI and Automation, Product, and more


Latest


How to Spot and Destroy Evil Attractors in Your Tree (Part 1)

Usability guru Jared Spool has written extensively about the 'scent of information'. This term describes how users are always 'on the hunt' through a site, click by click, to find the content they’re looking for. Tree testing helps you deliver a strong scent by improving organisation (how you group your headings and subheadings) and labelling (what you call each of them).

Anyone who’s seen a spy film knows there are always false scents and red herrings to lead the hero astray. And anyone who’s run a few tree tests has probably seen the same thing — headings and labels that lure participants to the wrong answer. We call these 'evil attractors'.

In Part 1 of this article, we’ll look at what evil attractors are, how to spot them at the answer end of your tree, and how to fix them. In Part 2, we’ll look at how to spot them in the higher levels of your tree.

The false scent — what it looks like in practice

One of my favourite examples of an evil attractor comes from a tree test we ran for consumer.org.nz, a New Zealand consumer-review website (similar to Consumer Reports in the USA). Their site listed a wide range of consumer products in a tree several levels deep, and they wanted to try out a few ideas to make things easier to find as the site grew bigger.

We ran the tests and got some useful answers, but we also noticed there was one particular subheading (Home > Appliances > Personal) that got clicks from participants looking for very different things — mobile phones, vacuum cleaners, home-theatre systems, and so on:

[Image: pic1]

The website intended the Personal appliance category to be for products like electric shavers and curling irons. But apparently, Personal meant many things to our participants: they also went there for 'personal' items like mobile phones and cordless drills that actually lived somewhere else.

This is the false scent — the heading that attracts clicks when it shouldn’t, leading participants astray. Hence this definition: an evil attractor is a heading that draws unwanted traffic across several unrelated tasks.

Evil attractors lead your users astray

Attracting clicks isn’t a bad thing in itself. After all, that’s what a good heading does — it attracts clicks for the content it contains (and discourages clicks for everything else). Evil attractors, on the other hand, attract clicks for things they shouldn’t. These attractors lure users down the wrong path, and when users find themselves in the wrong place they'll either back up and try elsewhere (if they’re patient) or give up (if they’re not). Because these attractor topics are magnets for the user’s attention, they make it less likely that your user will get to the place you intended.

The other evil part of these attractors is the way they hide in the shadows. Most of the time, they don’t get the lion’s share of traffic for a given task. Instead, they’ll poach 5–10% of the responses, luring away a fraction of users who might otherwise have found the right answer.

Find evil attractors easily in your data

The easiest attractors to spot are those at the answer end of your tree (where participants ended up for each task). If we can look across tasks for similar wrong answers, then we can see which of these might be evil attractors.

In your Treejack results, the Destinations tab lets you do just that. Here’s more of the consumer.org.nz example:

[Image: pic2]

Normally, when you look at this view, you’re looking down a column for big hits and misses for a specific task. To look for evil attractors, however, you’re looking for patterns across rows. In other words, you’re looking horizontally, not vertically. If we do that here, we immediately notice the row for Personal (highlighted yellow). See all those hits along the row? Those hits indicate an attractor — steady traffic across many tasks that seem to have little in common.

But remember, traffic alone is not enough. We’re looking for unwanted traffic across unrelated tasks. Do we see that here? Well, it looks like the tasks (about cameras, drills, laptops, vacuums, and so on) are not that closely related. We wouldn’t expect users to go to the same topic for each of these. And the answer they chose, Personal, certainly doesn’t seem to be the destination we intended. While we could rationalise why they chose this answer, it is definitely unwanted from an IA perspective.

So yes, in this case, we seem to have caught an evil attractor red-handed. Here’s a heading that’s getting steady traffic where it shouldn’t.
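If you export your results, this row-wise scan is easy to automate. Here’s a minimal Python sketch of the idea. The data shape (a dict of destination → per-task percentages), the 5% share threshold, and the three-task cutoff are illustrative assumptions, not Treejack’s actual export format:

```python
# Sketch: flag potential "evil attractors" in tree-test results.
# Assumes a matrix like the Destinations tab, where
# hits[destination][task] = % of participants who ended at that
# destination for that task. All thresholds are illustrative.

def find_evil_attractors(hits, correct, min_share=5, min_tasks=3):
    """Return destinations drawing unwanted traffic across tasks.

    hits:    dict mapping destination -> {task: percent_of_participants}
    correct: dict mapping task -> intended destination
    """
    attractors = []
    for dest, by_task in hits.items():
        # Tasks where this destination pulled a noticeable share of
        # traffic despite NOT being the intended answer.
        unwanted = [t for t, pct in by_task.items()
                    if pct >= min_share and correct.get(t) != dest]
        if len(unwanted) >= min_tasks:
            attractors.append((dest, unwanted))
    return attractors

# Toy data loosely modelled on the consumer-site example.
hits = {
    "Appliances > Personal": {"camera": 8, "drill": 6, "laptop": 7, "vacuum": 9},
    "Appliances > Kitchen":  {"camera": 2, "drill": 0, "laptop": 1, "vacuum": 5},
}
correct = {"camera": "Electronics > Cameras", "drill": "Tools > Drills",
           "laptop": "Electronics > Computers", "vacuum": "Appliances > Cleaning"}

print(find_evil_attractors(hits, correct))
# → [('Appliances > Personal', ['camera', 'drill', 'laptop', 'vacuum'])]
```

Note that Kitchen is not flagged even though it poached some vacuum traffic: one off-target task is noise, steady traffic across several unrelated tasks is the attractor pattern.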

Evil attractors are usually the result of ambiguity

It’s usually quite simple to figure out why an item in your tree is an evil attractor. In almost all cases, it’s because the item is vague or ambiguous — a word or phrase that could mean different things to different people. Look at our example above. In the context of a consumer-review site, Personal is too general to be a good heading. It could mean products you wear, or carry, or use in the bathroom, or a number of things. So, when those participants come along clutching a task, and they see Personal, a few of them think 'That looks like it might be what I’m looking for', and they go that way.

Individually, those choices may be defensible, but as an information architect, are you really going to group mobile phones with vacuum cleaners? The 'personal' link between them is tenuous at best.

Destroy evil attractors by being specific

Just as it’s easy to see why most attractors attract, it’s usually easy to fix them. Evil attractors trade in vagueness and ambiguity, so the obvious remedy is to make those headings more concrete and specific. In the consumer-site example, we looked at the actual content under the Personal heading. It turned out to be items like shavers, curling irons, and hair dryers. A quick discussion yielded Personal care as a promising replacement — one that should deter people looking for mobile phones and jewellery and the like.

In the second round of tree testing, among the other changes we made to the tree, we replaced Personal with Personal care. A few days later, the results confirmed our thinking. Our former evil attractor was no longer luring participants away from the correct answers:

[Image: pic3]

Testing once is good, testing twice is magic

This brings up a final point about tree testing (and about any kind of user testing, really): you need to iterate your testing — once is not enough.

The first round of testing shows you where your tree is doing well (yay!) and where it needs more work so you can make some thoughtful revisions. Be careful though. Even if the problems you found seem to have obvious solutions, you still need to make sure your revisions actually work for users, and don’t cause further problems.

The good news is, it’s dead easy to run a second test, because it’s just a small revision of the first. You already have the tasks and all the other bits worked out, so it’s just a matter of making a copy in Treejack, pasting in your revised tree, and hooking up the correct answers. In an hour or two, you’re ready to pilot it again (to err is human, remember) and send it off to a fresh batch of participants.

Two possible outcomes await.

  • Your fixes are spot-on, the participants find the correct answers more frequently and easily, and your overall score climbs. You could have skipped this second test, but confirming that your changes worked is both good practice and a good feeling. It’s also something concrete to show your boss.
  • Some of your fixes didn’t work, or (given the tangled nature of IA work) they worked for the problems you saw in Round 1, but now they’ve caused more problems of their own. Bad news, for sure. But better that you uncover them now in the design phase (when it takes a few days to revise and re-test) instead of further down the track when the IA has been signed off and changes become painful.

Stay tuned for more on evil attractors

In Part 1, we’ve covered what evil attractors are and how to spot them at the answer end of your tree: that is, evil attractors that participants chose as their destination when performing tasks. Hopefully, a future version of Treejack will be able to highlight these attractors to make your analysis that much easier.

In Part 2, we’ll look at how to spot evil attractors in the intermediate levels of your tree, where they lure participants into a section of the site that you didn’t intend. These are harder to spot, but we’ll see if we can ferret them out.

Let us know if you've caught any evil attractors red-handed in your projects.


Selling your design recommendations to clients and colleagues

If you’ve ever presented design findings or recommendations to clients or colleagues, then perhaps you’ve heard them say:

  • “We don’t have the budget or resources for those improvements.”
  • “The new executive project has higher priority.”
  • “Let’s postpone that to Phase 2.”

As an information architect, I’ve presented recommendations many times. And I’ve crashed and burned more than once by doing a poor job of selling some promising ideas. Here are some things I’ve learned from getting it wrong.

Buyers prefer sellers they like and trust

You need to establish trust with peers, developers, executives and so on before you present your findings and recommendations. It sounds obvious, yet presentations often fail due to unfamiliarity, sloppiness or designer arrogance.

A year ago I ran an IA test on a large company website. The project schedule was typically “aggressive” and the client’s VPs were endlessly busy. So I launched the test without their feedback. Saved time, right? Wrong. The client ignored all my IA recommendations, and their VPs ultimately rewrote my site map from scratch. I could have argued that they didn’t understand user-centered design. The truth is that I failed to establish credibility. I needed them to buy into the testing process, suggest test questions beforehand, or take the test as a control group. Anything to engage them would have helped – turning stakeholders into collaborators is a great way to establish trust.

Techniques for presenting UX recommendations

Many presentation tactics can be borrowed from salespeople, but a single blog post can’t do justice to the entire sales profession. So I’d just like to offer a few ideas to consider. No Jedi mind tricks though. Sincerity matters.

Emphasize product benefits, not product features

Beer commercials on TV don’t sell beer. They sell backyard parties and voluptuous strangers. Likewise, UX recommendations should emphasize product benefits rather than feature sets. This may be a common marketing strategy. However, the benefits should resonate with stakeholders and not just test participants. Stakeholders often don’t care about Joe End User. They care about ROI, a more flexible platform, a faster way to publish content – whatever metrics determine their job performance.

Several years ago, I researched call center data at a large corporation. To analyze the data, I eventually built a Web dashboard. The dashboard illustrated different types of customer calls by product. When I showed it to my co-workers, I presented the features and even the benefits of tracking usability issues this way. However, I didn’t research the specific benefits to my fellow designers. Consequently it was much, much harder to sell the idea. I should have investigated how a dashboard would fit into their daily routines. I had neglected the question that they silently asked: “What’s in it for me?”

Have a go at contrast selling

When selling your recommendations, consider submitting your dream plan first. If your stakeholders balk, introduce the practical solution next. The contrast in price will make the modest recommendation more palatable.

While working on e-commerce UI, I once ran a usability test on a checkout flow. The test clearly suggested improvements to the payment page. To try slipping it into an upcoming sprint, I asked my boss if we could make a few crucial fixes. They wouldn’t take much time. He said...no. In essence, my boss was comparing extra work to doing nothing. My mistake was compromising the proposal before even presenting it. I should have requested an entire package first: a full redesign of the shopping cart experience on all web properties. Then the comparison would have been a huge effort vs. a small effort.

Retailers take this approach every day. Car dealerships anchor buyers to lofty sticker prices, then offer cash back. Retailers like Amazon display strikethrough prices for similar effect. This works whenever buyers prefer justifying a purchase based on savings, not price.

Use the alternative choice close

Alternative Choice is a closing technique in which a buyer selects from two options. Cleverly, each answer implies a sale. Here are examples adapted for UX recommendations:

  • “Which website could we implement these changes on first, X or Y?”
  • “Which developer has more time available in the next sprint, Tom or Harry?”

This is better than simply asking, “Can we start on Website X?” or “Do we have any developers available?” Avoid any proposition that can be rejected with a direct “No.”

Convince with the embarrassment close

Buying decisions are emotional. When presenting recommendations to stakeholders, try appealing to their pride (remember, you’re not actually trying to embarrass someone). Again, sincerity is important. Some UX examples include:

  • “To be an industry leader, we need a best-of-breed design like Acme Co.”
  • “I know that you want your company to be the best. That’s why we’re recommending a full set of improvements instead of a quick fix.”

Techniques for answering objections once you’ve presented

Once you’ve done your best to present your design recommendations, you may still encounter resistance (surprise!). To make it simple, I’ve classified objections using the three points in the triangle model of project management: Time, Price and Quality. Any project can only have two. And when presenting design research, you’re advocating Quality, i.e. design usability or enhancements. Pushback on Quality generally means that people disagree with your designs (a topic for another day).

Therefore, objections will likely be based on Time or Price instead. In a perfect world, all design recommendations yield ROI backed by quantitative data. But many don’t. When selling the intangibles of “user experience” or “usability” improvements, here are some responses to consider when you hear “We don’t have time” or “We can’t afford it”.

“We don’t have time” means your project team values Time over Quality

If possible, ask people to consider future repercussions. If your proposal isn’t implemented now, it may require even more time and money later. Product lines and features expand, and new websites and mobile apps get built. What will your design improvements cost across the board in 6 months? Opportunity costs also matter. If your design recommendations are postponed, then perhaps you’ll miss the holiday shopping season, or the launch of your latest software release. What is the cost of not approving your recommendations?

“We can’t afford it” means your project team values Price over Quality

Many project sponsors nix user testing to reduce the design price tag. But there’s always a long-term cost. A buggy product generates customer complaints. The flawed design must then be tested, redesigned, and recoded. So, which is cheaper: paying for a single usability test now, or the aggregate cost of user dissatisfaction and future rework? Explain the difference between price and cost to your team.
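One way to explain that difference is with a quick back-of-the-envelope comparison. Every figure below is hypothetical; the point is only that the one-off price of testing and the aggregate cost of skipping it live on very different scales:

```python
# Illustrative price-vs-cost comparison. All figures are
# hypothetical placeholders, not real project numbers.

usability_test_price = 8_000   # price: one round of testing now

# Cost: the downstream bill for shipping the flawed design instead.
rework_cost  = 25_000          # redesign + recode after launch
support_cost = 6_000           # extra customer-service load
churn_cost   = 12_000          # revenue lost to dissatisfied users

cost_of_skipping = rework_cost + support_cost + churn_cost
print(cost_of_skipping)                          # 43000
print(cost_of_skipping - usability_test_price)   # 35000
```

Even with generous assumptions in the sponsor’s favour, the arithmetic tends to come out the same way: the price of a test is small next to the cost of not running it.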

Parting Thoughts

I realize that this only scratches the surface of sales, negotiation, persuasion and influence. Entire books have been written on topics like body language alone. Uncommon books in a UX library might include “Influence: The Psychology of Persuasion” by Robert Cialdini and “Secrets of Closing the Sale” by Zig Ziglar. Feel free to share your own ideas or references as well.

Any time we present user research, we’re selling. Stakeholder mental models are just as relevant as user mental models.


UX and careers in banking – Yawn or YAY?

In celebration of World Usability Day 2012, Optimal Workshop invited Natalie Kerschner, Senior Usability Analyst at BNZ Online, to give her take on this year’s theme of The Usability of Financial Systems.

Years ago, when I was starting my career in User Experience (UX), a big project came up that required a full-time UX role. At the time I was in a junior position, yet I was being given the chance to provide input throughout the entire project, help drive the design, define the business requirements and ensure it met all the user needs possible. It was an exciting proposition, however there was one problem; it was based in a bank! I tried everything I could to remove myself from this project, as I couldn’t imagine anything worse; after all, there is nothing appealing about dealing with finances!

Twelve years on and I am still working for a bank; in fact I’ve worked in several banks and all I can say is, oh how wrong I was! You see there is one thing about finances; absolutely everybody has to deal with them! Whether you love to budget and have savings goals, or don’t want to think about it at all, you still have to use money.

That is what makes it a UX dream!

Most industries are limited by a few target demographics, but in every financial project you need to go back to the basics and investigate who is using it, the why, when and where. People’s motivations and needs tend to be so incredibly diverse, you are never going to get tired of asking “Why” in this industry. If having an extremely varied demographic wasn’t challenging enough, the dramatic evolution of technology is also changing how people are dealing with and even thinking about their finances.

Two years ago, if your bank didn’t have a mobile application or at least a mobile strategy it wasn’t a major concern. Nowadays, as soon as a bank introduces a new mobile feature, social media sites are bombarded with comments from customers banking with competitors, saying, “When do we get this?” Times have rapidly changed and the public has a much lower tolerance for waiting for new features to be developed, and that alone has had a huge impact on how we carry out UX in the financial field. We no longer have time to do lengthy and large-scale usability projects as the technology, user needs and business needs can change radically in that time. As UX professionals, we have had to adapt to this changing landscape. The labs of old are gone, replaced by fast, iterative and, dare I say, Agile UX practices.

So what does a truly diverse demographic and swiftly changing technology give us?

In my particular situation, it gave me a marvelous opportunity to re-evaluate how I practiced UX, evolving it and integrating these new techniques into project teams a lot more easily than ever before. If you don’t have time for a full usability study at the end of a project, it makes sense to get the end users involved right from the start and keep them involved through to the finish. Yes, this is what the UX community has been saying we should do for years, but now it also makes sense to the business and development teams too. The fast changes in the industry are actually making it easier to get customer focus and input earlier, as the project teams are more open to experimenting, trialing designs and ideas early on and seeing what happens.

So is working in the financial industry boring for a UX professional?

Hardly! Being a UX professional in this type of business landscape draws you into the evolution of UX. Every day is filled with potential and fresh challenges, making the practice of UX in banking a whole lot more rewarding!

Natalie Kerschner
Senior Usability Analyst, BNZ Online


4 options for running a card sort

This morning I eavesdropped on a conversation between Amy Worley (@worleygirl) and The SemanticWill™ (@semanticwill) on "the twitters". Aside from recommending two books by Donna Spencer (@maadonna), I asked Nicole Kaufmann, one of the friendly consultants at Optimal Usability, if she had any advice for Amy about reorganising 404 books into categories that make more sense.

I don't know Amy's email address and this is much too long for a tweet. In any case I thought it might be helpful for someone else too, so here's what Nicole had to say:

In general I would recommend having at least three sources of information (e.g. 1x analytics + 1 open card sort + 1 tree test, or 2 card sorts + 1 tree test) in order to come up with a useful and reliable categorisation structure. Here are four options for how you could approach it (starting with my most preferred and ending with my least preferred):

Option A

  • Pick the 20-25 cards you think will be the most difficult and 20-25 cards that you think will be the easiest to sort, and test those in one open card sort.
  • Based on the results, create one or two sets of category structures which you can test in one or two closed card sorts. Consider replacing about half of the tested cards with new ones.
  • Based on the results of those two rounds of card sorting, create a categorisation structure and pick a set of difficult cards to turn into tasks for a tree test.
  • Plus: Categorisation is revised between studies. Relatively easy analysis.
  • Minus: Not all cards have been tested. Depending on the number of studies, you'll need about 80-110 participants. Time intensive.

Option B

  • Pick the 20-25 cards you think will be the most difficult and 20-25 cards that you think will be the easiest to sort, and test those in one open card sort.
  • Based on the results, do one or more closed card sorts, excluding the easiest cards and adding some new cards which haven't been tested before.
  • Plus: Card sort with reasonable number of cards, only 40-60 participants needed, quick to analyse.
  • Minus: Potential bias and misleading results if the wrong cards are picked.

Option C

  • Create your own top-level categories (5-8) (could be based on a card sort) and assign cards to these categories, then pick random cards within those categories and set up a card sort for each (5-8).
  • Based on the results, create a categorisation structure and a set of tasks which will be tested in a tree test.
  • Plus: Limited set of card sorts with reasonable number of cards, quick to analyse. Several sorts for comparison.
  • Minus: Potential bias and misleading results if the wrong top categories are picked. Potentially different categorisation schemes/approaches for each card sort, making them hard to combine into one solid categorisation structure.

Option D

  • Approach: Put all 404 cards into 1 open card sort, showing each participant only 40-50 cards.
  • Do a follow-up card sort with the most difficult and easiest cards (similar to option B).
  • Plus: All cards will have been tested.
  • Minus: You need at least 200-300 completed responses to get reasonable results. Depending on your participant sources it may take ages to get that many participants.
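The coverage arithmetic behind Option D can be sketched in a few lines of Python. The 45-card deal, the 30-sorts-per-card target, and the helper function are illustrative assumptions of mine, not features of any card-sorting tool:

```python
import random

TOTAL_CARDS = 404
CARDS_PER_PARTICIPANT = 45  # within the 40-50 range suggested above

def deal_cards(all_cards, n, seed=None):
    """Randomly pick the subset of cards shown to one participant."""
    return random.Random(seed).sample(all_cards, n)

# Each participant sorts 45 of 404 cards, so with P participants each
# card is sorted about P * 45 / 404 times. Targeting roughly 30 sorts
# per card (an illustrative threshold, not a hard rule):
participants_needed = round(30 * TOTAL_CARDS / CARDS_PER_PARTICIPANT)
print(participants_needed)  # 269 — consistent with "at least 200-300"
```

This is why Option D sits last on the list: showing each participant a small random slice keeps individual sessions manageable, but the participant count balloons to keep per-card coverage reasonable.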

Digitalization and Customer-Centricity in the Utilities Sector

The utilities industry stands at a pivotal crossroads. With new generations of digitally-savvy consumers and mounting environmental pressures, the traditional utility business model is rapidly evolving. For UX professionals in this space, embracing digitalization isn't just about implementing new technologies; it's about fundamentally reimagining the customer experience to place users at the center of every decision.

The Changing Utility Landscape

Several forces are driving the urgent need for digital transformation in the utilities sector:

  • Rising customer expectations: Today's consumers, accustomed to seamless digital experiences from retailers and service providers, expect the same from their utility companies.
  • Environmental imperatives: The global push toward sustainability requires smarter resource management and customer engagement around conservation efforts.
  • Generational shifts: Younger consumers interact with service providers differently, preferring digital touchpoints and self-service options.
  • Competitive pressures: In deregulated markets, utilities that offer superior digital experiences gain a competitive advantage.

Defining Customer-Centric Digitalization

True customer-centricity in the utilities sector means more than simply adding digital channels; it requires a holistic approach that delivers value at every interaction point:

Digital Touchpoints That Matter

Successful utility digitalization focuses on creating meaningful customer connections across multiple channels:

  1. Mobile-first account management: Intuitive apps and responsive websites that allow customers to monitor usage, pay bills, and request services from any device.
  2. Self-service portals: Comprehensive knowledge bases and troubleshooting tools that empower customers to find answers and resolve issues independently.
  3. Smart home integration: Connecting utility services with smart home ecosystems to offer unprecedented convenience and control over resource usage.
  4. Personalized communications: Tailored outreach that reflects individual preferences, usage patterns, and needs rather than generic mass messaging.
  5. Interactive educational resources: Engaging digital content that helps customers understand their consumption and make informed decisions.

Technology Investments with Impact

For UX professionals advising on technology investments, prioritize solutions that directly enhance the customer experience:

High-Value Digital Investments

  • Customer data platforms: Systems that unify customer information across touchpoints to create comprehensive profiles that inform personalization efforts.
  • Advanced analytics: Tools that transform usage data into actionable insights for both customers and the business.
  • Omnichannel communication systems: Platforms that ensure consistent experiences whether a customer reaches out via app, website, phone, or in person.
  • IoT and smart metering infrastructure: Technologies that enable real-time monitoring and proactive service management.
  • User experience research tools: Solutions that gather continuous feedback to drive ongoing experience improvements.

Implementation Strategies for Success

To maximize the impact of digitalization efforts, consider these strategic approaches:

  1. Begin with customer journey mapping: Thoroughly document every touchpoint in the customer lifecycle to identify pain points and opportunities for digital enhancement.
  2. Adopt human-centered design practices: Involve actual customers in the design process through testing, feedback sessions, and co-creation workshops.
  3. Implement agile delivery methods: Release digital improvements incrementally, gathering user feedback to refine features before full-scale deployment.
  4. Invest in internal digital literacy: Ensure staff across the organization understand and can leverage new digital capabilities to better serve customers.
  5. Measure what matters: Develop metrics that track not just adoption of digital tools but their impact on customer satisfaction and business outcomes.

Optimal is your Partner in Customer-Centric Digitalization

For utilities serious about creating exceptional digital experiences, Optimal's suite of UX research tools provides invaluable support throughout the digitalization journey:

Discovering Customer Needs with Card sorting

Before building new digital interfaces, understand how customers naturally organize information:

  • Run card sorting exercises to determine how users expect utility services to be categorized
  • Identify terminology that resonates with customers versus industry jargon that creates confusion
  • Create information architectures that match customers' mental models, resulting in more intuitive navigation

Validating Navigation Structures with Tree testing

For complex utility portals with multiple services and functions:

  • Test the navigability of your website structure before investing in development
  • Identify where customers expect to find specific functions like usage monitoring, bill payment, or service requests
  • Optimize menu structures to ensure customers can complete common tasks efficiently

Perfecting Page Layouts with First-click testing

When designing critical utility service interfaces:

  • Test where users first click when trying to complete high-priority tasks
  • Ensure important functions like outage reporting or emergency contacts are immediately discoverable
  • Validate that key actions stand out visually on both desktop and mobile interfaces

Gathering Voice of Customer with Surveys

To ensure digitalization efforts address genuine customer needs:

  • Run targeted surveys to understand customer preferences for digital versus traditional service channels
  • Identify specific pain points in current service delivery that digitalization should address
  • Segment feedback by customer type to develop targeted digital strategies for different user groups

Analyzing with Qualitative insights

During user testing of new digital platforms:

  • Capture rich, contextual observations of how customers interact with digital interfaces
  • Identify recurring themes in customer feedback that reveal improvement opportunities
  • Transform qualitative insights into actionable design recommendations

Looking Ahead: The Future of Utility Customer Experience

The digitalization journey is ongoing. Forward-thinking utilities are already exploring:

  • Predictive service models that address potential issues before customers experience problems
  • AR/VR applications for helping customers visualize energy-saving home improvements
  • Voice-activated service interfaces that make utility management effortless
  • Blockchain-based solutions for peer-to-peer energy trading in communities

Optimal is Creating a Foundation for Digital Success

The path to successful digitalization in utilities requires a deep understanding of customer needs, expectations, and behaviors. Optimal's integrated platform provides the research foundation needed to build truly customer-centric digital experiences:

  1. Begin with discovery: Use Card sorting and Surveys to understand how customers conceptualize utility services and what they value most in digital interactions.
  2. Validate before building: Test information architectures with Tree testing to ensure customers can navigate intuitively through your digital services.
  3. Refine the experience: Use First-click testing to perfect interface designs and identify where users naturally look for key functions.
  4. Learn continuously: Implement Qualitative insights to gather ongoing feedback that informs continuous improvements to your digital experience.

Conclusion

For UX professionals in the energy and utilities sector, the mandate is clear: digitalization is no longer optional but essential for meeting customer expectations and addressing environmental challenges. By investing strategically in technologies that enhance the customer experience at every touchpoint, and using robust UX research platforms like Optimal to guide these investments, utilities can transform their relationship with consumers from basic service providers to valued partners in resource management.

The most successful utilities will be those that view digitalization not merely as a technology upgrade but as a fundamental shift toward customer-centricity, placing the user's needs, preferences, and experiences at the heart of every business decision. With Optimal as your research partner, you can ensure your digitalization efforts truly deliver on the promise of exceptional customer experiences.


Optimal vs Dovetail: Why Smart Product Teams Choose Unified Research Workflows

UX, product and design teams face growing challenges with tool proliferation, relying on different options for surveys, usability testing, and participant recruitment before transferring data into analysis tools like Dovetail. This fragmented workflow creates significant data integration issues and reporting bottlenecks that slow down teams trying to conduct smart, fast UX research. The constant switching between platforms not only wastes time but also increases the risk of data loss and inconsistencies across research projects. Optimal addresses these operational challenges by unifying the entire research workflow within a single platform, enabling teams to recruit participants, run tests and studies, and perform analysis without the complexity of managing multiple tools.

Why Choose Optimal over Dovetail? 

Unified Research Operations vs. Fragmented Workflow

Optimal's Streamlined Workflow: Optimal eliminates tool chain management by providing recruitment, testing, and analysis in one platform, enabling researchers to move seamlessly from study design to actionable insights.

Dovetail's Tool Chain Complexity: In contrast, Dovetail requires teams to coordinate multiple platforms — one for recruitment, another for surveys, a third for usability testing — and then import everything for analysis, creating workflow bottlenecks and coordination overhead.

Optimal's Focused Research Flow: Optimal's unified interface keeps researchers in flow state, moving efficiently through research phases without context switching or tool coordination.

Context Switching Inefficiency: Dovetail users constantly switch between different tools with different interfaces, learning curves, and data formats, fragmenting focus and slowing research velocity.

Integrated Intelligence vs. Data Silos

Consistent Data Standards: Optimal's unified platform ensures consistent data collection standards, formatting, and quality controls across all research methods, delivering reliable insights from integrated data sources.

Fragmented Data Sources: Dovetail aggregates data from multiple external sources, but this fragmentation can create inconsistencies, data quality issues, and gaps in analysis that compromise insight reliability.

Automated Data Integration: Optimal automatically captures and integrates data across all research activities, enabling real-time analysis and immediate insight generation without manual data management.

Manual Data Coordination: Dovetail teams spend significant time importing, formatting, and reconciling data from different tools before analysis can begin, delaying insight delivery and increasing error risk.

Comprehensive Research Capabilities vs. Analysis-Only Focus

Complete End-to-End Research Platform: Optimal provides a full suite of native research capabilities including live site testing, prototype testing, card sorting, tree testing, surveys, and more, all within a single platform. Optimal's live site testing allows you to test actual websites and web apps with real users without any code requirements, enabling continuous optimization post-launch.

Dovetail Requires External Tools: Dovetail focuses primarily on analysis and requires teams to use separate tools for data collection, adding complexity and cost to the research workflow.

AI-Powered Interview Analysis: Optimal's new Interviews tool transforms how teams extract insights from user research. Upload interview videos and let AI automatically surface key themes, generate smart highlight reels, create timestamped transcripts, and produce actionable insights in hours instead of weeks. Every insight comes with supporting video evidence, making it easy to back up recommendations with real user feedback.

Dovetail's Manual Analysis Process: While Dovetail offers analysis features, teams must still coordinate external interview tools and manually import data before analysis can begin, creating additional workflow steps.

Global Research Capabilities vs. Limited Data Collection

Global Participant Network: Optimal's 10+ million verified participants across 150+ countries provide comprehensive recruitment capabilities with advanced targeting and quality assurance for any research requirement.

Limited Native Recruitment: Dovetail's beta participant recruitment add-on lacks the scale and reliability enterprise teams need, forcing dependence on external recruitment services with additional costs and complexity.

Complete Research ROI: Optimal delivers immediate value through integrated data collection and analysis capabilities, ensuring consistent ROI regardless of external research dependencies.

Analysis-Only Value: Dovetail's value depends entirely on research volume from external sources, making ROI uncertain for teams with moderate research needs or budget constraints.

Dovetail Challenges

Dovetail may slow teams down due to challenges with:

  • Multi-tool coordination requiring significant project management overhead
  • Data fragmentation creating inconsistencies and quality control challenges
  • Context switching between platforms disrupting research flow and focus
  • Manual data import and formatting delaying insight delivery
  • Complex tool chain management requiring specialized technical knowledge

When Optimal is the Right Choice

Optimal becomes essential for:

  • Streamlined Workflows: Teams needing efficient research operations without tool coordination overhead
  • Research Velocity: Projects requiring rapid iteration from hypothesis to validated insights
  • Data Consistency: Studies where integrated data standards ensure reliable analysis and conclusions
  • Focus and Flow: Researchers who need to maintain deep focus without platform switching
  • Immediate Insights: Teams requiring real-time analysis and instant insight generation
  • Resource Efficiency: Organizations wanting to maximize researcher productivity and minimize tool management

Ready to move beyond basic feedback to strategic research intelligence? Experience how Optimal's analytical depth and comprehensive insights drive product decisions that create competitive advantage.

