June 4, 2020

Top Tasks in UX: How to Identify What Really Matters to Your Users

All the way back in 2014, the web passed a pretty significant milestone: 1 billion websites. Of course, fewer than 200 million of these were actually active as of 2019, but there’s an important underlying point: people love to create. If the current digital age has taught us anything, it’s that it has never been easier to get information and ideas out into the world.

Understandably, this ability has been used – and often misused. Overloaded, convoluted websites are par for the course, and a common approach to website renewal is to apply a new coat of paint while ignoring the swirling pile of outdated, poorly organized content underneath.

So what are you supposed to do when trying to address this problem on your own website or digital project? Well, there’s a fairly robust technique called top tasks management. Here, we’ll go over exactly what it is and how you can use it.

Getting to grips with top tasks

Ideally, all websites would be given regular, comprehensive reviews. Old content could be revisited and analyzed to see whether it’s still actually serving a purpose. If not, it could be reworked or removed entirely. Based on research, content creators could add new content to address user needs. Of course, this is just the ideal. The reality is that there’s never really enough time or resources to manage the growing mass of digital content in this way. The solution is to home in on what your users actually use your website for and tailor the experience accordingly by looking at top tasks.

What are top tasks? They're basically a small set of tasks (typically around 5, but up to 10 is OK too) that are most important to your users. The thinking goes that if you get these core tasks right, your website will be serving the majority of your users and you’ll be more likely to retain them. Ignore top tasks (and any sort of task analysis), and you’ll likely find users leaving your website to find something else that better fits their needs.

The counter to top tasks is tiny tasks. These are everything on a website that’s not all that important for the people actually using it. Commonly, tiny tasks are driven more by the organization’s needs than those of the users. Typically, the more important a task is to a user, the less information there is to support it. On the other hand, the less important a task is to a user, the more information there is. Tiny tasks stem very much from ‘organization first’ thinking, wherein user needs are placed lower on the list of considerations.

According to Gerry McGovern (who penned an excellent write-up of top tasks on A List Apart), the top tasks model says “Focus on what really matters (the top tasks) and defocus on what matters less (the tiny tasks).”

How to identify top tasks

Figuring out your top tasks is an important step in clearing away the fog and identifying what actually matters to your users. We’ll call this stage of the process task discovery, and these are the steps:

  1. Gather: Work with your organization to gather a list of all customer tasks
  2. Refine: Take this list of tasks to a smaller group of stakeholders and work it down into a shortlist
  3. User feedback: Go out to your users and get a representative sample to vote on them
  4. Finalize: Assemble a table of tasks, ordered from the most votes at the top to the fewest at the bottom

We’ll go into detail on the above steps, explaining the best way of handling each one. Keep in mind that this process isn’t something you’ll be able to complete in a week – it’s more likely a 6 to 8-week project, depending on the size of your website, how large your user base is and the receptiveness of your organization to help out.

Step 1: Gather – Figure out the long list of tasks

The first part of the task discovery process is to get out into the wider organization and discover what your users are actually trying to accomplish on your website or by using your products. It’s all about getting into the minds of your users – trying to see the world through their eyes.

If you’re struggling to think of places where you might find customer tasks, here are some of the best sources:

  • Analytics: Take a deep dive into the analytics of your website or product to find out how people are using them. For websites, you’ll want to look at pages with high traffic and common downloads or interactions (see the sketch after this list). The same applies to products – although the data you have access to will depend on the analytics systems in place.
  • Customer support teams: Your own internal support teams can be a great source of user tasks. Support teams commonly spend all day speaking to users, and as a result, are able to build up a cohesive understanding of the types of tasks users commonly attempt.
  • Sales teams: Similarly, sales teams are another good source of task data. Sales teams typically deal with people before they become your users, but a part of their job is to understand the problems they’re trying to solve and how your website or product can help.
  • Direct customer feedback: Check for surveys your organization has run in the past to see whether any task data already exists.
  • Social media: Head to Twitter, Facebook and LinkedIn to see what people are talking about with regards to your industry. What tasks are being mentioned?
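To make the analytics source above concrete, here is a minimal sketch of ranking pages by traffic to surface candidate tasks. The file name (analytics_export.csv) and its column names are assumptions for illustration; swap in whatever your analytics tool actually exports.

```python
import csv
from collections import Counter

# Hypothetical analytics export with 'page' and 'views' columns -- adjust
# the file name and column names to match your own analytics tool.
page_views = Counter()
with open("analytics_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        page_views[row["page"]] += int(row["views"])

# The most-visited pages hint at the tasks users are already trying to complete.
for page, views in page_views.most_common(20):
    print(f"{views:>8}  {page}")
```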

It’s important to note that you need to cast a wide net when gathering task data. You can’t just rely on analytics data. Why? Well, downloads and page visits only reflect what you have, but not what your users might actually be searching for.

As for search, Gerry McGovern explains why it doesn’t actually tell the entire story: “When we worked on the BBC intranet, we found they had a feature called “Top Searches” on their homepage. The problem was that once they published the top searches list, these terms no longer needed to be searched for, so in time a new list of top searches emerged! Similarly, top tasks tend to get bookmarked, so they don’t show up as much in search. And the better the navigation, the more likely the site search is to reflect tiny tasks.”

At the end of the initial task-gathering stage you should be left with around 300 to 500 tasks. Of course, this can scale up or down depending on the size of the website or product.

Step 2: Refine – Create your shortlist

Now that you’ve got your long list of tasks, it’s time to trim them back until you’ve got a shortlist of 100 or less. Keep in mind that working through your long list of tasks is going to take some time, so plan for this process to take at least 4 weeks (but likely more).

It’s important to involve stakeholders from across the organization during the shortlist process. Bring in people from support, sales, product, marketing and leadership areas of the organization. In addition to helping you to create a more concise and usable list, the shortlist process helps your stakeholders to think about areas of overlap and where they may need to work together.

When working your list down to something more usable, try to consolidate and simplify. Stay away from product names as well as internal organization and industry jargon. With your tasks, you essentially want to focus on the underlying thing that a user is trying to do. If you were focusing on tasks for a bank, opt for “Transactions” instead of “Digital mobile payments”. Similarly, bring together tasks where possible. “Customer support”, “Help and support” and “Support center” can all be merged.
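As a rough illustration of that consolidation step, the sketch below maps near-duplicate task labels onto a single canonical task. The synonym mapping here is made up for this example; in practice you would build it with your stakeholders during the shortlisting workshops.

```python
# Hand-made mapping from near-duplicate labels to one canonical task.
synonyms = {
    "help and support": "Customer support",
    "support center": "Customer support",
    "digital mobile payments": "Transactions",
}

def canonical(task: str) -> str:
    """Map a raw task label onto its agreed canonical form."""
    cleaned = task.strip()
    return synonyms.get(cleaned.lower(), cleaned)

long_list = ["Customer support", "Help and support", "Support center ",
             "Digital mobile payments", "Interest rates"]
shortlist = sorted(set(canonical(task) for task in long_list))
print(shortlist)  # ['Customer support', 'Interest rates', 'Transactions']
```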

At a very technical level, it also helps to avoid lengthy tasks. Stick to around 7 to 8 words and try to avoid verbs, using them only when there’s really no other option. You’ll find that your task list becomes quite difficult to navigate when tasks begin with “look”, “find” and “get”. Finally, stay away from specific audiences and demographics. You want to keep your tasks universal.

Step 3: User feedback – Get users to vote

With your shortlist created, it’s time to take it to your users. Using a survey tool like Optimal's Surveys, add each of your shortlisted tasks and ask users to pick the 5 tasks that matter most to them, ranking them from 1 to 5, with 5 being the most important and 1 the least important.

If you’re thinking that your users will never take the time to work through such a long list, consider that the very length of the list means they’ll seek out the tasks that matter to them and ignore the ones that don’t.

A section of the customer survey in Questions.

Step 4: Finalize – Analyze your results

Now for the task analysis side of the project. What you want at the end of the user survey is a league table of your entire shortlist of tasks. We’re going to use the example from Cisco’s top tasks project, which has been documented over at A List Apart by Gerry McGovern (who actually ran the project). The entire article is worth a read as it covers the process of running a top tasks project for a large organization.
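As a rough illustration (not the method used in the Cisco project), here is a minimal sketch of how raw survey votes might be tallied into such a league table. The response format and the sample numbers are assumptions; your survey tool’s export will look different.

```python
from collections import Counter

# Each response: the respondent's five chosen tasks -> their 1-5 rating
# (5 = most important). These two responses are made-up sample data.
responses = [
    {"Customer support": 5, "Transactions": 4, "Open an account": 3,
     "Interest rates": 2, "Branch locations": 1},
    {"Transactions": 5, "Customer support": 4, "Interest rates": 3,
     "Mortgage calculator": 2, "Open an account": 1},
]

scores = Counter()
for response in responses:
    for task, rating in response.items():
        scores[task] += rating            # sum the weighted votes per task

# most_common() gives the league table, highest total first
for rank, (task, score) in enumerate(scores.most_common(), start=1):
    print(f"{rank:>2}. {task} ({score} points)")
```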

Here’s what a league table of the top 20 tasks looks like from Cisco:

A league table of the top 20 tasks from Cisco’s top tasks project. Credit: Gerry McGovern.

Here’s the breakdown of the vote for Cisco’s tasks:

  • 3 tasks received the first 25 percent of the vote
  • 6 tasks received the next 25 percent (25-50 percent)
  • 14 tasks received the next 25 percent (50-75 percent)
  • 44 tasks shared the final 25 percent (75-100 percent)

While the pattern may seem surprising, it’s actually not unusual. As Gerry explains: “We have done this process over 400 times and the same patterns emerge every single time.”
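A small sketch of how that breakdown can be computed from a finished league table: walk down the tasks in vote order and note which quarter of the cumulative vote each one falls into. The vote totals below are invented to reproduce the long-tail shape; they are not Cisco’s actual numbers.

```python
# Made-up vote totals for 20 tasks, ordered from most to fewest votes.
vote_totals = [900, 650, 500, 300, 250, 200, 180, 150, 120, 100,
               90, 80, 70, 60, 50, 40, 35, 30, 25, 20]

total = sum(vote_totals)
cumulative = 0
bands = {"0-25%": 0, "25-50%": 0, "50-75%": 0, "75-100%": 0}

for votes in vote_totals:
    cumulative += votes
    share = 100 * cumulative / total      # cumulative share of the vote so far
    if share <= 25:
        bands["0-25%"] += 1
    elif share <= 50:
        bands["25-50%"] += 1
    elif share <= 75:
        bands["50-75%"] += 1
    else:
        bands["75-100%"] += 1

print(bands)   # a handful of tasks soak up the first quarter of the vote
```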

Final thoughts

Top tasks management is a practice best revisited on a regular basis. The approach benefits organizations in a multitude of ways, bringing different teams and people together to figure out how to best address why your users are coming to your website and what they actually need from you.

As we explained at the beginning of this article, top tasks is really about clearing away the fog and understanding what really matters. Instead of spreading yourself thin across a host of tiny tasks, home in on those top tasks that actually matter to your users.

Understanding how to improve your website

The top tasks approach is an effective way of giving you a clear idea of what you should be focusing on when designing or redesigning your website, but this should really just be one aspect of the work you do.

Utilizing a host of other UX research methods can give you a much more comprehensive idea of what’s working and what’s not. With card sorting, for example, you can learn how your users think the content on your website should be arranged. Then, with this data in hand, you can use tree testing to assemble draft structures of your website and test how people navigate their way through it. You can keep iterating on these structures to ensure you’ve created the most user-friendly navigation.

Take a look at our 101 guides to learn more about card sorting and tree testing, as well as the other user research methods you can use to make solid improvements to your website. If you’d rather just start putting methods into practice using user research tools, take our UX platform for a spin for free here.


Related articles


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. It would seem that they’re almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple sketch using pen and paper to simulate an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

Different ways of designing paper prototypes, using OptimalSort as an example

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any distinct judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast: Pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets, and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it: Paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity: From the product teams participating in their design, but also from the users. They require the user to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure: Paper prototypes and user-centered design go hand in hand. Introducing real people into your design process as early as possible can help verify whether you are on the right track, and generate feedback that may give you a good idea of whether your idea is likely to succeed or not.

Disadvantages 😬

  • They’re not as polished as interactive prototypes: If executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if our users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited: Digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
  • They require facilitation: With an interactive prototype you can assign your users tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator in communicating next steps and ensuring participants understand the task at hand.
  • Their results have to be interpreted carefully: Paper prototypes can’t emulate the final experience entirely. It is important to interpret their findings while keeping their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives. The first is to benchmark the current experience on laptops and tablets and identify ways in which we can improve the current interface. The second is to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices. Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 percent of our users utilize the app.

We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set. The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with using paper prototypes was determined by the specific setup of our testing sessions, or simply by their pure usefulness as a research technique is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of what exactly a card sort required. Therefore there is no guarantee that we would have achieved the same level of success in testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.


The Great Debate: Speed vs. Rigor in Modern UX Research

Most product teams treat UX research as something that happens to them: a necessary evil that slows things down or a luxury they can't afford. The best product teams flip this narrative completely. Their research doesn't interrupt their roadmap; it powers it.

"We need insights by Friday."

"Proper research takes at least three weeks."

This conversation happens in product teams everywhere, creating an eternal tension between the need for speed and the demands of rigor. But what if this debate is based on a false choice?

Research that Moves at the Speed of Product

Product development has accelerated dramatically. Two-week sprints are standard. Daily deployment is common. Feature flags allow instant iterations. In this environment, a four-week research study feels like asking a Formula 1 race car to wait for a horse-drawn carriage.

The pressure is real. Product teams make dozens of decisions per sprint about features, designs, priorities, and trade-offs. Waiting weeks for research on each decision simply isn't viable. So teams face an impossible choice: make decisions without insights or slow down dramatically.

As a result, most teams choose speed. They make educated guesses, rely on assumptions, and hope for the best. Then they wonder why features flop and users churn.

The False Dichotomy

The framing of "speed vs. rigor" assumes these are opposing forces. But the best research teams have learned they're not mutually exclusive; they require different approaches for different situations.

We think about research in three buckets, each serving a different strategic purpose:

Discovery: You're exploring a space, building foundational knowledge, understanding the landscape before you commit to a direction. This is where you uncover the problems worth solving and identify opportunities that weren't obvious from inside your product bubble.

Fine-Tuning: You have a direction but need to nail the specifics. What exactly should this feature do? How should it work? What's the minimum viable version that still delivers value? This research turns broad opportunities into concrete solutions.

Delivery: You're close to shipping and need to iron out the final details: copy, flows, edge cases. This isn't about validating whether you should build it; it's about making sure you build it right.

Every week, our product, design, research and engineering leads review the roadmap together. We look at what's coming and decide which type of research goes where. The principle is simple: If something's already well-shaped, move fast. If it's risky and hard to reverse, invest in deeper research.

How Fast Can Good Research Be?

The answer is: surprisingly fast, when structured correctly! 

For our teams, how deep we go isn't about how much time we have: it's about how much it would hurt to get it wrong. This is a strategic choice that most teams get backwards.

Go deep when the stakes are high: foundational decisions that affect your entire product architecture, things that would be expensive to reverse, moments where you need multiple stakeholders aligned around a shared understanding of the problem.

Move fast when you can afford to be wrong: incremental improvements to existing flows, things you can change easily based on user feedback, places where you want to ship-learn-adjust in tight loops.

Think of it as portfolio management for your research investment. Save your "big research bets" for the decisions that could set you back months, not days. Use lightweight validation for everything else.

And while good research can be fast, speed isn't always the answer. There are definitely situations where deep research needs to run, and it takes time. Save those moments for high-stakes investments like repositioning your entire product, entering new markets, or pivoting your business model. But be cautious of research perfectionism, which is a particular risk with deep research. Perfection is the enemy of progress. Your research team shouldn’t be asking "Is this research perfect?" but instead "Is this insight sufficient for the decision at hand?"

The research goal should always be appropriate confidence, not perfect certainty.

The Real Trade-Off

The choice shouldn’t be speed vs. rigor; it's between:

  • Research that matters (timely, actionable, sufficient confidence)
  • Research that doesn't (perfect methodology, late arrival, irrelevant to decisions)

The best research teams have learned to be ruthlessly pragmatic. They match research effort to decision impact. They deliver "good enough" insights quickly for small decisions and comprehensive insights thoughtfully for big ones.

Speed and rigor aren't enemies. They're partners in a portfolio approach where each decision gets the right level of research investment. The teams winning aren't choosing between speed and rigor—they're choosing the appropriate blend for each situation.


Top User Research Platforms 2025

User research software isn't what it used to be. The days of insights being locked away in specialist UX research teams are fading fast, replaced by a world where product managers, designers, and even marketers are running their own usability testing, prototype validation, and user interviews. The best UX research platforms powering this shift have evolved from complex enterprise software into tools that genuinely enable teams to test with users, analyze results, and share insights faster.

This isn't just about better software, it's about a fundamental transformation in how organizations make decisions. Let's explore the top user research tools in 2025, what makes each one worth considering, and how they're changing the research landscape.


What Makes a UX Research Platform All-in-One?


The shift toward all-in-one UX research platforms reflects a deeper need: teams want to move from idea to insight without juggling multiple tools, logins, or data silos. A truly comprehensive research platform combines several key capabilities within a unified workflow.

The best all-in-one platforms integrate study design, participant recruitment, multiple research methods (from usability testing to surveys to interviews to navigation testing to prototype testing), AI-powered analysis, and insight management in one cohesive experience. This isn't just about feature breadth, it's about eliminating the friction that prevents research from influencing decisions. When your entire research workflow lives in one platform, insights move faster from discovery to action.

What separates genuine all-in-one solutions from feature-heavy tools is thoughtful integration. The best platforms ensure that data flows seamlessly between methods, participants can be recruited consistently across study types, and insights build upon each other rather than existing in isolation. This integrated approach enables both quick validation studies and comprehensive strategic research within the same environment.

1. Optimal: Best End-to-End UX Research Platform


Optimal has carved out a unique position in the UX research landscape: it’s powerful enough for enterprise teams at Netflix, HSBC, Lego, and Toyota, yet intuitive enough that anyone (product managers, designers, even marketers) can confidently run usability studies. That balance between depth and accessibility is hard to achieve, and it's where Optimal shines.

Unlike fragmented tool stacks, Optimal is a complete User Insights Platform that supports the full research workflow. It covers everything from study design and participant recruitment to usability testing, prototype validation, AI-assisted interviews, and a research repository. You don't need multiple logins or have to wonder where your data lives; it's all in one place.

Two recent features push the platform even further:

  • Live Site Testing: Run usability studies on your actual live product, capturing real user behavior in production environments.

  • Interviews: AI-assisted analysis dramatically cuts down time-to-insight from moderated sessions, without losing the nuance that makes qualitative research valuable.



One of Optimal's biggest advantages is its pricing model. There are no per-seat fees, no participant caps, and no limits on the number of users. Pricing is usage-based, so anyone on your team can run a study without needing a separate license or blowing your budget. It's a model built to support research at scale, not gate it behind permissioning.

Reviews on G2 reflect this balance between power and ease. Users consistently highlight Optimal's intuitive interface, responsive customer support, and fast turnaround from study to insight. Many reviewers also call out its AI-powered features, which help teams synthesize findings and communicate insights more effectively. These reviews reinforce Optimal's position as an all-in-one platform that supports research from everyday usability checks to strategic deep dives.

The bottom line? Optimal isn't just a suite of user research tools. It's a system that enables anyone in your organization to participate in user-centered decision-making, while giving researchers the advanced features they need to go deeper.

2. UserTesting: Remote Usability Testing


UserTesting built its reputation on one thing: remote usability testing with real-time video feedback. Watch people interact with your product, hear them think aloud, see where they get confused. It's immediate and visceral in a way that heat maps and analytics can't match.

The platform excels at both moderated and unmoderated usability testing, with strong user panel access that enables quick turnaround. Large teams particularly appreciate how fast they can gather sentiment data across UX research studies, marketing campaigns, and product launches. If you need authentic user reactions captured on video, UserTesting delivers consistently.

That said, reviews on G2 and Capterra note that while video feedback is excellent, teams often need to supplement UserTesting with additional tools for deeper analysis and insight management. The platform's strength is capturing reactions, though some users mention the analysis capabilities and data export features could be more robust for teams running comprehensive research programs.

A significant consideration: UserTesting operates on a high-cost model with per-user annual fees plus additional session-based charges. This pricing structure can create unpredictable costs that escalate as your research volume grows; teams often report budget surprises when conducting longer studies or more frequent research. For organizations scaling their research practice, transparent and predictable pricing becomes increasingly important.

3. Maze: Rapid Prototype Testing


Maze understands that speed matters. Design teams working in agile environments don't have weeks to wait for findings, they need answers now. The platform leans into this reality with rapid prototype testing and continuous discovery research, making it particularly appealing to individual designers and small product teams.

Its Figma integration is convenient for quick prototype tests. However, the platform's focus on speed involves trade-offs in flexibility as users note rigid question structures and limited test customization options compared to more comprehensive platforms. For straightforward usability tests, this works fine. For complex research requiring custom flows or advanced interactions, the constraints become more apparent.

User feedback suggests Maze excels at directional insights and quick design validation. However, researchers looking for deep qualitative analysis or longitudinal studies may find the platform limited. As one G2 reviewer noted, "perfect for quick design validation, less so for strategic research." The reporting tends toward surface-level metrics rather than the layered, strategic insights enterprise teams often need for major product decisions.

For teams scaling their research practice, some considerations emerge. Lower-tier plans limit the number of studies you can run per month, and full access to card sorting, tree testing, and advanced prototype testing requires higher-tier plans. For teams running continuous research or multiple studies weekly, these study caps and feature gates can become restrictive. Users also report prototype stability issues, particularly on mobile devices and with complex design systems, which can disrupt testing sessions. Originally built for individual designers, Maze works well for smaller teams but may lack the enterprise features, security protocols, and dedicated support that large organizations require for comprehensive research programs.

4. Dovetail: Research Centralization Hub

Dovetail has positioned itself as the research repository and analysis platform that helps teams make sense of their growing body of insights. Rather than conducting tests directly, Dovetail shines as a centralization hub where research from various sources can be tagged, analyzed, and shared across the organization. Its collaboration features ensure that insights don't get buried in individual files but become organizational knowledge.

Many teams use Dovetail alongside testing platforms like Optimal, creating a powerful combination where studies are conducted in dedicated research tools and then synthesized in Dovetail's collaborative environment. For organizations struggling with insight fragmentation or research accessibility, Dovetail offers a compelling solution to ensure research actually influences decisions.

5. Lookback: Moderated User Interviews


Lookback specializes in moderated user interviews and remote testing, offering a clean, focused interface that stays out of the way of genuine human conversation. The platform is designed specifically for qualitative UX work, where the goal is deep understanding rather than statistical significance. Its streamlined approach to session recording and collaboration makes it easy for teams to conduct and share interview findings.

For researchers who prioritize depth over breadth and want a tool that facilitates genuine conversation without overwhelming complexity, Lookback delivers a refined experience. It's particularly popular among UX researchers who spend significant time in one-on-one sessions and value tools that respect the craft of qualitative inquiry.

6. Lyssna: Quick and Light Design Feedback


Lyssna (formerly UsabilityHub) positions itself as a straightforward, budget-friendly option for teams needing quick feedback on designs. The platform emphasizes simplicity and fast turnaround, making it accessible for smaller teams or those just starting their research practice.

The interface is deliberately simple, which reduces the learning curve for new users. For basic preference tests, first-click tests, and simple prototype validation, Lyssna's streamlined approach gets you answers quickly without overwhelming complexity.

However, this simplicity involves significant trade-offs. The platform operates primarily as a self-service testing tool rather than a comprehensive research platform. Teams report that Lyssna lacks AI-powered analysis; you're working with raw data and manual interpretation rather than automated insight generation. The participant panel is notably smaller (around 530,000 participants) with limited geographic reach compared to enterprise platforms, and users mention quality control issues where participants don't consistently match requested criteria.

For organizations scaling beyond basic validation, the limitations become more apparent. There's no managed recruitment service for complex targeting needs, no enterprise security certifications, and limited support infrastructure. The reporting stays at a basic metrics level without the layered analysis or strategic insights that inform major product decisions. Lyssna works well for simple, low-stakes testing on limited budgets, but teams with strategic research needs, global requirements, or quality-critical studies typically require more robust capabilities.

Emerging Trends in User Research for 2025


The UX and user research industry is shifting in important ways:

Live environment usability testing is growing. Insights from real users on live sites are proving more reliable than artificial prototype studies. Optimal is leading this shift with dedicated Live Site Testing capabilities that capture authentic behavior where it matters most.

AI-powered research tools are finally delivering on their promise, speeding up analysis while preserving depth. The best implementations, like Optimal's Interviews, handle time-consuming synthesis without losing the nuanced context that makes qualitative research valuable.

Research democratization means UX research is no longer locked in specialist teams. Product managers, designers, and marketers are now empowered to run studies. This doesn't replace research expertise; it amplifies it by letting specialists focus on complex strategic questions while teams self-serve for straightforward validation.

Inclusive, global recruitment is now non-negotiable. Platforms that support accessibility testing and global participant diversity are gaining serious traction. Understanding users across geographies, abilities, and contexts has moved from nice-to-have to essential for building products that truly serve everyone.

How to Choose the Right Platform for Your Team


Forget feature checklists. Instead, ask:

Do you need qualitative vs. quantitative UX research? Some platforms excel at one, while others like Optimal provide robust capabilities for both within a single workflow.

Will non-researchers be running studies (making ease of use critical)? If this is your goal, prioritize intuitive interfaces that don't require extensive training.

Do you need global user panels, compliance features, or AI-powered analysis? Consider whether your industry requires specific certifications or if AI-assisted synthesis would meaningfully accelerate your workflow.

How important is integration with Figma, Slack, Jira, or Notion? The best platform fits naturally into your existing stack, reducing friction and increasing adoption across teams.


Evaluating All-in-One Research Capabilities

When assessing comprehensive research platforms, look beyond the feature list to understand how well different capabilities work together. The best all-in-one solutions excel at data continuity: participants recruited for one study can seamlessly participate in follow-up research, and insights from usability tests can inform survey design or interview discussion guides.

Consider your team's research maturity and growth trajectory. Platforms like Optimal that combine ease of use with advanced capabilities allow teams to start simple and scale sophisticated research methods as their needs evolve, all within the same environment. This approach prevents the costly platform migrations that often occur when teams outgrow point solutions.

Pay particular attention to analysis and reporting integration. All-in-one platforms should synthesize findings across research methods, not just collect them. The ability to compare prototype testing results with interview insights, or track user sentiment across multiple touchpoints, transforms isolated data points into strategic intelligence.

Most importantly, the best platform is the one your team will actually use. Trial multiple options, involve stakeholders from different disciplines, and evaluate not just features but how well each tool fits your team's natural workflow.

The Bottom Line: Powering Better Decisions Through Research


Each of these platforms brings strengths. But Optimal stands out for a rare combination: end-to-end research capabilities, AI-powered insights, and usability testing at scale in an all-in-one interface designed for all teams, not just specialists.

With the additions of Live Site Testing capturing authentic user behavior in production environments, and Interviews delivering rapid qualitative synthesis, Optimal helps teams make faster, better product decisions. The platform removes the friction that typically prevents research from influencing decisions, whether you're running quick usability tests or comprehensive mixed-methods studies.

The right UX research platform doesn't just collect data. It ensures user insights shape every product decision your team makes, building experiences that genuinely serve the people using them. That's the transformation happening right now: research is becoming central to how we build, not an afterthought.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.