June 4, 2020

Top Tasks in UX: How to Identify What Really Matters to Your Users

All the way back in 2014, the web passed a significant milestone: 1 billion websites. Granted, fewer than 200 million of these were actually active as of 2019, but there’s an important underlying point: people love to create. If the current digital age has taught us anything, it’s that it’s never been easier to get information and ideas out into the world.

Understandably, this ability has been used – and often misused. Overloaded, convoluted websites are par for the course, with a common tactic for website renewal being to simply update them with a new coat of paint while ignoring the swirling pile of outdated and poorly organized content below.

So what are you supposed to do when trying to address this problem on your own website or digital project? Well, there’s a fairly robust technique called top tasks management. Here, we’ll go over exactly what it is and how you can use it.

Getting to grips with top tasks

Ideally, all websites would be given regular, comprehensive reviews. Old content could be revisited and analyzed to see whether it’s still actually serving a purpose. If not, it could be reworked or removed entirely. Based on research, content creators could add new content to address user needs. Of course, this is just the ideal. The reality is that there’s never enough time or resources to manage the growing mass of digital content in this way. The solution is to home in on what your users actually use your website for, and to tailor the experience accordingly by looking at top tasks.

What are top tasks? They're basically a small set of tasks (typically around 5, but up to 10 is OK too) that are most important to your users. The thinking goes that if you get these core tasks right, your website will be serving the majority of your users and you’ll be more likely to retain them. Ignore top tasks (and any sort of task analysis), and you’ll likely find users leaving your website to find something else that better fits their needs.

The counter to top tasks is tiny tasks. These are everything on a website that’s not all that important for the people actually using it. Commonly, tiny tasks are driven more by the organization’s needs than those of the users. Typically, the more important a task is to a user, the less information there is to support it. On the other hand, the less important a task is to a user, the more information there is. Tiny tasks stem very much from ‘organization first’ thinking, wherein user needs are placed lower on the list of considerations.

According to Gerry McGovern (who penned an excellent write-up of top tasks on A List Apart), the top tasks model says: “Focus on what really matters (the top tasks) and defocus on what matters less (the tiny tasks).”

How to identify top tasks

Figuring out your top tasks is an important step in clearing away the fog and identifying what actually matters to your users. We’ll call this stage of the process task discovery, and these are the steps:

  1. Gather: Work with your organization to gather a list of all customer tasks
  2. Refine: Take this list of tasks to a smaller group of stakeholders and work it down into a shortlist
  3. User feedback: Go out to your users and get a representative sample to vote on them
  4. Finalize: Assemble a table of tasks, ordered from the most votes at the top to the fewest at the bottom

We’ll go into detail on each of the above steps, explaining the best way of handling each one. Keep in mind that this isn’t a process you’ll complete in a week – it’s more likely a six to eight-week project, depending on the size of your website, the size of your user base and how receptive your organization is to helping out.

Step 1: Gather – Figure out the long list of tasks

The first part of the task discovery process is to get out into the wider organization and discover what your users are actually trying to accomplish on your website or with your products. It’s all about getting into the minds of your users – trying to see the world through their eyes.

If you’re struggling to think of places where you might find customer tasks, here are some of the best sources:

  • Analytics: Take a deep dive into the analytics of your website or product to find out how people are using them. For websites, you’ll want to look at pages with high traffic and common downloads or interactions. The same applies to products – although the data you have access to will depend on the analytics systems in place.
  • Customer support teams: Your own internal support teams can be a great source of user tasks. Support teams commonly spend all day speaking to users, and as a result, are able to build up a cohesive understanding of the types of tasks users commonly attempt.
  • Sales teams: Similarly, sales teams are another good source of task data. Sales teams typically deal with people before they become your users, but a part of their job is to understand the problems they’re trying to solve and how your website or product can help.
  • Direct customer feedback: Check for surveys your organization has run in the past to see whether any task data already exists.
  • Social media: Head to Twitter, Facebook and LinkedIn to see what people are talking about with regards to your industry. What tasks are being mentioned?

It’s important to note that you need to cast a wide net when gathering task data. You can’t just rely on analytics data. Why? Well, downloads and page visits only reflect what you have, but not what your users might actually be searching for.

As for search, Gerry McGovern explains why it doesn’t tell the entire story: “When we worked on the BBC intranet, we found they had a feature called “Top Searches” on their homepage. The problem was that once they published the top searches list, these terms no longer needed to be searched for, so in time a new list of top searches emerged! Similarly, top tasks tend to get bookmarked, so they don’t show up as much in search. And the better the navigation, the more likely the site search is to reflect tiny tasks.”

At the end of the initial task-gathering stage you should be left with around 300 to 500 tasks. Of course, this can scale up or down depending on the size of the website or product.

Step 2: Refine – Create your shortlist

Now that you’ve got your long list of tasks, it’s time to trim it back until you’ve got a shortlist of 100 or fewer. Keep in mind that working through your long list is going to take some time, so plan for this process to take at least 4 weeks (and likely more).

It’s important to involve stakeholders from across the organization during the shortlist process. Bring in people from support, sales, product, marketing and leadership areas of the organization. In addition to helping you to create a more concise and usable list, the shortlist process helps your stakeholders to think about areas of overlap and where they may need to work together.

When working your list down to something more usable, try and consolidate and simplify. Stay away from product names as well as internal organization and industry jargon. With your tasks, you essentially want to focus on the underlying thing that a user is trying to do. If you were focusing on tasks for a bank, opt for “Transactions” instead of “Digital mobile payments”. Similarly, bring together tasks where possible. “Customer support”, “Help and support” and “Support center” can all be merged.

At a practical level, it also helps to avoid lengthy tasks. Stick to around 7 to 8 words, and try to avoid verbs, using them only when there’s really no other option. You’ll find that your task list becomes quite difficult to navigate when tasks begin with “look”, “find” and “get”. Finally, stay away from specific audiences and demographics. You want to keep your tasks universal.

Step 3: User feedback – Get users to vote

With your shortlist created, it’s time to take it to your users. Using a survey tool like Optimal's Surveys, add each of your shortlisted tasks and ask users to pick and rank their top 5, with 5 being the most important and 1 being the least important.

If you’re thinking that your users will never take the time to work through such a long list, consider that the very length of the list means they’ll seek out the tasks that matter to them and ignore the ones that don’t.
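Turning those votes into a ranked list is simple arithmetic: sum each task's points across all respondents. Here's a minimal sketch in Python, with invented task names and only two respondents for illustration:

```python
from collections import Counter

# Hypothetical responses: each respondent scores their 5 most important
# tasks, from 5 (most important) down to 1 (least important).
responses = [
    {"Download software": 5, "Troubleshooting": 4, "Pricing": 3,
     "Product specifications": 2, "Training": 1},
    {"Troubleshooting": 5, "Download software": 4, "Bug fixes": 3,
     "Pricing": 2, "Product specifications": 1},
]

scores = Counter()
for ranking in responses:
    scores.update(ranking)  # add each task's points to its running total

# League table: tasks ordered from highest total score to lowest
for task, score in scores.most_common():
    print(f"{score:3d}  {task}")
```

With real survey data you'd load hundreds of responses from your survey tool's export, but the tallying logic stays the same.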

A section of the customer survey in Questions.

Step 4: Finalize – Analyze your results

Now for the task analysis side of the project. What you want at the end of the user survey is a league table of your entire shortlist of tasks. We’re going to use the example from Cisco’s top tasks project, which has been documented over at A List Apart by Gerry McGovern (who actually ran the project). The entire article is worth a read, as it covers the process of running a top tasks project for a large organization.

Here’s what a league table of the top 20 tasks looks like from Cisco:

A league table of the top 20 tasks from Cisco’s top tasks project. Credit: Gerry McGovern.

Here’s the breakdown of the vote for Cisco’s tasks:

  • The top 3 tasks attracted the first 25 percent of the vote
  • The next 6 tasks attracted the second 25 percent (25–50 percent)
  • The next 14 tasks attracted the third 25 percent (50–75 percent)
  • The remaining 44 tasks shared the final 25 percent (75–100 percent)
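That quartile breakdown can be reproduced from any league table by walking down the cumulative share of the vote. Here's a sketch with invented vote counts (Cisco's real study had 67 shortlisted tasks and different numbers):

```python
# Hypothetical league table: (task, votes) sorted from most to fewest.
league_table = [("Task A", 100), ("Task B", 90), ("Task C", 80),
                ("Task D", 70), ("Task E", 60), ("Task F", 50),
                ("Task G", 40), ("Task H", 30), ("Task I", 20),
                ("Task J", 10)]

total = sum(votes for _, votes in league_table)
buckets = {"0-25%": 0, "25-50%": 0, "50-75%": 0, "75-100%": 0}

running = 0
for task, votes in league_table:
    running += votes
    share = running / total  # cumulative share of the total vote
    if share <= 0.25:
        buckets["0-25%"] += 1
    elif share <= 0.50:
        buckets["25-50%"] += 1
    elif share <= 0.75:
        buckets["50-75%"] += 1
    else:
        buckets["75-100%"] += 1

print(buckets)
```

Even with these made-up numbers, the same long-tail shape emerges: a handful of tasks soak up the first quartile of the vote, while a long tail of tiny tasks shares the last.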

While the pattern may seem surprising, it’s actually not unusual. As Gerry McGovern explains: “We have done this process over 400 times and the same patterns emerge every single time.”

Final thoughts

Top tasks management is a practice best revisited on a regular basis. The approach benefits organizations in a multitude of ways, bringing different teams and people together to work out why your users are coming to your website and what they actually need from you.

As we explained at the beginning of this article, top tasks is really about clearing away the fog and understanding what really matters. Instead of spreading yourself thin across a host of tiny tasks, home in on those top tasks that actually matter to your users.

Understanding how to improve your website

The top tasks approach is an effective way of giving you a clear idea of what you should be focusing on when designing or redesigning your website, but this should really just be one aspect of the work you do.

Utilizing a host of other UX research methods can give you a much more comprehensive idea of what’s working and what’s not. With card sorting, for example, you can learn how your users think the content on your website should be arranged. Then, with this data in hand, you can use tree testing to assemble draft structures of your website and test how people navigate their way through it. You can keep iterating on these structures to ensure you’ve created the most user-friendly navigation.

Take a look at our 101 guides to learn more about card sorting and tree testing, as well as the other user research methods you can use to make solid improvements to your website. If you’d rather just start putting methods into practice using user research tools, take our UX platform for a spin for free here.

Author: Optimal Workshop

Related articles

When AI Meets UX: How to Navigate the Ethical Tightrope

As AI takes on a bigger role in product decision-making and user experience design, ethical concerns are becoming more pressing for product teams. From privacy risks to unintended biases and manipulation, AI raises important questions: How do we balance automation with human responsibility? When should AI make decisions, and when should humans stay in control?

These aren't just theoretical questions: they have real consequences for users, businesses, and society. A chatbot that misunderstands cultural nuances, a recommendation engine that reinforces harmful stereotypes, or an AI assistant that collects too much personal data can all cause genuine harm while appearing to improve user experience.

The Ethical Challenges of AI

Privacy & Data Ethics

AI needs personal data to work effectively, which raises serious concerns about transparency, consent, and data stewardship:

  • Data Collection Boundaries – What information is reasonable to collect? Just because we can gather certain data doesn't mean we should.
  • Informed Consent – Do users really understand how their data powers AI experiences? Traditional privacy policies often don't do the job.
  • Data Longevity – How long should AI systems keep user data, and what rights should users have to control or delete this information?
  • Unexpected Insights – AI can draw sensitive conclusions about users that they never explicitly shared, creating privacy concerns beyond traditional data collection.

A 2023 study by the Baymard Institute found that 78% of users were uncomfortable with how much personal data was used for personalized experiences once they understood the full extent of the data collection. Yet only 12% felt adequately informed about these practices through standard disclosures.

Bias & Fairness

AI can amplify existing inequalities if it's not carefully designed and tested with diverse users:

  • Representation Gaps – AI trained on limited datasets often performs poorly for underrepresented groups.
  • Algorithmic Discrimination – Systems might unintentionally discriminate based on protected characteristics like race, gender, or disability status.
  • Performance Disparities – AI-powered interfaces may work well for some users while creating significant barriers for others.
  • Reinforcement of Stereotypes – Recommendation systems can reinforce harmful stereotypes or create echo chambers.

Recent research from Stanford's Human-Centered AI Institute revealed that AI-driven interfaces created 2.6 times more usability issues for older adults and 3.2 times more issues for users with disabilities compared to general populations, a gap that often goes undetected without specific testing for these groups.

User Autonomy & Agency

Over-reliance on AI-driven suggestions may limit user freedom and sense of control:

  • Choice Architecture – AI systems can nudge users toward certain decisions, raising questions about manipulation versus assistance.
  • Dependency Concerns – As users rely more on AI recommendations, they may lose skills or confidence in making independent judgments.
  • Transparency of Influence – Users often don't recognize when their choices are being shaped by algorithms.
  • Right to Human Interaction – In critical situations, users may prefer or need human support rather than AI assistance.

A longitudinal study by the University of Amsterdam found that users of AI-powered decision-making tools showed decreased confidence in their own judgment over time, especially in areas where they had limited expertise.

Accessibility & Digital Divide

AI-powered interfaces may create new barriers:

  • Technology Requirements – Advanced AI features often require newer devices or faster internet connections.
  • Learning Curves – Novel AI interfaces may be particularly challenging for certain user groups to learn.
  • Voice and Language Barriers – Voice-based AI often struggles with accents, dialects, and non-native speakers.
  • Cognitive Load – AI that behaves unpredictably can increase cognitive burden for users.

Accountability & Transparency

Who's responsible when AI makes mistakes or causes harm?

  • Explainability – Can users understand why an AI system made a particular recommendation or decision?
  • Appeal Mechanisms – Do users have recourse when AI systems make errors?
  • Responsibility Attribution – Is it the designer, developer, or organization that bears responsibility for AI outcomes?
  • Audit Trails – How can we verify that AI systems are functioning as intended?

How Product Owners Can Champion Ethical AI Through UX

At Optimal, we advocate for research-driven AI development that puts human needs and ethical considerations at the center of the design process. Here's how UX research can help:

User-Centered Testing for AI Systems

AI-powered experiences must be tested with real users to identify potential ethical issues:

  • Longitudinal Studies – Track how AI influences user behavior and autonomy over time.
  • Diverse Testing Scenarios – Test AI under various conditions to identify edge cases where ethical issues might emerge.
  • Multi-Method Approaches – Combine quantitative metrics with qualitative insights to understand the full impact of AI features.
  • Ethical Impact Assessment – Develop frameworks specifically designed to evaluate the ethical dimensions of AI experiences.

Inclusive Research Practices

Ensuring diverse user participation helps prevent bias and ensures AI works for everyone:

  • Representation in Research Panels – Include participants from various demographic groups, ability levels, and socioeconomic backgrounds.
  • Contextual Research – Study how AI interfaces perform in real-world environments, not just controlled settings.
  • Cultural Sensitivity – Test AI across different cultural contexts to identify potential misalignments.
  • Intersectional Analysis – Consider how various aspects of identity might interact to create unique challenges for certain users.

Transparency in AI Decision-Making

UX teams should investigate how users perceive AI-driven recommendations:

  • Mental Model Testing – Do users understand how and why AI is making certain recommendations?
  • Disclosure Design – Develop and test effective ways to communicate how AI is using data and making decisions.
  • Trust Research – Investigate what factors influence user trust in AI systems and how this affects experience.
  • Control Mechanisms – Design and test interfaces that give users appropriate control over AI behavior.

The Path Forward: Responsible Innovation

As AI becomes more sophisticated and pervasive in UX design, the ethical stakes will only increase. However, this doesn't mean we should abandon AI-powered innovations. Instead, we need to embrace responsible innovation that considers ethical implications from the start rather than as an afterthought.

AI should enhance human decision-making, not replace it. Through continuous UX research focused not just on usability but on broader human impact, we can ensure AI-driven experiences remain ethical, inclusive, user-friendly, and truly beneficial.

The most successful AI implementations will be those that augment human capabilities while respecting human autonomy, providing assistance without creating dependency, offering personalization without compromising privacy, and enhancing experiences without reinforcing biases.

A Product Owner's Responsibility: Leading the Charge for Ethical AI

As UX professionals, we have both the opportunity and responsibility to shape how AI is integrated into the products people use daily. This requires us to:

  • Advocate for ethical considerations in product requirements and design processes
  • Develop new research methods specifically designed to evaluate AI ethics
  • Collaborate across disciplines with data scientists, ethicists, and domain experts
  • Educate stakeholders about the importance of ethical AI design
  • Amplify diverse perspectives in all stages of AI development

By embracing these responsibilities, we can help ensure that AI serves as a force for positive change in user experience, enhancing human capabilities while respecting human values, autonomy, and diversity.

The future of AI in UX isn't just about what's technologically possible; it's about what's ethically responsible. Through thoughtful research, inclusive design practices, and a commitment to human-centered values, we can navigate this complex landscape and create AI experiences that truly benefit everyone.


5 Alternatives to Askable for User Research and Participant Recruitment

When evaluating tools for user testing and participant recruitment, Askable often appears on the shortlist, especially for teams based in Australia and New Zealand. But in 2025, many researchers are finding Askable’s limitations increasingly difficult to work around: restricted study volume, inconsistent participant quality, and new pricing that limits flexibility.

If you’re exploring Askable alternatives that offer more scalability, higher data quality, and global reach, here are five strong options.

1. Optimal: Best Overall Alternative for Scalable, AI-Powered Research 

Optimal is a comprehensive user insights platform supporting the full research lifecycle, from participant recruitment to analysis and reporting. Unlike Askable, which has historically focused on recruitment, Optimal unifies multiple research methods in one platform, including prototype testing, card sorting, tree testing, and AI-assisted interviews.

Why teams switch from Askable to Optimal

1. You can only run one study at a time in Askable

Optimal removes that bottleneck, letting you launch multiple concurrent studies across teams and research methods.

2. Askable’s new pricing limits flexibility 

Optimal offers scalable plans with unlimited seats, so teams only pay for what they need.

3. Askable’s participant quality has dropped

Optimal provides access to more than 100 million verified participants worldwide, with strong fraud-prevention and screening systems that filter out low-effort or AI-assisted responses.



Additional advantages

  • End-to-end research tools in one workspace
  • AI-powered insight generation that tags and summarizes automatically
  • Enterprise-grade reliability with decade-long market trust
  • Dedicated onboarding and SLA-backed support

Best for: Teams seeking an enterprise-ready, scalable research platform that eliminates the operational constraints of Askable.

2. UserTesting: Best for Video-Based Moderated Studies

UserTesting remains one of the most established platforms for moderated and unmoderated usability testing. It excels at gathering video feedback from participants in real time.

Pros:

  • Large participant pool with strong demographic filters
  • Supports moderated sessions and live interviews
  • Integrations with design tools like Figma and Miro


Cons:

  • Higher cost at enterprise scale
  • Less flexible for survey-driven or unmoderated studies compared with Optimal
  • The UI has become increasingly complex and buggy as UserTesting has expanded its platform through acquisitions such as UserZoom and Validately


Best for: Companies prioritizing live, moderated usability sessions.

3. Maze: Best for Product Teams Using Figma Prototypes

Maze offers seamless Figma integration and focuses on automating prototype-testing workflows for product and design teams.

Pros:

  • Excellent Figma and Adobe XD integration
  • Automated reporting
  • Good fit for early-stage design validation

Cons:

  • Limited depth for qualitative research
  • Smaller participant pool

Best for: Design-first teams validating prototypes and navigation flows.

4. Lyssna (formerly UsabilityHub): Best for Fast Design Feedback

Lyssna focuses on quick-turn, unmoderated studies such as preference tests, first-click tests, and five-second tests.

Pros:

  • Fast turnaround
  • Simple, intuitive interface
  • Affordable for smaller teams

Cons:

  • Limited participant targeting options
  • Narrower study types than Askable

Best for: Designers and researchers running lightweight validation tests.

5. Dovetail: Best for Research Repository and Analysis

Dovetail is primarily a qualitative data repository rather than a testing platform. It’s useful for centralizing and analyzing insights from research studies conducted elsewhere.

Pros:

  • Strong tagging and note-taking features
  • Centralized research hub for large teams

Cons:

  • Doesn’t recruit participants or run studies
  • Requires manual uploads from other tools like Askable or UserTesting

Best for: Research teams centralizing insights from multiple sources.

Final Thoughts on Alternatives to Askable

If your goal is simply to recruit local participants, Askable can still meet basic needs. But if you’re looking to scale research in your organization, integrate testing and analysis, and automate insights, Optimal stands out as the best long-term investment. Its blend of global reach, AI-powered analysis, and proven enterprise support makes it the natural next step for growing research teams. You can try Optimal for free here.
