Top Tasks in UX: How to Identify What Really Matters to Your Users
All the way back in 2014, the web passed a pretty significant milestone: 1 billion websites. Of course, fewer than 200 million of these were actually active as of 2019, but there’s an important underlying point. People love to create. If the digital age has taught us anything, it’s that it has never been easier to get information and ideas out into the world.
Understandably, this ability has been used – and often misused. Overloaded, convoluted websites are par for the course, with a common tactic for website renewal being to simply update them with a new coat of paint while ignoring the swirling pile of outdated and poorly organized content below.
So what are you supposed to do when trying to address this problem on your own website or digital project? Well, there’s a fairly robust technique called top tasks management. Here, we’ll go over exactly what it is and how you can use it.
Getting to grips with top tasks
Ideally, all websites would be given regular, comprehensive reviews. Old content could be revisited and analyzed to see whether it’s still serving a purpose. If not, it could be reworked or removed entirely. Based on research, content creators could add new content to address user needs. Of course, this is just the ideal. The reality is that there’s never enough time or resources to manage the growing mass of digital content in this way. The solution is to home in on what your users actually use your website for and tailor the experience accordingly by looking at top tasks.
What are top tasks? They're basically a small set of tasks (typically around 5, but up to 10 is OK too) that are most important to your users. The thinking goes that if you get these core tasks right, your website will be serving the majority of your users and you’ll be more likely to retain them. Ignore top tasks (and any sort of task analysis), and you’ll likely find users leaving your website to find something else that better fits their needs.
The counter to top tasks is tiny tasks. These are everything on a website that’s not all that important for the people actually using it. Commonly, tiny tasks are driven more by the organization’s needs than those of the users. Typically, the more important a task is to a user, the less information there is to support it. On the other hand, the less important a task is to a user, the more information there is. Tiny tasks stem very much from ‘organization first’ thinking, wherein user needs are placed lower on the list of considerations.
According to Gerry McGovern (who penned an excellent write-up of top tasks on A List Apart), the top tasks model says “Focus on what really matters (the top tasks) and defocus on what matters less (the tiny tasks).”
How to identify top tasks
Figuring out your top tasks is an important step in clearing away the fog and identifying what actually matters to your users. We’ll call this stage of the process task discovery, and these are the steps:
Gather: Work with your organization to gather a list of all customer tasks
Refine: Take this list of tasks to a smaller group of stakeholders and work it down into a shortlist
User feedback: Go out to your users and get a representative sample to vote on them
Finalize: Assemble a table of tasks with the one with the highest number of votes at the top and the one with the lowest at the bottom
We’ll go into detail on the above steps, explaining the best way of handling each one. Keep in mind that this process isn’t something you’ll be able to complete in a week – it’s more likely a 6 to 8-week project, depending on the size of your website, how large your user base is and the receptiveness of your organization to help out.
Step 1: Gather – Figure out the long list of tasks
The first part of the task discovery process is to get out into the wider organization and discover what your users are actually trying to accomplish on your website or by using your products. It’s all about getting into the minds of your users – trying to see the world through their eyes, effectively.
If you’re struggling to think of places where you might find customer tasks, here are some of the best sources:
Analytics: Take a deep dive into the analytics of your website or product to find out how people are using them. For websites, you’ll want to look at pages with high traffic and common downloads or interactions. The same applies to products – although the data you have access to will depend on the analytics systems in place.
Customer support teams: Your own internal support teams can be a great source of user tasks. Support teams commonly spend all day speaking to users, and as a result, are able to build up a cohesive understanding of the types of tasks users commonly attempt.
Sales teams: Similarly, sales teams are another good source of task data. Sales teams typically deal with people before they become your users, but a part of their job is to understand the problems they’re trying to solve and how your website or product can help.
Direct customer feedback: Check for surveys your organization has run in the past to see whether any task data already exists.
Social media: Head to Twitter, Facebook and LinkedIn to see what people are talking about with regards to your industry. What tasks are being mentioned?
It’s important to note that you need to cast a wide net when gathering task data. You can’t just rely on analytics data. Why? Well, downloads and page visits only reflect what you have, but not what your users might actually be searching for.
As for search, Gerry McGovern explains why it doesn’t actually tell the entire story: “When we worked on the BBC intranet, we found they had a feature called “Top Searches” on their homepage. The problem was that once they published the top searches list, these terms no longer needed to be searched for, so in time a new list of top searches emerged! Similarly, top tasks tend to get bookmarked, so they don’t show up as much in search. And the better the navigation, the more likely the site search is to reflect tiny tasks.”
At the end of the initial task-gathering stage you should be left with around 300 to 500 tasks. Of course, this can scale up or down depending on the size of the website or product.
Step 2: Refine – Create your shortlist
Now that you’ve got your long list of tasks, it’s time to trim it back until you’ve got a shortlist of 100 or fewer. Keep in mind that working through your long list of tasks is going to take some time, so plan for this process to take at least 4 weeks (but likely more).
It’s important to involve stakeholders from across the organization during the shortlist process. Bring in people from support, sales, product, marketing and leadership areas of the organization. In addition to helping you to create a more concise and usable list, the shortlist process helps your stakeholders to think about areas of overlap and where they may need to work together.
When working your list down to something more usable, try and consolidate and simplify. Stay away from product names as well as internal organization and industry jargon. With your tasks, you essentially want to focus on the underlying thing that a user is trying to do. If you were focusing on tasks for a bank, opt for “Transactions” instead of “Digital mobile payments”. Similarly, bring together tasks where possible. “Customer support”, “Help and support” and “Support center” can all be merged.
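If you’re tracking the long list in a spreadsheet export, even a small script can help merge obvious duplicates before your stakeholder sessions. Here’s a minimal sketch in Python; the variant-to-canonical mapping is invented for illustration, and a real list still needs a human eye:

```python
# Merge near-duplicate task wordings into one canonical task name.
# The mapping below is invented for illustration -- build yours by
# reviewing your own long list with stakeholders.
CANONICAL = {
    "customer support": "Support",
    "help and support": "Support",
    "support center": "Support",
    "digital mobile payments": "Transactions",
}

def consolidate(tasks):
    """Map raw task wordings to canonical names, dropping duplicates."""
    seen, merged = set(), []
    for task in tasks:
        name = CANONICAL.get(task.strip().lower(), task.strip())
        if name.lower() not in seen:
            seen.add(name.lower())
            merged.append(name)
    return merged

print(consolidate(["Customer support", "Help and support", "Support center", "Transactions"]))
# -> ['Support', 'Transactions']
```

A lookup table like this only catches exact variants you’ve already spotted; it’s a first pass to shrink the list, not a substitute for the group refinement session.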
At a very practical level, it also helps to avoid lengthy tasks. Stick to around 7 to 8 words and try to avoid verbs, using them only when there’s really no other option. You’ll find that your task list becomes quite difficult to navigate when tasks begin with “look”, “find” and “get”. Finally, stay away from specific audiences and demographics. You want to keep your tasks universal.
Step 3: User feedback – Get users to vote
With your shortlist created, it’s time to take it to your users. Using a survey tool like Optimal's Surveys, add in each one of your shortlisted tasks and have users rank 5 tasks on a scale from 1 to 5, with 5 being the most important and 1 being the least important.
If you’re thinking that your users will never take the time to work through such a long list, consider that the very length of the list means they’ll seek out the tasks that matter to them and ignore the ones that don’t.
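Mechanically, the voting step produces very simple data: each respondent awards rank points to the five tasks they picked. A hypothetical tally in Python (task names and ballots are made up):

```python
from collections import Counter

def tally(ballots):
    """Sum the rank points (5 = most important, 1 = least) that each
    respondent awarded to the five tasks they chose to vote on."""
    totals = Counter()
    for ballot in ballots:
        for task, rank in ballot.items():
            totals[task] += rank
    # most_common() returns tasks sorted from most to least points
    return totals.most_common()

ballots = [
    {"Check order status": 5, "Download software": 4, "Find documentation": 3},
    {"Download software": 5, "Check order status": 2},
]
print(tally(ballots))
# -> [('Download software', 9), ('Check order status', 7), ('Find documentation', 3)]
```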
A section of the customer survey in Questions.
Step 4: Finalize – Analyze your results
Now for the task analysis side of the project. What you want at the end of the user survey is a league table of your entire shortlist of tasks, ranked by votes. We’re going to use the example from Cisco’s top tasks project, which has been documented over at A List Apart by Gerry McGovern (who ran the project). The entire article is worth a read, as it covers the process of running a top tasks project for a large organization.
Here’s what a league table of the top 20 tasks looks like from Cisco:
A league table of the top 20 tasks from Cisco’s top tasks project. Credit: Gerry McGovern.
Here’s the breakdown of the vote for Cisco’s tasks:
The top 3 tasks received the first 25 percent of the vote
The next 6 tasks received the second 25 percent
The next 14 tasks received the third 25 percent
The remaining 44 tasks received the final 25 percent
While the pattern may seem surprising, it’s actually not unusual. As Gerry explains: “We have done this process over 400 times and the same patterns emerge every single time.”
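That “long neck” pattern is just a cumulative sum over the vote totals, which makes it easy to check on your own data. A small sketch, using invented vote counts:

```python
def cumulative_share(league_table):
    """Given (task, votes) pairs sorted by votes descending, return each
    task with the cumulative percentage of the total vote it reaches."""
    total = sum(votes for _, votes in league_table)
    out, running = [], 0
    for task, votes in league_table:
        running += votes
        out.append((task, round(100 * running / total, 1)))
    return out

table = [("Download software", 40), ("Check order status", 25),
         ("Find documentation", 20), ("Contact sales", 15)]
print(cumulative_share(table))
# -> [('Download software', 40.0), ('Check order status', 65.0),
#     ('Find documentation', 85.0), ('Contact sales', 100.0)]
```

In the toy data above, the single top task already accounts for 40 percent of the vote, which is the shape you’re looking for when the tasks on your own table pile up at the top.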
Final thoughts
Top tasks management is a practice best conducted on a semi-regular basis. The approach benefits organizations in a multitude of ways, bringing different teams and people together to figure out why your users are coming to your website and what they actually need from you.
As we explained at the beginning of this article, top tasks is really about clearing away the fog and understanding what really matters. Instead of spreading yourself thin across a host of tiny tasks, home in on those top tasks that actually matter to your users.
Understanding how to improve your website
The top tasks approach is an effective way of giving you a clear idea of what you should be focusing on when designing or redesigning your website, but this should really just be one aspect of the work you do.
Utilizing a host of other UX research methods can give you a much more comprehensive idea of what’s working and what’s not. With card sorting, for example, you can learn how your users think the content on your website should be arranged. Then, with this data in hand, you can use tree testing to assemble draft structures of your website and test how people navigate their way through it. You can keep iterating on these structures to ensure you’ve created the most user-friendly navigation.
Take a look at our 101 guides to learn more about card sorting and tree testing, as well as the other user research methods you can use to make solid improvements to your website. If you’d rather just start putting methods into practice using user research tools, take our UX platform for a spin for free here.
Most product teams treat UX research as something that happens to them: a necessary evil that slows things down or a luxury they can't afford. The best product teams flip this narrative completely. Their research doesn't interrupt their roadmap; it powers it.
"We need insights by Friday."
"Proper research takes at least three weeks."
This conversation happens in product teams everywhere, creating an eternal tension between the need for speed and the demands of rigor. But what if this debate is based on a false choice?
Research that Moves at the Speed of Product
Product development has accelerated dramatically. Two-week sprints are standard. Daily deployment is common. Feature flags allow instant iterations. In this environment, a four-week research study feels like asking a Formula 1 race car to wait for a horse-drawn carriage.
The pressure is real. Product teams make dozens of decisions per sprint, about features, designs, priorities, and trade-offs. Waiting weeks for research on each decision simply isn't viable. So teams face an impossible choice: make decisions without insights or slow down dramatically.
As a result, most teams choose speed. They make educated guesses, rely on assumptions, and hope for the best. Then they wonder why features flop and users churn.
The False Dichotomy
The framing of "speed vs. rigor" assumes these are opposing forces. But the best research teams have learned they're not mutually exclusive; they require different approaches for different situations.
We think about research in three buckets, each serving a different strategic purpose:
Discovery: You're exploring a space, building foundational knowledge, understanding the landscape before you commit to a direction. This is where you uncover the problems worth solving and identify opportunities that weren't obvious from inside your product bubble.
Fine-Tuning: You have a direction but need to nail the specifics. What exactly should this feature do? How should it work? What's the minimum viable version that still delivers value? This research turns broad opportunities into concrete solutions.
Delivery: You're close to shipping and need to iron out the final details: copy, flows, edge cases. This isn't about validating whether you should build it; it's about making sure you build it right.
Every week, our product, design, research and engineering leads review the roadmap together. We look at what's coming and decide which type of research goes where. The principle is simple: If something's already well-shaped, move fast. If it's risky and hard to reverse, invest in deeper research.
How Fast Can Good Research Be?
The answer is: surprisingly fast, when structured correctly!
For our teams, how deep we go isn't about how much time we have: it's about how much it would hurt to get it wrong. This is a strategic choice that most teams get backwards.
Go deep when the stakes are high: foundational decisions that affect your entire product architecture, things that would be expensive to reverse, moments where you need multiple stakeholders aligned around a shared understanding of the problem.
Move fast when you can afford to be wrong: incremental improvements to existing flows, things you can change easily based on user feedback, places where you want to ship-learn-adjust in tight loops.
Think of it as portfolio management for your research investment. Save your "big research bets" for the decisions that could set you back months, not days. Use lightweight validation for everything else.
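As a toy illustration of that portfolio logic, you could encode the two questions, how much a wrong call hurts and how easily it can be reversed, as a simple decision rule. The tier names and thresholds here are our own invention, not a formal framework:

```python
def research_tier(high_stakes: bool, easily_reversible: bool) -> str:
    """Pick a research depth from decision impact: go deep when a wrong
    call is expensive to undo, move fast when it's cheap to correct."""
    if high_stakes and not easily_reversible:
        return "deep discovery study"
    if high_stakes:
        return "fine-tuning research"
    return "lightweight validation"

# A repositioning decision vs. a copy tweak on an existing flow:
print(research_tier(high_stakes=True, easily_reversible=False))  # deep discovery study
print(research_tier(high_stakes=False, easily_reversible=True))  # lightweight validation
```

The point isn’t the rule itself but that the triage happens explicitly, the way the weekly roadmap review described above makes it explicit for each item.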
And while good research can be fast, speed isn't always the answer. There are situations where deep research needs to run, and it takes time. Save those moments for high-stakes investments like repositioning your entire product, entering new markets, or pivoting your business model. But be cautious of research perfectionism, a particular risk with deep research. Perfection is the enemy of progress. Your research team shouldn’t be asking "Is this research perfect?" but instead "Is this insight sufficient for the decision at hand?"
The research goal should always be appropriate confidence, not perfect certainty.
The Real Trade-Off
The real choice isn’t speed vs. rigor; it's between:
Research that matters (timely, actionable, sufficient confidence)
Research that doesn't (perfect methodology, late arrival, irrelevant to decisions)
The best research teams have learned to be ruthlessly pragmatic. They match research effort to decision impact. They deliver "good enough" insights quickly for small decisions and comprehensive insights thoughtfully for big ones.
Speed and rigor aren't enemies. They're partners in a portfolio approach where each decision gets the right level of research investment. The teams that are winning aren't choosing between speed and rigor; they're choosing the appropriate blend for each situation.
Despite AI being the buzzword in UX right now, there are still lots of concerns about how it’s going to impact research roles. One of the biggest concerns we hear is: is AI just going to replace UX researchers altogether?
The answer, in our opinion, is no. The longer, more interesting answer is that AI is fundamentally transforming what it means to be a UX researcher, and in ways that make the role more strategic, more impactful, and more interesting than ever before.
What AI Actually Does for Research
A 2024 survey by the UX Research Collective found that 68% of UX researchers are concerned about AI's impact on their roles. The fear makes sense; we've all seen how automation has transformed other industries. But what's actually happening is that rather than replacing researchers, AI is eliminating the parts of research that researchers hate most.
According to Gartner's 2024 Market Guide for User Research, AI tools can reduce analysis time by 60-70%, but not by replacing human insight. Instead, they handle:
Pattern Recognition at Scale: AI can process hundreds of user interviews and identify recurring themes in hours. For a human researcher, that same work would take weeks. But those patterns still need human validation, because AI doesn't understand why they matter. That's where researchers will continue to add value and, we would argue, become more important than ever.
Synthesis Acceleration: According to research by the Nielsen Norman Group, AI can generate first-draft insight summaries 10x faster than humans. But these summaries still need researcher oversight to ensure context, accuracy, and actionable insights aren't lost.
Multi-language Analysis: AI can analyze feedback in 50+ languages simultaneously, democratizing global research. But cultural context and nuanced interpretation still require human understanding.
Always-On Insights: Traditional research is limited by human availability. Tools like AI interviewers can run 24/7 while your team sleeps, allowing you to get continuous, high-quality user insights.
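As a crude stand-in for what pattern recognition at scale looks like in practice, here’s a keyword-based theme counter in Python. Real AI tooling uses language models rather than keyword lists; this only mimics the shape of the output a researcher would then validate and contextualize:

```python
from collections import Counter

# Hypothetical theme keywords -- a real AI pipeline would use a language
# model rather than a keyword list; this only mimics the output shape.
THEMES = {
    "navigation": ["menu", "find", "lost", "search"],
    "pricing": ["price", "cost", "expensive", "plan"],
}

def theme_counts(transcripts):
    """Count how many transcripts touch each theme at least once."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return counts

print(theme_counts(["I couldn't find the menu", "That plan is expensive"]))
```

Even this toy version shows where the human comes in: the counts say nothing about why navigation complaints cluster, or which ones matter to the business.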
AI is Elevating the Role of Researchers
We think that what AI is actually doing is making UX researchers more important, not less. By automating the less sophisticated aspects of research, AI is pushing researchers toward the strategic work that only humans can do.
From Operators to Strategists: McKinsey's 2024 research shows that teams using AI research tools spend 45% more time on strategic planning and only 20% on execution, compared to 30% strategy and 60% execution for traditional teams.
From Reporters to Storytellers: With AI handling data processing, researchers can focus on crafting compelling narratives.
From Analysts to Advisors: When freed from manual analysis, researchers become embedded strategic partners.
Human + AI Collaboration
The most effective research teams aren't choosing between human or AI; they're creating collaborative workflows that use AI to augment researchers' roles, not replace them:
AI-Powered Data Collection: Automated transcription, sentiment analysis, and preliminary coding happen in real-time during user sessions.
Human-Led Interpretation: Researchers review AI-generated insights, add context, challenge assumptions, and identify what AI might have missed.
Collaborative Synthesis: AI suggests patterns and themes; researchers validate, refine, and connect to business context.
Human Storytelling: Researchers craft narratives, implications, and recommendations that AI cannot generate.
Is it likely that more and more research tasks will become automated with AI? Absolutely. Basic transcription, preliminary coding, and simple pattern recognition are already AI’s bread and butter. But research has never been about these tasks; it's about understanding users and driving better decisions, and that should always be left to humans.
The researchers thriving in 2025 and beyond aren't fighting AI, they're embracing it. They're using AI to handle the tedious 40% of their job so they can focus on the strategic 60% that creates real business value. You have a choice. You can choose to adopt AI as a tool to elevate your role, or you can view it as a threat and get left behind. Our customers tell us that the researchers choosing elevation are finding their roles more strategic, more impactful, and more essential to product success than ever before.
AI isn't replacing UX researchers. It's freeing them to do what they've always done best, understand humans and help build better products. And in a world drowning in data but starving for insight, that human expertise has never been more valuable.
Last night, Optimal brought together an incredible community of product leaders and innovators for "The Automation Breakthrough: Workflows for the AI Era" at Q-Branch in Austin, Texas. This two-hour in-person event featured expert perspectives on how AI and automation are transforming the way we work, create, and lead.
The evening kicked off with a lightning talk on "Designing for Interfaces" by Cindy Brummer, Founder of Standard Beagle Studio, followed by a dynamic panel discussion titled "The Automation Breakthrough" with industry leaders including Joe Meersman (Managing Partner, Gyroscope AI), Carmen Broomes (Head of UX, Handshake), Kasey Randall (Product Design Lead, Posh AI), and Prateek Khare (Head of Product, Amazon). We also had a fireside chat with our CEO, Alex Burke, and Stu Smith, Head of Design at Atlassian.
Here are the key themes and insights that emerged from these conversations:
Trust & Transparency: The Foundation of AI Adoption
Cindy emphasized that trust and transparency aren't just nice-to-haves in the AI era, they're essential. As AI tools become more integrated into our workflows, building systems that users can understand and rely on becomes paramount. This theme set the tone for the entire event, reminding us that technological advancement must go hand-in-hand with ethical considerations.
Automation Liberates Us from Grunt Work
One of the most resonant themes was how AI fundamentally changes what we spend our time on. As Carmen noted, AI reduces the grunt work and tasks we don't want to do, freeing us to focus on what matters most. This isn't about replacing human workers, it's about eliminating the tedious, repetitive tasks that drain our energy and creativity.
Enabling Creativity and Higher-Quality Decision-Making
When automation handles the mundane, something remarkable happens: we gain space for deeper thinking and creativity. The panelists shared powerful examples of this transformation:
Carmen described how AI and workflows help teams get to insights and execution on a much faster scale, rather than drowning in comments and documentation. Prateek encouraged the audience to use automation to get creative about their work, while Kasey shared how AI and automation have helped him develop different approaches to coaching, mentorship, and problem-solving, ultimately helping him grow as a leader.
The decision-making benefits were particularly striking. Prateek explained how AI and automation have helped him be more thoughtful about decisions and make higher-quality choices, while Kasey echoed that these tools have helped him be more creative and deliberate in his approach.
Democratizing Product Development
Perhaps the most exciting shift discussed was how AI is leveling the playing field across organizations. Carmen emphasized the importance of anyone, regardless of their role, being able to get close to their customers. This democratization means that everyone can get involved in UX, think through user needs, and consider the best experience.
The panel explored how roles are blurring in productive ways. Kasey noted that "we're all becoming product builders" and that product managers are becoming more central to conversations. Prateek predicted that teams are going to get smaller and achieve more with less as these tools become more accessible.
Automation also plays a crucial role in iteration, helping teams incorporate customer feedback more effectively, according to Prateek.
Practical Advice for Navigating the AI Era
The panelists didn't just share lofty visions, they offered concrete guidance for professionals navigating this transformation:
Stay perpetually curious. Prateek warned that no acquired knowledge will stay with you for long, so you need to be ready to learn anything at any time.
Embrace experimentation. "Allow your process to misbehave," Prateek advised, encouraging attendees to break from rigid workflows and explore new approaches.
Overcome fear. Carmen urged the audience not to be afraid of bringing in new tools or worrying that AI will take their jobs. The technology is here to augment, not replace.
Just start. Kasey's advice was refreshingly simple: "Just start and do it again." Whether you're experimenting with AI tools or trying "vibe coding," the key is to begin and iterate.
The energy in the room at Q-Branch reflected a community that's not just adapting to change but actively shaping it. The automation breakthrough isn't just about new tools, it's about reimagining how we work, who gets to participate in product development, and what becomes possible when we free ourselves from repetitive tasks.
As we continue to navigate the AI era, events like this remind us that the most valuable insights come from bringing diverse perspectives together. The conversation doesn't end here, it's just beginning.
Interested in joining future Optimal community events? Stay tuned for upcoming gatherings where we'll continue exploring the intersection of design, product, and emerging technologies.