Optimal Blog
Articles and Podcasts on Customer Service, AI and Automation, Product, and more

When people come to us, we often hear the same story. The platforms they’ve used are clunky. Outdated. Confusing. Like navigating a maze of tabs, jargon, and complexity. Just to run a simple study.
That’s not what user testing should feel like.
At Optimal, we believe finding insights should feel energizing, not exhausting. So we’ve been working hard to make our platform easier than ever for anyone – no matter their experience level – to run meaningful research, fast.
We also know that the industry is changing. Teams want to do more with less, and platforms need to be able to empower more roles to run their own tests and find answers fast.
A pioneer in UX research, Optimal has always led the way. Today, Optimal is more powerful, intuitive, and impactful than ever, built to meet the needs of today’s teams and future-proofed for what’s next.
Our Vision is Built on Three Pillars
Access for All
We believe research should be accessible. Whether you’re a seasoned researcher or just getting started, you should be able to confidently run studies and uncover the “why” behind user behavior without facing a steep learning curve. All our latest plans include unlimited users, giving your whole team the ability to run research and find insights.
Speed to Insight
Time and budget shouldn't stand in your way. With smart automation and AI-powered insights, our tools help you go from question to clarity in days, not weeks.
Communicate with Impact
Great insights are only powerful if they’re shared. We help you translate data into clear, actionable stories that influence the right decisions across your team.
What’s New
We’re entering a new era at Optimal, one that’s even faster, smoother, and more enjoyable to use.
Here’s what’s new:
- A refreshed, modern homepage that’s clean, focused, and easier to navigate
- Interactive demos and videos to help you learn how to get set up quickly, recruit, and gather insights faster
- One-click study creation so you can get started instantly
- Streamlined navigation with fewer tabs and clearer pathways

This year, we also launched our new study flow to reduce friction with study creation. It helps you easily visualize and understand the participant experience, from the welcome message to the final thank-you screen, every step of the way. Learn more about the Study Flow.
Our refreshed design reduces mental load, minimizes unnecessary scrolling, and helps you move from setup to insight faster than ever before.
Haven’t Looked at Optimal in a While?
We’ve gone well beyond a new homepage and design refresh. Now’s the perfect time to take another look. We’ve made big changes to help you get up and running quickly and get more time uncovering the insights that matter.
Using Optimal already? Log in to see what’s new.
New to Optimal? Start a free trial and experience it for yourself.
This is just the beginning. We can’t wait to bring you even more. Welcome to a simpler, faster, more delightful way to find insights.

What is ResearchOps?
Back in early 2018, user researchers from around the globe got together to try and define an emerging practice – ResearchOps. The project eventually grew into a significant research effort called #WhatisResearchOps, involving 34 workshops, a survey that garnered over 300 responses and reams of analysis.
The goal was quite simple: generate conversation around the work that researchers do in order to support them as research grows, with an eye toward standardizing common research practices. It’s an important undertaking: a report carried out in 2017 found that 81 percent of executives agreed that user research made their organization more efficient. Further, 86 percent believed user research improved the quality of their products.
It’s clear that many organizations are starting to understand the value that user researchers bring to the table; it’s now up to the researchers to operationalize their practice.
But for the uninitiated, what exactly is ResearchOps? And why should you care?
What is ResearchOps?
To start off, there’s not a lot of literature about ResearchOps as of early 2020. Right now, it’s a practice that can certainly be classed as ‘emerging’. This is partly why we’re writing about it. We want to add our own kindling to the ResearchOps conversation fire.
ResearchOps as a practice has 2 main goals:
- Socialize research: Make it easier for the people in an organization to access the insights generated by user research, and allow them to actively take part in research activities.
- Operationalize research: Standardize templates, processes and plans to reduce research costs and the time required to get research projects off the ground.
Or, as Vidhya Sriram puts it, ResearchOps “democratizes customer insights, takes down barriers to understand customers, and makes everyone take responsibility for creating remarkable customer experiences.”
ResearchOps certainly hasn’t achieved anything close to ‘mainstream’ understanding yet, so in order to give ResearchOps the best chance of succeeding, it’s quite helpful to look at another ‘Ops’ practice – DesignOps.
As 2 ‘operations’ focused initiatives, DesignOps and ResearchOps share a lot of the same DNA. According to Nielsen Norman Group’s DesignOps 101 article, DesignOps “refers to the orchestration and optimization of people, processes, and craft in order to amplify design’s value and impact at scale”. Author Kate Kaplan goes on to flesh out this description, noting that it’s a term for addressing such issues as growing or evolving design teams, onboarding people with the right design skills, creating efficient workflows and improving design outputs. Sound familiar?
The world of DesignOps is a veritable smorgasbord of useful learnings for researchers looking to grow the practice of ResearchOps. One particularly useful element is the idea of selecting only the components of DesignOps that are relevant for the organization at that point in time. This is quite important. DesignOps is a broad topic, and there’s little sense in every organization trying to take on every aspect of it. The takeaway: DesignOps (and ResearchOps) should look very different depending on the organization.
Kate Kaplan also touches on another useful point in her Nielsen Norman Group article; the idea of the DesignOps menu:

This menu essentially outlines all of the elements that organizations could focus on when adopting practices to support designers. The DesignOps Menu is a useful framework for those trying to create a similar list of elements for ResearchOps.
Why does ResearchOps matter now?
It’s always been difficult to definitively say “this is the state of user research”. While some organizations intimately understand the value that a focus on customer centricity brings (and have teams devoted to the cause), others are years behind. In these lagging organizations, the researchers (or the people doing research), have to fight to prove the value of their work. This is one of the main reasons why ResearchOps as an initiative matters so much right now.
The other driver for ResearchOps is that the way researchers work together and with other disciplines is changing fast. In general, a growing awareness of the importance of research is pushing the field together with data science, sales, customer support and marketing. All this to say, researchers are having to spend more and more time both proving the value of their work and operating at a more strategic level. This isn’t likely to slow, either. The coming years will see researchers spending less time doing actual research. With this in mind, ResearchOps becomes all the more valuable. By standardizing common research practices and working out ownership, the research itself doesn’t have to suffer.
What are the different components of ResearchOps?
As we touched on earlier, ResearchOps – like DesignOps – is quite a broad topic. This is necessary. As most practicing researchers know, there are a number of elements that go into ensuring thorough, consistent research.
A useful analogy for ResearchOps is a pizza. There are many different components (toppings) that can go on the pizza, which is reflected in how research exists in different organizations. The real point here is that no 2 research operations should look the same. Research at Facebook will look markedly different to research at a small local government agency in Europe.
We looked at the DesignOps Menu earlier as a model for ResearchOps, but there’s another, more specific map created as part of the #WhatisResearchOps project.

Like the DesignOps Menu, this map functions as a framework for what ResearchOps is. It’s the output of a series of workshops run by researchers across the globe as well as a large survey.
Who practices ResearchOps?
By now you should have a clear idea of the scale and scope of ResearchOps, given that we’ve covered the various components and why the practice matters so much. There are still 2 important topics left to cover, however: Who practices ResearchOps and (perhaps most interestingly) where it’s heading.
As the saying goes, “everyone’s a researcher”, and this certainly holds true when talking about ResearchOps, but here are some of the more specific roles that should be responsible for executing ResearchOps components.
- User researchers – Self-explanatory. The key drivers of research standardization and socialization.
- UX designers – Customer advocates to the core, UX designers follow user researchers quite closely when it comes to execution.
- Designers – Add to that, designers in general. As designers increasingly become involved in the research activities of their organizations, expect to see them having a growing stake in ResearchOps activities.
- Customer experience (CX) and marketing – Though they’re often not the foremost consideration when it comes to research conversations, marketing and CX certainly have a stake in research operations.
There’s also another approach that is worth considering: Research as a way of thinking. This can essentially be taken up by anyone, and it boils down to understanding the importance of a healthy research function, with processes, systems and tools in place to carry out research.
What’s next for ResearchOps?
As Kate Kaplan said in DesignOps 101, “DesignOps is the glue that holds the design organization together, and the bridge that enables collaboration among cross-disciplinary team members”. The same is true of ResearchOps – and it’s only going to become more important.
We’re going to echo the same call made by numerous other people helping to grow ResearchOps and say that if you’ve got some learnings to share, share them back with the community! We’re also always looking to share great UX and research content, so get in touch with us if you’ve got something to share on our blog.
Online card sorting: The comprehensive guide
When it comes to designing and testing in the world of information architecture, it’s hard to beat card sorting. As a usability testing method, card sorting is easy to set up, simple to recruit for and can supply you with a range of useful insights. But there’s a long-standing debate in the world of card sorting, and that’s whether it’s better to run card sorts in person (moderated) or remotely over the internet (unmoderated).
This article should give you some insight into the world of online card sorting. We've included an analysis of the benefits (and the downsides) as well as why people use this approach. Let's take a look!
How an online card sort works
Running a card sort remotely has quickly become a popular option just because of how time-intensive in-person card sorting is. Instead of needing to bring your participants in for dedicated card sorting sessions, you can simply set up your card sort using an online tool (like our very own OptimalSort) and then wait for the results to roll in.
So what’s involved in a typical online card sort? At a very high level, here’s what’s required. We’re going to assume you’re already set up with an online card sorting tool at this point.
- Define the cards: Depending on what you’re testing, add the items (cards) to your study. If you were testing the navigation menu of a hotel website, your cards might be things like “Home”, “Book a room”, “Our facilities” and “Contact us”.
- Work out whether to run a closed or open sort: Determine whether you’ll set the groups for participants to sort cards into (closed) or leave it up to them (open). You may also opt for a mix, where you create some categories but leave the option open for participants to create their own.
- Recruit your participants: Whether using a participant recruitment service or by recruiting through your own channels, send out invites to your online card sort.
- Wait for the data: Once you’ve sent out your invites, all that’s left to do is wait for the data to come in and then analyze the results.
That’s online card sorting in a nutshell – not entirely different from running a card sort in person. If you’re interested in learning about how to interpret your card sorting results, we’ve put together this article on open and hybrid card sorts and this one on closed card sorts.
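To make the moving parts concrete, the setup steps above can be sketched as a small data structure. This is purely illustrative: the `CardSortStudy` class and its fields are our own invention, not OptimalSort’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CardSortStudy:
    """Hypothetical study definition mirroring the setup steps above.
    Real tools like OptimalSort configure this through their own UI,
    not a class like this."""
    cards: list
    categories: list = field(default_factory=list)
    allow_new_categories: bool = False

    @property
    def sort_type(self) -> str:
        # No predefined categories: participants create their own (open).
        if not self.categories:
            return "open"
        # Predefined categories, optionally extendable by participants (hybrid),
        # otherwise fixed (closed).
        return "hybrid" if self.allow_new_categories else "closed"

# The hotel-website example from step 1, run as an open sort.
study = CardSortStudy(cards=["Home", "Book a room", "Our facilities", "Contact us"])
```

The open/closed/hybrid decision from step 2 falls out of which fields you set: leave `categories` empty for an open sort, fill it in for a closed one, and set `allow_new_categories` for the mix of both.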
Why is online card sorting so popular?
Online card sorting has a few distinct advantages over in-person card sorting that help to make it a popular option among information architects and user researchers. There are downsides too (as there are with any remote usability testing option), but we’ll get to those in a moment.
Where remote (unmoderated) card sorting excels:
- Time savings: Online card sorting is essentially ‘set and forget’, meaning you can set up the study, send out invites to your participants and then sit back and wait for the results to come in. In-person card sorting requires you to moderate each session and collate the data at the end.
- Easier for participants: It’s not often that researchers are on the other side of the table, but it’s important to consider the participant’s viewpoint. It’s much easier for someone to spend 15 minutes completing your online card sort in their own time instead of trekking across town to your office for an exercise that could take well over an hour.
- Cheaper: In a similar vein, online card sorting is much cheaper than in-person testing. While it’s true that you may still need to recruit participants, you won’t need to reimburse people for travel expenses.
- Analytics: Last but certainly not least, online card sorting tools (like OptimalSort) can take much of the analytical burden off you by transforming your data into actionable insights. Other tools will differ, but OptimalSort can generate a similarity matrix, dendrograms and a participant-centric analysis using your study data.
Where in-person (moderated) card sorting excels:
- Qualitative insights: For all intents and purposes, online card sorting is the most effective way to run a card sort. It’s cheaper, faster and easier for you. But, there’s one area where in-person card sorting excels, and that’s qualitative feedback. When you’re sitting directly across the table from your participant you’re far more likely to learn about the why as well as the what. You can ask participants directly why they grouped certain cards together.
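To give a flavor of the analytics mentioned earlier, here’s a minimal sketch of how a similarity matrix can be computed from raw card-sort data. The function and the data shape are our own illustration, not how OptimalSort works internally: each participant’s sort is a mapping from group name to the cards placed in that group.

```python
from itertools import combinations

def similarity_matrix(sorts):
    """For each pair of cards, the percentage of participants who
    placed both cards in the same group. `sorts` is a list with one
    card sort per participant."""
    # Collect the full card set from the first participant's sort.
    cards = sorted(c for group in sorts[0].values() for c in group)
    pair_counts = {pair: 0 for pair in combinations(cards, 2)}
    for sort in sorts:
        for group in sort.values():
            # Every pair within the same group counts as "sorted together".
            for pair in combinations(sorted(group), 2):
                pair_counts[pair] += 1
    n = len(sorts)
    return {pair: round(100 * count / n) for pair, count in pair_counts.items()}

# Two participants sorting the hotel-website cards from earlier.
sorts = [
    {"Stay": ["Book a room", "Our facilities"], "Info": ["Home", "Contact us"]},
    {"Booking": ["Book a room"], "About": ["Home", "Our facilities", "Contact us"]},
]
matrix = similarity_matrix(sorts)
```

Here “Contact us” and “Home” were grouped together by both participants (100 percent), while “Book a room” and “Our facilities” were grouped together by only one (50 percent). Tools build on exactly this kind of pairwise data to draw dendrograms and run participant-centric analysis.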
Online card sorting: Participant numbers
So that’s online card sorting in a nutshell, as well as some of the reasons why you should actually use this method. But what about participant numbers? Well, there’s no one right answer, but the general rule is that you need more people than you’d typically bring in for a usability test.
This all comes down to the fact that card sorting is what’s known as a generative method, whereas usability testing is an evaluation method. Here’s a little breakdown of what we mean by these terms:
Generative method: There’s no design, and you need to get a sense of how people think about the problem you’re trying to solve. For example, how people would arrange the items that need to go into your website’s navigation. As Nielsen Norman Group explains: “There is great variability in different people's mental models and in the vocabulary they use to describe the same concepts. We must collect data from a fair number of users before we can achieve a stable picture of the users' preferred structure and determine how to accommodate differences among users”.
Evaluation method: There’s already a design, and you basically need to work out whether it’s a good fit for your users. Any major problems are likely to crop up even after testing 5 or so users. For example, you have a wireframe of your website and need to identify any major usability issues.
Basically, because you’ll typically be using card sorting to generate a new design or structure from nothing, you need to sample a larger number of people. If you were testing an existing website structure, you could get by with a smaller group.
Where to from here?
Following on from our discussion of generative versus evaluation methods, you’ve really got a choice of 2 paths from here if you’re in the midst of a project. For those developing new structures, the best course of action is likely to be a card sort. However, if you’ve got an existing structure that you need to test in order to identify usability problems and possible areas of improvement, you’re likely best served by a tree test. We’ve got some useful information on getting started with a tree test right here on the blog.

How to sell human-centered design
Picture this scenario: You're in your local coffee shop and hear a new song. You want to listen to it when you get back to the office. How do you obtain it? If you’re one of the 232 million Spotify users, you’ll simply open the app, search for the song and add it to your playlist. Within seconds, you’ll have the song ready to play whenever and wherever you want.
This new norm of music streaming wasn’t always the status quo. In the early days of the internet, the process of finding music was easy but nowhere nearly as easy as it is now. You’d often still be able to find any song you wanted, but you would need to purchase it individually or as part of an album, download it to your computer and then sync it across to a portable music player like the iPod.
Spotify is a prime example of successful human-centered design. The music service directly addresses the primary pain points with accessing music and building music collections by allowing users to pay a monthly fee and immediately gain access to a significant catalog of the world’s music.
It’s also far from the only example. Take HelloFresh, for example. Founded by Dominik Richter, Thomas Griesel and Jessica Nilsson in 2011, this company delivers a box of ingredients and recipes to your door each week, meaning there’s no need for grocery shopping or thinking about what to cook. It’s a service that addresses a fairly common problem: People struggle to find the time to go out and buy groceries and also create tasty, healthy meals, so the founders addressed both issues.
Both HelloFresh and Spotify are solutions to real user problems. They weren’t born as a result of people sitting in a black box and trying to come up with new products or services. This is the core of human-centered design – identifying something that people have trouble with and then building an appropriate answer.
The origins of human-centered design
But, someone is likely to ask, what’s even the point of human-centered design? Shouldn’t all products and services be designed for the people using them? Well, yes.
Interestingly, while terms like human-centered design and design thinking have become much more popular in recent years, they’re not entirely new methods of design. Designers have been doing this same work for decades, just under a different name: design. Just take one look at some of the products put together by famed industrial designer Dieter Rams (who famously influenced ex-Apple design lead Jony Ive). You can’t look at the product below and say it was designed without the end user in mind.

Why did human-centered design even gain traction as a term? David Howell (a UX designer from Australia) explains that designers often follow Parkinson’s Law, where “work expands so as to fill the time available for its completion”. He notes that designers could always do more (more user research, more ideation, more testing, etc), and that by wrapping everything under a single umbrella (like human-centered design) designers can “speak to their counterparts in business as a process and elevate their standing, getting the coveted seat at the table”.
Human-centered design, for all intents and purposes, is really just a way for designers to package up the important principles intrinsic to good design and sell them to those who may not be sympathetic to exactly why they’re important. At a broader level, the same thinking can be applied to UX as a whole. Good user experience should naturally fall under design, but occasionally a different way of looking at something is needed to drive progress.
So human-centered design can really just be thought of as a vehicle to sell the importance of a user-first approach to organizations – that’s useful, but how exactly are you supposed to start? How do you sell something that’s both easily understandable but at the same time quite nebulous? Well, you sell it in the same way you’d sell user research.
How to sell human-centered design
Focus on the product
In the simplest terms, a product designed and built based on user input is going to perform better than one that was assembled based on internal organizational thinking.
When utilized in the right way, taking a human-centered approach to product design leads to products that resonate much more effectively with people. We looked at Spotify at the beginning of this article for a company that’s continuously adopted this practice, but there are countless others. AirBnB, Uber, Pinterest and more all jump to mind. Google and LinkedIn, meanwhile, serve as good examples of the ‘old guard’ that are starting to invest more in the user experience.
Understand the cost-benefit
In 2013, Microsoft was set to unveil the latest version of its Xbox video game console. Up until that point, the company had found significant success in the videogame market. Past versions of the Xbox console had largely performed very well both critically and commercially. With the newest version, however, the company quickly ran into problems.
The new ‘Xbox One’ was announced with several features that attracted scorn from both the target market and the gaming press. The console would, for example, tie both physical and digital purchases to users’ accounts, meaning they wouldn’t be able to sell them on (a popular practice). The console would also need to remain connected to the internet to check these game licenses, likely leading to significant problems for those without reliable internet access. Lastly, Microsoft also stated that users would have to keep an included camera system plugged in at all times otherwise the console wouldn’t function. This led to privacy advocates arguing that the camera system’s data could be used for things like targeted advertising and user surveillance.
Needless to say, after seeing the response from the press and the console’s target market, Microsoft backtracked and eventually released the Xbox One without the always-on requirement, game licensing system or camera connection requirement.
Think of the costs Microsoft likely incurred having to roll back every one of these decisions so late into the product’s development. If you’re able to identify an issue in the research or prototype phase, it’s going to be significantly cheaper to fix here as opposed to 3 years into development with a release on the horizon.
Wrap-up
As the Spotify founders discovered back in 2008, taking a human-centered approach to product design can lead to revolutionary products and experiences. It’s not surprising. After all, how can you be expected to build something that people want to use without understanding said people?

Understanding top tasks
All the way back in 2014, the web passed a pretty significant milestone: 1 billion websites. Of course, fewer than 200 million of these are actually active as of 2019, but there’s an important underlying point. People love to create. If the current digital age that we live in has taught us anything, it’s that it’s never been as easy to get information and ideas out into the world.
Understandably, this ability has been used – and often misused. Overloaded, convoluted websites are par for the course, with a common tactic for website renewal being to simply update them with a new coat of paint while ignoring the swirling pile of outdated and poorly organized content below.
So what are you supposed to do when trying to address this problem on your own website or digital project? Well, there’s a fairly robust technique called top tasks management. Here, we’ll go over exactly what it is and how you can use it.
Getting to grips with top tasks
Ideally, all websites would be given regular, comprehensive reviews. Old content could be revisited and analyzed to see whether it’s still actually serving a purpose. If not, it could be reworked or removed entirely. Based on research, content creators could add new content to address user needs. Of course, this is just the ideal. The reality is that there’s never really enough time or resources to manage the growing mass of digital content in this way. The solution is to home in on what your users actually use your website for and tailor the experience accordingly by looking at top tasks.
What are top tasks? They're basically a small set of tasks (typically around 5, but up to 10 is OK too) that are most important to your users. The thinking goes that if you get these core tasks right, your website will be serving the majority of your users and you’ll be more likely to retain them. Ignore top tasks (and any sort of task analysis), and you’ll likely find users leaving your website to find something else that better fits their needs.
The counter to top tasks is tiny tasks. These are everything on a website that’s not all that important for the people actually using it. Commonly, tiny tasks are driven more by the organization’s needs than those of the users. Typically, the more important a task is to a user, the less information there is to support it. On the other hand, the less important a task is to a user, the more information there is. Tiny tasks stem very much from ‘organization first’ thinking, wherein user needs are placed lower on the list of considerations.
According to Gerry McGovern (who penned an excellent write-up of top tasks on A List Apart), the top tasks model says “Focus on what really matters (the top tasks) and defocus on what matters less (the tiny tasks).”
How to identify top tasks
Figuring out your top tasks is an important step in clearing away the fog and identifying what actually matters to your users. We’ll call this stage of the process task discovery, and these are the steps:
- Gather: Work with your organization to gather a list of all customer tasks
- Refine: Take this list of tasks to a smaller group of stakeholders and work it down into a shortlist
- User feedback: Go out to your users and get a representative sample to vote on them
- Finalize: Assemble a table of tasks, ordered from the one with the highest number of votes at the top to the one with the lowest at the bottom
We’ll go into detail on the above steps, explaining the best way of handling each one. Keep in mind that this process isn’t something you’ll be able to complete in a week – it’s more likely a 6 to 8-week project, depending on the size of your website, how large your user base is and the receptiveness of your organization to help out.
Step 1: Gather – Figure out the long list of tasks
The first part of the task process is to get out into the wider organization and discover what your users are actually trying to accomplish on your website or by using your products. It’s all about getting into the minds of your users – trying to see the world through their eyes, effectively.
If you’re struggling to think of places where you might find customer tasks, here are some of the best sources:
- Analytics: Take a deep dive into the analytics of your website or product to find out how people are using them. For websites, you’ll want to look at pages with high traffic and common downloads or interactions. The same applies to products – although the data you have access to will depend on the analytics systems in place.
- Customer support teams: Your own internal support teams can be a great source of user tasks. Support teams commonly spend all day speaking to users, and as a result, are able to build up a cohesive understanding of the types of tasks users commonly attempt.
- Sales teams: Similarly, sales teams are another good source of task data. Sales teams typically deal with people before they become your users, but a part of their job is to understand the problems they’re trying to solve and how your website or product can help.
- Direct customer feedback: Check for surveys your organization has run in the past to see whether any task data already exists.
- Social media: Head to Twitter, Facebook and LinkedIn to see what people are talking about with regards to your industry. What tasks are being mentioned?
It’s important to note that you need to cast a wide net when gathering task data. You can’t just rely on analytics data. Why? Well, downloads and page visits only reflect what you have, but not what your users might actually be searching for.
As for search, Gerry McGovern explains why it doesn’t actually tell the entire story: “When we worked on the BBC intranet, we found they had a feature called “Top Searches” on their homepage. The problem was that once they published the top searches list, these terms no longer needed to be searched for, so in time a new list of top searches emerged! Similarly, top tasks tend to get bookmarked, so they don’t show up as much in search. And the better the navigation, the more likely the site search is to reflect tiny tasks.”
At the end of the initial task-gathering stage you should be left with around 300 to 500 tasks. Of course, this can scale up or down depending on the size of the website or product.
Step 2: Refine – Create your shortlist
Now that you’ve got your long list of tasks, it’s time to trim them back until you’ve got a shortlist of 100 or less. Keep in mind that working through your long list of tasks is going to take some time, so plan for this process to take at least 4 weeks (but likely more).
It’s important to involve stakeholders from across the organization during the shortlist process. Bring in people from support, sales, product, marketing and leadership areas of the organization. In addition to helping you to create a more concise and usable list, the shortlist process helps your stakeholders to think about areas of overlap and where they may need to work together.
When working your list down to something more usable, try and consolidate and simplify. Stay away from product names as well as internal organization and industry jargon. With your tasks, you essentially want to focus on the underlying thing that a user is trying to do. If you were focusing on tasks for a bank, opt for “Transactions” instead of “Digital mobile payments”. Similarly, bring together tasks where possible. “Customer support”, “Help and support” and “Support center” can all be merged.
At a very practical level, it also helps to avoid lengthy tasks. Stick to around 7 to 8 words, and try to avoid verbs, using them only when there’s really no other option. You’ll find that your task list becomes quite difficult to navigate when tasks begin with “look”, “find” and “get”. Finally, stay away from specific audiences and demographics. You want to keep your tasks universal.
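The wording guidelines above lend themselves to a quick automated check when you’re trimming a long shortlist. This helper is hypothetical: the 8-word limit and the verb list simply encode the rules of thumb from this section.

```python
# Generic verbs that tend to make task lists harder to scan.
BANNED_STARTS = {"look", "find", "get"}

def check_task(task, max_words=8):
    """Flag a shortlist task that breaks the wording guidelines:
    returns a list of problems (empty means the task looks fine)."""
    words = task.lower().split()
    problems = []
    if len(words) > max_words:
        problems.append("too long")
    if words and words[0] in BANNED_STARTS:
        problems.append("starts with a generic verb")
    return problems
```

Running it over the examples from this section, a compact task like “Transactions” passes cleanly, while something like “Find out how to get support and help with billing” gets flagged on both counts.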
Step 3: User feedback – Get users to vote
With your shortlist created, it’s time to take it to your users. Using a survey tool like Questions, add in each one of your shortlisted tasks and have users rank 5 tasks on a scale from 1 to 5, with 5 being the most important and 1 being the least important.
If you’re thinking that your users will never take the time to work through such a long list, consider that the very length of the list means they’ll seek out the tasks that matter to them and ignore the ones that don’t.

Step 4: Finalize – Analyze your results
Now for the task analysis side of the project. What you want at the end of the user survey is a league table of your entire shortlist of tasks. We’re going to use the example from Cisco’s top tasks project, which Gerry McGovern (who actually ran the project) has documented over at A List Apart. The entire article is worth a read, as it covers the process of running a top tasks project for a large organization.
Here’s what a league table of the top 20 tasks looks like from Cisco:

Here’s the breakdown of the vote for Cisco’s tasks:
- 3 tasks got the first 25 percent of the vote
- 6 tasks got 25-50 percent of the vote
- 14 tasks got 50-75 percent of the vote
- 44 tasks got 75-100 percent of the vote
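To make that kind of breakdown concrete, here’s a minimal Python sketch of how you might turn raw vote points into a league table. The task names and point totals below are made up for illustration (they are not Cisco’s actual data), but the mechanics are the same: sum each task’s points, rank them, and track each task’s share of the vote along with the running cumulative share.

```python
def league_table(scores):
    """Rank tasks by total points and compute each task's share of the vote,
    plus the running cumulative share (which reveals the top-heavy pattern)."""
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    table = []
    running = 0
    for task, points in ranked:
        running += points
        table.append((task, points, points / total, running / total))
    return table

# Hypothetical point totals for a small banking shortlist
scores = {
    "Transactions": 950,
    "Open an account": 610,
    "Support": 400,
    "Rates and fees": 250,
    "Branch locations": 180,
    "Careers": 60,
}

for task, points, share, cumulative in league_table(scores):
    print(f"{task:<18} {points:>4}  {share:6.1%}  cumulative {cumulative:6.1%}")
```

Even in this tiny example, a couple of tasks at the top capture a large slice of the total vote, which is exactly the pattern the Cisco numbers show at scale.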
While the pattern may seem surprising, it’s actually not unusual. As Gerry explains: “We have done this process over 400 times and the same patterns emerge every single time.”
Final thoughts
Top tasks management is a practice best conducted on a regular basis. The approach benefits organizations in many ways, bringing different teams and people together to figure out how best to address why your users come to your website and what they actually need from you.
As we explained at the beginning of this article, top tasks is really about clearing away the fog and focusing on what really matters. Instead of spreading yourself thin across a host of tiny tasks, home in on those top tasks that actually matter to your users.
Understanding how to improve your website
The top tasks approach is an effective way of giving you a clear idea of what you should be focusing on when designing or redesigning your website, but this should really just be one aspect of the work you do.
Utilizing a host of other UX research methods can give you a much more comprehensive idea of what’s working and what’s not. With card sorting, for example, you can learn how your users think the content on your website should be arranged. Then, with this data in hand, you can use tree testing to assemble draft structures of your website and test how people navigate their way through it. You can keep iterating on these structures to ensure you’ve created the most user-friendly navigation.
Take a look at the Intro to UX page to learn more about card sorting and tree testing, as well as the other user research methods you can use to make solid improvements to your website. If you’d rather just start putting methods into practice using user research tools, take our UX platform for a spin for free here.

Introducing our improved header information architecture
Note: We’re gradually rolling out this feature, so it may not appear for you straight away.
We’ve made a few changes to our navigation header in-app to make it easier and more intuitive to use. In this blog post, we’ll explore the changes you’re going to see, why we made these changes and take a look at some of the testing that took place behind the scenes.
Let’s dive in!
What’s changed?
We’ve shuffled most tabs in the header around, but you’ll see the most obvious change when you create a study. You’ll now land directly on the ‘New Study’ screen, instead of being dropped in ‘Settings’ and then having to find your way to ‘Create Study’.

We’ve also made some changes to the ‘Create’, ‘Recruit’ and ‘Results’ tabs. These are now set in the header so you can easily navigate between creating, recruiting and reviewing results, instead of having to go back to ‘Settings’ to manage recruitment.
When you click ‘Back to studies’, you’ll now be taken back to the folder you’re currently working in. Previously you’d be taken back to the ‘All studies’ dashboard, so no more scrolling through the dashboard to find the folder you’re looking for. You can also rename your study in the header now, instead of having to navigate back to ‘Setup’.

Finally, you can now end your study by clicking the ‘End study’ button in the header. We’d discovered that some people had trouble ending their studies in the past; now you can simply click the button with the tiny checkered flag.

Why we made these changes
Our goal is to create the world’s best user research platform – and a big part of that is ensuring that our tools are easy to use. Through testing, we found that our old navigation header had usability, consistency and design issues. Specifically, usability testing and tree testing showed that simplifying our information architecture (IA) would drastically improve the user experience and flow.
We asked the question: “How might we improve the flow of our tool without affecting customers that currently use our tools?”
This led us down a research rabbit hole, but we came to the conclusion that we’d need to redesign the navigation header across all of our tools and adjust the IA to match the user journey. In other words, we needed to redesign the navigation to match how people were using our tools.
What we discovered during our user testing
User research and testing inform all of the changes that we make to our tools, and this project was no different.
We ran 3 tree tests using Treejack, testing 3 different IAs with 97 participants (around 30 per test). We also ran a first-click test using Chalkmark to validate what we were doing in Treejack.
The 3 IAs we tested were:
- The same as the prototype (Create, Design, Launch, Recruit, Results)
- Removing Launch and splitting up children branches (Create, Design, Recruit, Results)
- Moving Design back into Create (Create, Recruit, Results)
Note: A caveat of the results is that we used the phrase ‘Set up’ in many questions, which would bias the second test in favor of any question where ‘Setup’ is the correct answer.
We found that the third tree performed the best during our testing. On average, it required a little less backtracking and was slightly less direct than the first 2 trees, but it had a higher completion rate simply because there were fewer options available.
Wrap up
So that’s our new in-app header navigation – we hope you like it. Got a question? Feedback? Click the yellow Intercom circle in the bottom right-hand corner to get in touch with our support team.
Happy testing!

Pinpointing the findability of topics on your website with tree testing
Think about the last time you were visiting a friend or family member’s house and needed to find something. A spoon, for example. You start in the kitchen because that’s where we’ve all been trained to find things like spoons. But where do you look next? Some may look in the drawer near the dishwasher, while others may look in the drawer by the cabinet that holds dishes and bowls. The point is, people use context clues and other points of reference (like familiarity and habits) to find the things they’re looking for.
The same goes for navigating a webpage.
However, unlike the spoon example, there’s much more variation in website structure than kitchen layout. Websites vary from industry to industry, ranging in purpose from ecommerce to digital portfolios and everything in between. So when someone lands on your website to find something or complete a task, is it obvious where they should go to get things done?
Your gut might tell you that it is. The people on your design team might tell you that it is too. But what really matters is what happens with actual users. Whenever you’re building a new website or optimizing an existing one, it’s important that you build it with quality user insights. That starts with user experience (UX) research.
And before you ask if we mean qualitative or quantitative research – the answer is both. Let’s explore a particular user research method that’s essential for gathering the information needed to build intuitive site navigation. It’s called tree testing.
How tree testing evaluates findability of topics on a website
For anyone unfamiliar with tree testing (sometimes referred to as ‘reverse card sorting’ or ‘card-based classification’), it’s a series of tests that help you understand where and why people get lost in your content.
It works by taking text-based versions of websites and stripping the influence of navigation aids or design elements to build a study. Participants are shown the text-only version of your website or app and are asked to indicate where they would find specific items or topics.
By removing everything but the labels and how they’re organized, you can work out just how effective your website structure is at enabling people to find what they need. If test participants consistently struggle to locate specific items, you’ll know there are issues with your structure. It can also help you visualize what paths participants take to get to specific site information.
Another point worth mentioning is that tree testing not only uncovers site navigation flaws, but it reveals word choice and microcopy that may not be clear to users. To reference back to the initial analogy, one person might say “where would I find your spoons,” while another may ask, “where do you keep your utensils?” So, tree testing can also be helpful for understanding which words work best in your site structure.
When should I conduct a tree test?
While it’s best to conduct a tree test early in the research phase of a project, there’s never a wrong time. (Unless that time is not at all.) If you have an existing website, this is a good opportunity to establish a baseline of your existing site structure. The results you gather here can help to highlight any issues with your site’s structure and provide the data needed to measure any improvements against.
If you’re starting from scratch with a new website, you can run tree tests on different versions of your proposed site structure and then compare the results to determine which makes the most sense to your users.
How do I conduct a tree test?
Running a tree test is simple. You can use an online tool (like Treejack) to collect the quantitative data you need to ensure successful site navigation.

But what about the qualitative side of it all? We’re so glad you asked.
How to add qualitative data to your tree test
Running tree tests is a great way to gather quantitative data about your site navigation and topic findability, but it’s not as good at providing qualitative insights, or why people are looking for information the way they are.
Traditionally, you could get your gift cards ready and set up an in-person study where you watch people perform the tasks laid out in your tree test, and subsequently ask why they searched for things the way they did. But that takes a lot of time, coordination, and compensation. Alternatively, you could set up a remote usability test using a platform designed to record users as they complete your tree test.
This way, you’ll get the quantitative results you need to make decisions, backed by qualitative insights from real people. You’ll be able to set up your test to get answers to questions like:
- “Why did people go to X category to find information on X?”
- “Why did it take so long to find X?”
- "Why did people take that path to find X?"
Combine quantitative and qualitative research solutions to paint the whole picture
Teams that solely rely on quantitative data are only getting half the story. By pairing tree testing tools, like Optimal Workshop’s Treejack, with UserTesting’s Human Insight Platform, researchers are able to not only see how people search for information on a website but get to the bottom of why. When used together, quantitative and qualitative data reveal the most valuable insights for tree testing.