March 29, 2016

Which comes first: card sorting or tree testing?

“Dear Optimal, I want to test the structure of a university website (well certain sections anyway). My gut instinct is that it's pretty 'broken'. Lots of sections feel like they're in the wrong place. I want to test my hypotheses before proposing a new structure. I'm definitely going to do some card sorting, and was planning a mixture of online and offline. My question is about when to bring in tree testing. Should I do this first to test the existing IA? Or is card sorting sufficient? I do intend to tree test my new proposed IA in order to validate it, but is it worth doing it upfront too?" — Matt

Dear Matt,

Ah, the classic chicken or the egg scenario: Which should come first, tree testing or card sorting?

It’s a question that many researchers ask themselves, but I’m here to help clear the air! You should always use both methods when changing up your information architecture (IA) in order to capture the most information.

Tree testing and card sorting, when used together, can give you fantastic insight into the way your users interact with your site. First of all, I’ll run through some of the benefits of each testing method.


What is card sorting and why should I use it?

Card sorting is a great method to gauge the way in which your users organize the content on your site. It helps you figure out which things go together and which things don’t. There are two main types of card sorting: open and closed.

Closed card sorting involves providing participants with pre-defined categories into which they sort their cards. For example, you might be reorganizing the categories for your online clothing store for women. Your cards would have all the names of your products (e.g., “socks”, “skirts” and “singlets”) and you would also provide the categories (e.g., “outerwear”, “tops” and “bottoms”).

Open card sorting involves providing participants with cards and leaving them to organize the content in a way that makes sense to them. It’s the opposite to closed card sorting, in that participants dictate the categories themselves and also label them. This means you’d provide them with the cards only, and no categories.

Card sorting, whether open or closed, is very user focused. It involves a lot of thought, input, and evaluation from each participant, helping you to form the structure of your new IA.


What is tree testing and why should I use it?

Tree testing is a fantastic way to determine how your users are navigating your site and how they’re finding information. Your site is organized into a tree structure, sorted into topics and subtopics, and participants are provided with some tasks that they need to perform. The results will show you how your participants performed those tasks, if they were successful or unsuccessful, and which route they took to complete the tasks. This data is extremely useful for creating a new and improved IA.

Tree testing is an activity that requires participants to seek information, which is quite the contrast to card sorting. Card sorting is an activity that requires participants to sort and organize information. Each activity requires users to behave in different ways, so each method will give its own valuable results.


Comparing tree testing and card sorting: Key differences

Tree testing and card sorting are complementary methods within your UX toolkit, each unlocking unique insights about how users interact with your site structure. The difference is all about direction.

Card sorting is generative. It helps you understand how users naturally group and label your content, revealing mental models, surfacing intuitive categories, and informing your site’s information architecture (IA) from the ground up. Whether using open or closed methods, card sorting gives users the power to organize content in ways that make sense to them.

Tree testing is evaluative. Once you’ve designed or restructured your IA, tree testing puts it to the test. Participants are asked to complete find-it tasks using only your site structure – no visuals, no design – just your content hierarchy. This highlights whether users can successfully locate information and how efficiently they navigate your content tree.

In short:

  • Card sorting = "How would you organize this?"
  • Tree testing = "Can you find this?"


Using both methods together gives you clarity and confidence. One builds the structure. The other proves it works.


Which method should you choose?

The right method depends on where you are in your IA journey. If you're beginning from scratch or rethinking your structure, starting with card sorting is ideal. It will give you deep insight into how users group and label content.

If you already have an existing IA and want to validate its effectiveness, tree testing is typically the better fit. Tree testing shows you where users get lost and what’s working well. Think of card sorting as how users think your site should work, and tree testing as how they experience it in action.


Should you run a card or tree test first?

In this scenario, I’d recommend running a tree test first in order to find out how your existing IA currently performs. You said your gut instinct is telling you that your existing IA is pretty “broken”, but it’s good to have the data that proves this and shows you where your users get lost.

An initial tree test will give you a benchmark to work with – after all, how will you know your shiny, new IA is performing better if you don’t have any stats to compare it with? Your results from your first tree test will also show you which parts of your current IA are the biggest pain points and from there you can work on fixing them. Make sure you keep these tasks on hand – you’ll need them later!

Once your initial tree test is done, you can start your card sort, based on the results from your tree test. Here, I recommend conducting an open card sort so you can understand how your users organize the content in a way that makes sense to them. This will also show you the language your participants use to name categories, which will help you when you’re creating your new IA.

Finally, once your card sort is done you can conduct another tree test on your new, proposed IA. By using the same (or very similar) tasks from your initial tree test, you will be able to see that any changes in the results can be directly attributed to your new and improved IA.

Once this test has concluded, you can compare its results with those from the tree test on your original information architecture.
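If it helps to make that comparison concrete, here’s a minimal sketch (in Python, using hypothetical task names and numbers rather than a real results export) of how you might line up success and directness scores from the benchmark tree test and the new IA, task by task:

```python
# Hypothetical task-level results from the two tree tests (same tasks in both).
# In practice, you'd pull these percentages from each study's results.
benchmark = {
    "Find postgraduate application deadlines": {"success": 42, "directness": 35},
    "Find campus parking information":         {"success": 58, "directness": 40},
}
new_ia = {
    "Find postgraduate application deadlines": {"success": 78, "directness": 64},
    "Find campus parking information":         {"success": 71, "directness": 66},
}

for task, before in benchmark.items():
    after = new_ia[task]
    print(task)
    print(f"  success:    {before['success']}% -> {after['success']}% "
          f"({after['success'] - before['success']:+d} pts)")
    print(f"  directness: {before['directness']}% -> {after['directness']}% "
          f"({after['directness'] - before['directness']:+d} pts)")
```

Seeing the per-task deltas side by side makes it much easier to show stakeholders exactly where the new structure improved findability.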


Why using both methods together is most effective

Card sorting and tree testing aren’t rivals; view them as allies. Used together, they give you end-to-end clarity. Card sorting informs your IA design based on user mental models. Tree testing evaluates that structure, confirming whether users can find what they need. This combination creates a feedback loop that removes guesswork and builds confidence. You'll move from assumptions to validation, and from confusion to clarity – all backed by real user behavior.


Related articles


"Could I A/B test two content structures with tree testing?!"

"Dear Optimal Worshop
I have two huge content structures I would like to A/B test. Do you think Treejack would be appropriate?"
— Mike

Hi Mike (and excellent question)!

Firstly, yes, Treejack is great for testing more than one content structure. It’s easy to run two separate Treejack studies — even more than two. It’ll help you decide which structure you and your team should run with, and it won’t take you long to set them up.

When you’re creating the two tree tests with your two different content structures, include the same tasks in both tests. Using the same tasks will give an accurate measure of which structure performs best. I’ve done it before and I found that the visual presentation of the results — especially the detailed path analysis pietrees — made it really easy to compare Test A with Test B.

Plus (and this is a big plus), if you need to convince stakeholders or teammates of which structure is the most effective, you can’t go past quantitative data, especially when it’s presented clearly — it’s hard to argue with hard evidence!

Here are two examples of the kinds of results visualizations you could compare in your A/B test. The first is the pietree, which shows correct and incorrect paths, and where people ended up:

[Image: Treejack pietree]

The second is the overall task result, which breaks down success and directness scores, and has plenty of information worth comparing between two tests:

[Image: Treejack task result]

Keep in mind that running an A/B tree test will affect how you recruit participants — it may not be the best idea to have the same participants complete both tests in one go. But it’s an easy fix — you could either recruit two different groups from the same demographic, or test one group and have a gap (of at least a day) between the two tests.

I’ve one more quick question: why are your two content structures ‘huge’?

I understand that sometimes these things are unavoidable — you potentially work for a government organization, or a university, and you have to include all of the things. But if not, and if you haven’t already, you could run an open card sort to come up with another structure to test (think of it as an A/B/C test!), and to confirm that the categories you’re proposing work for people.

You could even run a closed card sort to establish which content is more important to people than others (your categories could go from ‘Very important’ to ‘Unimportant’, or ‘Use everyday’ to ‘Never use’, for example). You might be able to make your content structure a bit smaller, and still keep its usefulness. Just a thought... and of course, you could try to get this information from your analytics (if available) but just be cautious of this because of course analytics can only tell you what people did and not what they wanted to do.

All the best Mike!


Card descriptions: Testing the effect of contextual information in card sorts

The key purpose of running a card sort is to learn something new about how people conceptualize and organize the information that’s found on your website. The insights you gain from running a card sort can then help you develop a site structure with content labels or headings that best represent the way your users think about this information. Card sorts are in essence a simple technique; however, it’s the details of the sort that can determine the quality of your results.

Adding context to cards in OptimalSort – descriptions, links and images

In most cases, each item in a card sort has only a short label, but there are instances where you may wish to add additional context to the items in your sort. Currently, the cards tab in OptimalSort allows you to include a tooltip description, add a link within the tooltip description, or format the card as an image (with or without a label).

[Image: Adding descriptions and images to cards in OptimalSort]

We generally don’t recommend using tooltip descriptions and links, unless you have a specific reason to do so. It’s likely that they’ll provide your participants with more information than they would normally have when navigating your website, which may in turn influence your results by leading participants to a particular solution.

Legitimate reasons that you may want to use descriptions and links include situations where it’s not possible or practical to translate complex or technical labels (for example, medical, financial, legal or scientific terms) into plain language, or if you’re using a card sort to understand your participants’ preferences or priorities.

If you do decide to include descriptions in your sort, it’s important that you follow the same guidelines that you would otherwise follow for writing card labels. They should be easy for your participants to understand and you should avoid obvious patterns, for example repeating words and phrases, or including details that refer to the current structure of the website.

A quick survey of how card descriptions are used in OptimalSort

I was curious to find out how often people were including descriptions in their card sorts, so I asked our development team to look into this data. It turns out that around 15% of cards created in OptimalSort have at least some text entered in the description field. In order to dig into the data a bit further, both Ania and I reviewed a random sample of recent sorts and noted how descriptions were being used in each case.

We found that out of the descriptions that we reviewed, 40% (6% of the total cards) had text that should not have impacted the sort results. Most often, these cards simply had the card label repeated in the description (to be honest, we’re not entirely sure why so many descriptions are being used this way! But it’s now on our roadmap to stop this from happening — stay tuned!). Approximately 20% (3% of the total cards) used descriptions to add context without obviously leading participants; however, another 40% of cards had descriptions that may well lead to biased results. On occasion, this included linking to the current content or using what we assumed to be the current top level heading within the description.

[Chart: Use of card descriptions]

Testing the effect of card descriptions on sort results

So, how much influence could potentially leading card descriptions have on the results of a card sort? I decided to put it to the test by running a series of card sorts to compare the effect of different descriptions. As I also wanted to test the effect of linking card descriptions to existing content, I had to base the sort on a live website. In addition, I wanted to make sure that the card labels and descriptions were easily comprehensible by a general audience, but not so familiar that participants were highly likely to sort the cards in a similar manner.

I selected the government immigration website New Zealand Now as my test case. This site, which provides information for prospective and new immigrants to New Zealand, fit the above criteria and was likely unfamiliar to potential participants.

[Image: Navigating the New Zealand Now website]

When I reviewed the New Zealand Now site, I found that the top level navigation labels were clear and easy to understand for me personally. Of course, clear labeling is especially important when much of your target audience is likely to be made up of non-native English speakers! On the whole, the second level headings were also well-labeled, which meant that they should translate to cards that participants were able to group relatively easily.

There were, however, a few headings such as “High quality” and “Life experiences”, both found under “Study in New Zealand”, which become less clear when removed from the context of their current location in the site structure. These headings would be particularly useful to include in the test sorts, as I predicted that participants would be more likely to rely on card descriptions in the cases where the card label was ambiguous.


I selected 30 headings to use as card labels from under the sections “Choose New Zealand”, “Move to New Zealand”, “Live in New Zealand”, “Work in New Zealand” and “Study in New Zealand” and tweaked the language slightly, so that the labels were more generic.

[Image: The card labels used in the sort]

I then created four separate sorts in OptimalSort:

  • Round 1: No description – each card showed a heading only (this functioned as the control sort)
  • Round 2: Site section in description – each card showed a heading with the site section in the description
  • Round 3: Short description – each card showed a heading with a short description taken from the New Zealand Now topic landing pages
  • Round 4: Link in description – each card showed a heading with a link to the current content page on the New Zealand Now website

For each sort, I recruited 30 participants. Each participant could only take part in one of the sorts.

What the results showed

An interesting initial finding was that when we queried the participants following the sort, only around 40% said they noticed the tooltip descriptions and even fewer participants stated that they had used them as an aid to help complete the sort.

[Chart: Participant recognition of descriptions]

Of course, what people say they do does not always reflect what they do in practice! To measure the effect that different descriptions had on the results of this sort, I compared how frequently cards were sorted with other cards from their respective site sections across the different rounds.

Let’s take a look at the “Study in New Zealand” section mentioned above. Out of the five cards in this section, “Where & what to study”, “Everyday student life” and “After you graduate” were sorted pretty consistently, regardless of whether a description was provided or not. The following charts show the average frequency with which each card was sorted with other cards from this section. For example, in the control round, “Where & what to study” was sorted with “After you graduate” 76% of the time and with “Everyday student life” 70% of the time, but was sorted with “Life experiences” or “High quality” each only 10% of the time. This meant that the average sort frequency for this card was 42%.
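For anyone wanting to reproduce this kind of analysis from raw sort data, here’s a minimal sketch (in Python, with hypothetical participant data rather than a real OptimalSort export) of how the pairwise co-occurrence and average sort frequency figures above can be calculated:

```python
# Hypothetical raw data: one dict per participant, mapping each card label
# to the group that participant placed it in.
sorts = [
    {"Where & what to study": "Studying", "After you graduate": "Studying",
     "Everyday student life": "Studying", "High quality": "Why NZ",
     "Life experiences": "Why NZ"},
    # ...one dict per participant
]

def co_occurrence(sorts, card_a, card_b):
    """Percentage of participants who placed card_a and card_b in the same group."""
    together = sum(1 for s in sorts if card_a in s and s[card_a] == s.get(card_b))
    return 100 * together / len(sorts)

def average_sort_frequency(sorts, card, section_cards):
    """Average co-occurrence of `card` with the other cards in its site section."""
    others = [c for c in section_cards if c != card]
    return sum(co_occurrence(sorts, card, c) for c in others) / len(others)

section = ["Where & what to study", "After you graduate",
           "Everyday student life", "High quality", "Life experiences"]
print(average_sort_frequency(sorts, "Where & what to study", section))
```

Running this per card and per round gives you numbers comparable to the ones above, and makes it easy to spot cards whose grouping shifts when a description is added.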

[Chart: Average sort frequency for cards in the “Study in New Zealand” section, by round]

On the other hand, the cards “High quality” and “Life experiences” were sorted much less frequently with other cards in this section, with the exception of the second sort, which included the site section in the description. These results suggest that including the existing site section in the card description did influence how participants sorted these cards — confirming our prediction! Interestingly, this round had the fewest participants who stated that they used the descriptions to help them complete the sort (only 10%, compared to 40% in round 3 and 20% in round 4).

Also of note is that adding a link to the existing content did not seem to increase the likelihood that cards were sorted more frequently with other cards from the same section. Reasons for this could include that participants did not want to navigate to another website (due to time-consciousness in completing the task, or concern that they’d lose their place in the sort), or simply that it can be difficult to open a link from the tooltip pop-up.

What we can take away from these results

This quick investigation into the impact of descriptions illustrates some of the intricacies around using additional context in your card sorts, and why this should always be done with careful consideration. It’s interesting that we correctly predicted some of these results, but that in this case, other uses of the description had little effect at all. And the results serve as a good reminder that participants can often be influenced by factors that they don’t even recognise themselves!

If you do decide to use card descriptions in your card sorts, here are some guidelines that we recommend you follow:

  • Avoid repeating words and phrases, as participants may sort cards by pattern-matching rather than by the actual content
  • Avoid alluding to a predetermined structure, such as including references to the current site structure
  • If it’s important that participants use the descriptions to complete the sort, mention this in your task instructions. It may also be worth asking a post-sort survey question to confirm whether or not they used them

We’d love to hear your thoughts on how we tested the effects of card descriptions and the results that we got. Would you have done anything differently? Have you ever completed a card sort only to realize later that you’d inadvertently biased your results? Or have you used descriptions in your card sorts to meet a genuine need? Do you think there’s a case for making descriptions more obvious than just a tooltip, so that when they are used legitimately, most participants don’t miss this information?

Let us know by leaving a comment!


UX workshop recap: experts from Meta, Netflix & Google share insights to elevate your career

Recently, Optimal Workshop partnered with Eniola Abioye, Lead UX Researcher at Meta and UXR Career Coach at UX Outloud, to host a career masterclass featuring practical advice and guidance on how to revamp and build a portfolio, emphasize the impact of your projects, and showcase valuable collaborations. It also included panel discussions with experts from a variety of roles (UX, product management, engineering, career coaching and content design) talking about their journeys to becoming UX leaders.

Keep reading to get key takeaways from the discussion on:

  • How to show the impact of your UX work
  • Common blockers in UX work
  • How to collaborate with cross-functional UX stakeholders 
  • How to build a resume and portfolio that uses industry language to present your experience

How to show the impact of your UX 💥

At a time when businesses are reducing costs to focus on profitability, proving the value of your work is more important than ever. Unfortunately, measuring the impact of UX isn’t as straightforward as tracking sales or marketing metrics. With this in mind, Eniola asked the panelists: how do you show the impact of UX in your work?

Providing insights is simply not enough. “As a product manager, what I really care about is insights plus recommendations, because recommendations make my life easier,” said Kwame Odame.

Auset Parris added her perspective on this topic as a Growth Content Designer: “the biggest thing for me to be impactful in my space [Content Design] is to consistently document the changes that I’ve made and share them with the team along with recommendations.” She also noted that “recommendations are not always going to lead to the actual product executions, but recommendations are meant to guide us.” When it comes to deciding which recommendation to proceed with (if any), it's important to consider whether it aligns with the overarching goal.

Blockers in UX work 🚧

As UXR becomes more democratized in many businesses and the number of stakeholders increases, the ability to gain cross-functional buy-in for the role and outcomes of UXR is a key way to help keep research a priority. 

In his past experience, Kwame has realized that the role of a user experience researcher is just as important as that of a product manager, data scientist, engineer, or designer. However, one of the biggest blockers he sees as a product manager is how often the role of a UX researcher is overlooked. “Just because I’m the product manager doesn’t mean that I’m owning every aspect of the product. I don’t have a magic wand, right? We all work as a team.” Furthermore, Kwame noted that user research is an incredibly hard and very important role, and that more investment is needed in the UX space.

Auset also shared her perspective on the topic: “I wouldn’t say this is a blocker, but I do think this is a challenging piece of working in a team - there are so many stakeholders.” Although it would be ideal for each of the different departments to work seamlessly together at all times, that’s not always the case. Auset spoke about a time when the data scientists and user researchers were in disagreement. Her role as a Growth Content Designer is to create content that enhances the user experience. “But if I’m seeing two different experiences, how do I move forward? That’s when I have to ask everyone - come on let’s dig deeper. Are we looking at the right things?” If team members are seeing different results, or having different opinions, then maybe they are not asking the right questions and it's time to dig deeper.

How to collaborate with cross-functional UX stakeholders 🫱🏽🫲🏻

The number and type of roles that now engage with research are increasing. As they do, the ability to collaborate and manage stakeholders in research projects has become essential. 

Kwame discussed how he sets up a meeting for the team to discuss their goals for the next 6 months. Then, he meets with the team on a weekly basis to ensure alignment. The main point of the meeting is to ensure everyone is leaving with their questions answered and blockers addressed. It's important to ensure everyone is moving in the right direction. 

Auset added that she thinks documentation is key to ensuring alignment. “One thing that has been helpful for me is having the documentation in the form of a product brief or content brief.” The brief can include the overarching goal, strategy, and indicate what each member of the team is working on. Team members can always look back at this document to ensure they are on the right track. 

Career advice: documenting the value you bring 💼

One of the participants asked the panel, “how do you secure the stability of your UX career?” 

Eniola took this opportunity to share some invaluable advice as a career coach, “I think the biggest thing that comes to mind is value proposition. It's important to be very clear about the value and impact you bring to the team. It used to be enough to just be really, really good at research and just do research and provide recommendations. Now that’s not enough. Now you have to take your teams through the process, integrate your recommendations into the product, and focus on driving impact.” 

Companies aren’t looking to hire someone who can perform a laundry list of tasks; they’re looking for UX professionals who can drive results. Think about the metrics you can track to help showcase the impact of your work. For example, if you’re a UX designer: how much less time did users spend on the task with your new design? Did the abandonment or error rate decrease significantly as a result of your work? How much did the overall customer satisfaction score rise after you implemented your project? Before starting your project, decide on several metrics to track (make sure they align with your organization’s goals), and reflect on these after each project.

Fatimah Richmond offered another piece of golden career advice. She encourages UX researchers to create an ongoing impact tracker: a document listing the company’s objectives, the projects she worked on, and the specific impact she made on those objectives. It’s much easier to keep track of wins as they happen and jot a few notes about the impact of each project than to scramble to recall all the impact you’ve made when writing your resume. It’s also important to note the impact your work has made on the different departments: product, marketing, sales, etc.

She also advises UX researchers to share their research insights with colleagues frequently as a project progresses. Instead of waiting until the very end of the project to present a “perfectly polished” deck, be transparent with the team about what you’re working on and the impact it’s having throughout the project.

Another participant asked: what if you need help determining the value you bring? Auset recommends asking coworkers for actionable feedback. These people work with you every single day, so they know the contributions you’re making to the team.

Documenting the tangible impact you make as a UX professional is crucial: not only will it help create greater stability for your career, but it will also help organizations recognize the importance of UX research. As Kwame discussed in the “blockers” section, one of the biggest challenges he faces as a product manager is the perception of the UX role as less important than the more traditional product manager, engineer, and designer roles.

About Eniola Abioye

Eniola helps UX researchers improve their research practice. Whether you’re seasoned and looking to level up or a new researcher looking to get your bearings in UX, Eniola can help you focus and apply your skillset. She is a UX Researcher and Founder of UX Outloud. As a career coach, she guides her clients through short and long term SMART goals and then works with them to build a strategic plan of attack. She is innately curious, a self-starter, adaptable, and communicative with a knack for storytelling.

Learn more about UX Outloud.

Connect with Eniola on LinkedIn.

About the panelists 🧑🏽🤝🧑🏽

The panel comprised talented professionals from a variety of fields, including UX research, content strategy, product management and engineering, and career coaching. Their diverse perspectives led to an insightful and informative panel session. Keep reading to get to know each of the amazing panelists:

Growth Content Designer: Auset Parris is a growth content designer at Meta. She has spent 7 years navigating the ever-evolving landscape of content strategy. She is passionate about the role of user research in shaping content strategies. Furthermore, Auset believes that understanding user behavior and preferences is fundamental to creating content that not only meets but exceeds user expectations. 

Senior UX Researcher: Jasmine Williams, Ph.D. is a senior researcher with over a decade of experience conducting youth-focused research. She has deep expertise in qualitative methods, child and adolescent development, and social and emotional well-being. Jasmine is currently a user experience researcher at Meta and her work focuses on teen safety and wellbeing. 

Product Manager: Kwame Odame has over 7 years of high-tech experience working in product management and software engineering. At Meta, Kwame is currently responsible for building the product management direction for Fan Engagement on Facebook. Kwame has also helped build Mastercard’s SaaS authentication platform, enabling cardholders to quickly confirm their identity when a suspicious transaction occurred, leveraging biometric technology. 

UX Researcher (UXR): Fatimah Richmond is a well-rounded UX researcher with over 15 years of experience, having influenced enterprise products across leading tech giants like Google, SAP, LinkedIn, and Microsoft. Fatimah has led strategy for research, programs, and operations that have significantly impacted the UXR landscape, from clinician engagement strategy to reshaping LinkedIn Recruiter and Jobs. As a forward thinker, she’s here to challenge our assumptions and the status quo on how research gets planned, communicated, and measured.

Career Coach: An Xia spent the first decade of her professional life in consulting and Big Tech data science (Netflix, Meta). As a career coach, An has supported clients in gaining clarity on their career goals, navigating challenges of career growth, and making successful transitions. As a somatic coach, An has helped clients tap into the wisdom of their soma to reconnect with what truly matters to them. 

UX Strategist: Natalie Gauvin is an experienced professional with a demonstrated history of purpose-driven work in agile software development industries and higher education. She is skilled in various research methodologies and is a Ph.D. candidate in Learning Design and Technology at the University of Hawaii at Manoa, focusing on empathy in user experience through personas.

Level up your UXR capabilities (for free!) with the Optimal Academy 📚

Here at Optimal we really care about helping UX researchers level up their careers. That’s why we’ve developed the Optimal Academy: to help you master your Optimal Workshop skills and learn more about user research and information architecture.

Check out some of our free courses here: https://academy.optimalworkshop.com/
