How to interpret your card sort results Part 2: closed card sorts and next steps
In Part 1 of this series, we looked at how to interpret results from open and hybrid card sorts. Now, in Part 2, we’re going to talk about closed card sorts. In closed card sorts, participants are asked to sort the cards into predetermined categories and are not allowed to create any of their own. You might use this approach when you are constrained by specific category names, or as a quick checkup before launching a new or newly redesigned website.

In Part 1, we also discussed the two different but complementary types of analysis that are generally used together for interpreting card sort results: exploratory and statistical. Exploratory analysis is intuitive and creative, while statistical analysis is all about the numbers. Check out Part 1 for a refresher, or learn more about exploratory and statistical analysis in Donna Spencer’s book.
Getting started
Closed card sort analysis is generally much quicker and easier than analysis of open and hybrid card sorts because there are no participant-created category names to analyze; it’s really just about where the cards were placed. There are some similarities in how you might start your analysis, but overall there’s a lot less information to take in, and there isn’t as much drilling down into the details as there was in Part 1.

Just like with an open card sort, kick off your analysis by taking a look at the results as a whole. Quickly cast your eye over each individual card sort and take it all in. Look for common patterns in how the cards have been sorted. Does anything jump out as surprising? Are there similarities or differences between participant sorts?
If you’re redesigning an existing information architecture (IA), how do your results compare to the current state? If this is a final checkup before launching a live website, how do these results compare to what you learned during your previous research studies?

If you ran your card sort using the information architecture tool OptimalSort, head straight to the Overview and Participants Table presented in the results section of the tool. If you ran a moderated card sort using OptimalSort’s printed cards, you’ve probably been scanning them in after each completed session, but now is a good time to double check you got them all. And if you didn’t know about this handy feature of OptimalSort, it’s something to keep in mind for next time!
The Participants Table shows a breakdown of your card sorting data by individual participant. Start by reviewing each individual card sort one by one by clicking on the arrow in the far left column next to the participant numbers. From here you can easily flick back and forth between participants without needing to close the modal window. Don’t spend too much time on this step; you’re just trying to get a general impression of how the cards were sorted into your predetermined categories. Keep an eye out for any card sorts that you might like to exclude from the results, for example participants who have lumped everything into one group and haven’t actually sorted the cards.
Don’t worry: excluding or including participants isn’t permanent and can be toggled on or off at any time.

Once you’re happy with the individual card sorts that will and won’t be included in your results visualizations, it’s time to take a look at the Results Matrix in OptimalSort. The Results Matrix shows the number of times each card was sorted into each of your predetermined categories; the higher the number, the darker the shade of blue (see below).
Results Matrix in OptimalSort.
This table enables you to quickly and easily understand how the cards were sorted and gauge the highest and lowest levels of agreement among your participants. This will tell you if you’re on the right track, or highlight opportunities for further refinement of your categories.

If we take a closer look (see below), we can see that in this example closed card sort conducted on the Dewey Decimal Classification system commonly used in libraries, The Interpretation of Dreams by Sigmund Freud was sorted into ‘Philosophy and psychology’ 38 times in a study completed by 51 participants.
Results Matrix in OptimalSort zoomed in with hover.
In the real world, that is exactly where that content lives, and this is useful to know because it shows that the current state is supporting user expectations around findability reasonably well. Note: this particular example study used image-based cards instead of word-label-based cards, so the description that appears in both the grey box and down the left-hand side of the matrix is for reference purposes only and was hidden from the participants.

Sometimes you may come across cards that are popular in multiple categories. In our example study, How to Win Friends and Influence People by Dale Carnegie is popular in two categories: ‘Philosophy and psychology’ and ‘Social sciences’, with 22 and 21 placements respectively. The remaining card placements are scattered across a further five categories, although in much smaller numbers.
Results Matrix showing cards popular in multiple categories.
When this happens, it’s up to you to determine what your number thresholds are. If it’s a tie or really close, like it is in this case, you might review the results against any previous research studies to see if anything has changed, or if this is something that comes up often. It might be a new category that you’ve just introduced, it might be an issue that hasn’t been resolved yet, or it might just be limited to this one study. If you’re really not sure, it’s a good idea to run some in-person card sorts as well so you can ask questions and gain clarification around why your participants felt a card belonged in a particular category. If you’ve already done that, great! Time to review those notes and recordings!

You may also find yourself in a situation where no category is any more popular than the others for a particular card. This means there’s not much agreement among your participants about where that card actually belongs. In our example closed card sort study, the World Book Encyclopedia was placed into 9 of the 10 categories. While it was placed in ‘History & geography’ 18 times, that’s still only 35% of the total placements for that card, so it’s hardly conclusive.
Results Matrix showing a card with a lack of agreement.
Sometimes this happens when the card label or image is quite general and could logically belong in many of the categories. In this case, an encyclopedia could easily fit into any of those categories, and I suspect this happened because people may not be aware that encyclopedias make up a very large part of the category on the far left of the above matrix: ‘Computer science, information & general works’. You may also see this happen when a card is ambiguous and people have to guess where it might belong. Again, if in doubt (and if you haven’t already), run some in-person card sorts so you can ask questions and get to the bottom of it!

After reviewing the Results Matrix in OptimalSort, visit the Popular Placements Matrix to see which cards were most popular for each of your categories based on how your participants sorted them (see the two images below).
Popular Placements Matrix in OptimalSort: top half of the diagram.
Popular Placements Matrix in OptimalSort: scrolled to show the bottom half of the diagram.
The diagram shades the most popular placements for each category in blue, making it very easy to spot what belongs where in the eyes of your participants. It’s useful for quickly identifying clusters, and it also highlights the categories that didn’t get a lot of card sorting love. In our example study (the two images above), we can see that ‘Technology’ wasn’t a popular card category choice, potentially indicating ambiguity around that particular category name. As someone familiar with the Dewey Decimal Classification system, I know that ‘Technology’ is a bit of a tricky one because it contains a wide variety of content, including topics on medicine and food science; sometimes it will appear as ‘Technology & applied sciences’ instead. These results appear to support the case for exploring that alternative further!
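If you prefer to work with raw data, the counting that sits behind the Results Matrix and Popular Placements views is easy to reproduce yourself. Here’s a minimal sketch in Python, assuming a hypothetical export format of one (participant, card, category) row per placement; it isn’t OptimalSort’s implementation, just one way to tabulate placements and flag low-agreement cards like the encyclopedia example above.

from collections import Counter, defaultdict

# Hypothetical raw data: one (participant, card, category) row per placement.
placements = [
    (1, "The Interpretation of Dreams", "Philosophy and psychology"),
    (1, "World Book Encyclopedia", "History & geography"),
    (2, "The Interpretation of Dreams", "Philosophy and psychology"),
    (2, "World Book Encyclopedia", "Social sciences"),
    # ... one row per card placement per participant
]

# Tally how many times each card landed in each category (the Results Matrix).
matrix = defaultdict(Counter)
for _participant, card, category in placements:
    matrix[card][category] += 1

# For each card, report its most popular category and that category's share of
# placements, which makes low-agreement cards easy to spot.
for card, counts in matrix.items():
    top_category, top_count = counts.most_common(1)[0]
    share = top_count / sum(counts.values())
    print(f"{card}: {top_category} ({share:.0%} of placements)")

Cards whose most popular category accounts for well under half of their placements are good candidates for those follow-up in-person sorts.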
Where to from here?
Now that we’ve looked at how to interpret your open, hybrid and closed card sorts, here are some next steps to help you turn those insights into action!

Once you’ve analyzed your card sort results, it’s time to feed those insights into your design process and create your taxonomy, which goes hand in hand with your information architecture. You can build your taxonomy out in Post-it notes before popping it into a spreadsheet for review. This is also a great time to identify any alternate labelling and placement options that came out of your card sorting process for further testing.

From here, you might move into tree testing your new IA, or you might run another card sort focusing on a specific area of your website. You can learn more about card sorting in general via our 101 guide.
When interpreting card sort results, don’t forget to have fun! It’s easy to get overwhelmed and bogged down in the results, but don’t lose sight of the magic that is uncovering user insights.

I’m going to leave you with this quote from Donna Spencer, which summarizes the essence of card sort analysis quite nicely: “Remember that you are the one who is doing the thinking, not the technique... you are the one who puts it all together into a great solution. Follow your instincts, take some risks, and try new approaches.” - Donna Spencer
Further reading
Card Sorting 101 – Learn about the differences between open, closed and hybrid card sorts, and how to run your own using OptimalSort.
Your cards have been sorted, and now you have lots of amazing data and insight to help improve your information architecture. So how do you interpret the results?
Never fear, our product ninjas Alex and Aidan are here to help. In our latest live training session they take you on a walk-through of card sort analysis using OptimalSort.
What they cover:
Use cases for open, closed and hybrid card sort methodologies
How, when and why to standardize categories
How to interpret 3D cluster views, dendrograms, and the similarity matrix
Tips on turning those results into actionable insights
In 2009, Bob Bailey and Cari Wolfson published findings that changed how we approach first click testing and usability testing. They analyzed 12 scenario-based user tests and found that if someone gets their first click right, they're about twice as likely to complete their task successfully. This finding was so compelling that we built First Click Testing (formerly Chalkmark) specifically to help teams test this. But we'd never actually validated their research using our own data, until now.
Turns out, we're sitting on one of the world's largest databases of tree testing results. So we analyzed millions of task responses to see if the "first click predicts success" hypothesis holds up.
It does. Convincingly.
Users who get their first click correct are nearly three times more likely to complete their task successfully (70% vs 24% success rate).
Here's how we validated the original study, what our data shows, and why first clicks matter more than you might think.
Original first click testing study: 87% task success rate
Bob and Cari analyzed data from twelve usability studies on websites and products with varying amounts and types of content, a range of subject matter complexity, and distinct user interfaces. They found that people were about twice as likely to complete a task successfully if they got their first click right than if they got it wrong:
If the first click was correct, the chances of getting the entire scenario correct were 87%. If the first click was incorrect, the chances of eventually getting the scenario correct were only 46%.
Our Tree Testing data: First clicks predict 70% task success rate
We analyzed millions of tree testing responses in our database. We've found that people who get the first click correct are almost three times as likely to complete a task successfully:
If the first click was correct, the chances of getting the entire scenario correct were 70%. If the first click was incorrect, the chances of eventually getting the scenario correct were 24%.
To give you another perspective on the same data, here's the inverse:
If the first click was correct, the chances of getting the entire scenario incorrect were 30%. If the first click was incorrect, the chances of getting the whole scenario incorrect were 76%.
How Tree Testing measures first click success and task completion
Bob and Cari proved the usefulness of the methodology by linking two key metrics in scenario-based usability studies: first clicks and task success. First Click Testing doesn't measure task success — it's up to the researcher to determine, as they're setting up the study, what constitutes 'success', and then to interpret the results accordingly. Tree Testing (formerly Treejack) does measure task success — and first clicks.
In a tree test, participants are asked to complete a task by clicking through a text-only version of a website hierarchy, and then clicking 'I'd find it here' when they've chosen an answer. Each task in a tree test has a pre-determined correct answer — as was the case in Bob and Cari's usability studies — and every click is recorded, so we can see participant paths in detail.
Thus, every single time a person completes an individual tree testing task, we record both their first click and whether or not they were successful. When we came to test the 'correct first click leads to task success' hypothesis, we could therefore mine data from millions of tasks.
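In other words, the dataset reduces to one pair of observations per task: was the first click correct, and was the task completed successfully? As a minimal sketch (not Optimal's actual analysis pipeline, and using hypothetical records), the conditional success rates quoted above can be computed like this:

# Hypothetical task records: (first_click_correct, task_successful), one per task.
records = [
    (True, True),
    (True, False),
    (False, False),
    (False, True),
    # ... millions more in the real dataset
]

def success_rate(rows):
    """Share of tasks in the given subset that were completed successfully."""
    return sum(1 for _first, success in rows if success) / len(rows)

correct_first = [r for r in records if r[0]]
incorrect_first = [r for r in records if not r[0]]

print(f"Success given a correct first click:    {success_rate(correct_first):.0%}")
print(f"Success given an incorrect first click: {success_rate(incorrect_first):.0%}")

Running the same calculation over the full dataset is what produces the 70% versus 24% split reported above.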
To illustrate this, have a look at the results for one task. In the overall task result, you see a score for success and directness, and a breakdown of whether each Success, Fail, or Skip was direct (they went straight to an answer) or indirect (they went back up the tree before they selected an answer):
In the pie tree for the same task, you can look in more detail at how many people went the wrong way from a label (each label representing one page of your website):
In the First Click tab, you get a percentage breakdown of which label people clicked first to complete the task:
And in the Paths tab, you can view individual participant paths in detail (including first clicks), and can filter the table by direct and indirect success, fails, and skips (this table is only displaying direct success and direct fail paths):
How to run first click tests: Best practices for usability testing
First click analysis is one of the most predictive metrics in usability testing. Whether you're testing wireframes, landing pages, or information architecture, measuring first click success gives you early insight into whether your design will work.
This analysis reinforces something we already knew: first clicks matter. It is worth your time to get that first impression right. You have plenty of options for measuring the link between first clicks and task success in your scenario-based usability tests. From simply noting where your participants go during observations, to gathering quantitative first click data via online tools, you'll win either way. And if you want quantitative first click data, Optimal has you covered. First Click Testing works for wireframes and landing pages, while Tree Testing validates your information architecture.
To finish, here are a few invaluable insights from other researchers on getting the most from first click testing:
This analysis was conducted in 2015 using millions of task responses from Optimal’s First Click and Tree Testing tools. While the dataset predates recent UI trends, the underlying behavioral principle, that a correct first click strongly predicts task success, remains consistent with modern usability research.
The key purpose of running a card sort is to learn something new about how people conceptualize and organize the information that’s found on your website. The insights you gain from running a card sort can then help you develop a site structure with content labels or headings that best represent the way your users think about this information. Card sorts are, in essence, a simple technique; however, it’s the details of the sort that can determine the quality of your results.
Adding context to cards in OptimalSort – descriptions, links and images
In most cases, each item in a card sort has only a short label, but there are instances where you may wish to add additional context to the items in your sort. Currently, the cards tab in OptimalSort allows you to include a tooltip description, add a link within the tooltip description, or format the card as an image (with or without a label).
We generally don’t recommend using tooltip descriptions and links, unless you have a specific reason to do so. It’s likely that they’ll provide your participants with more information than they would normally have when navigating your website, which may in turn influence your results by leading participants to a particular solution.
Legitimate reasons that you may want to use descriptions and links include situations where it’s not possible or practical to translate complex or technical labels (for example, medical, financial, legal or scientific terms) into plain language, or if you’re using a card sort to understand your participants’ preferences or priorities.
If you do decide to include descriptions in your sort, it’s important that you follow the same guidelines that you would otherwise follow for writing card labels. They should be easy for your participants to understand and you should avoid obvious patterns, for example repeating words and phrases, or including details that refer to the current structure of the website.
A quick survey of how card descriptions are used in OptimalSort
I was curious to find out how often people were including descriptions in their card sorts, so I asked our development team to look into this data. It turns out that around 15% of cards created in OptimalSort have at least some text entered in the description field. In order to dig into the data a bit further, both Ania and I reviewed a random sample of recent sorts and noted how descriptions were being used in each case.
We found that out of the descriptions that we reviewed, 40% (6% of the total cards) had text that should not have impacted the sort results. Most often, these cards simply had the card label repeated in the description (to be honest, we’re not entirely sure why so many descriptions are being used this way! But it’s now on our roadmap to stop this from happening — stay tuned!). Approximately 20% (3% of the total cards) used descriptions to add context without obviously leading participants; however, another 40% of cards had descriptions that may well lead to biased results. On occasion, this included linking to the current content or using what we assumed to be the current top level heading within the description.
Testing the effect of card descriptions on sort results
So, how much influence could potentially leading card descriptions have on the results of a card sort? I decided to put it to the test by running a series of card sorts to compare the effect of different descriptions. As I also wanted to test the effect of linking card descriptions to existing content, I had to base the sort on a live website. In addition, I wanted to make sure that the card labels and descriptions were easily comprehensible by a general audience, but not so familiar that participants were highly likely to sort the cards in a similar manner.
I selected the government immigration website New Zealand Now as my test case. This site, which provides information for prospective and new immigrants to New Zealand, fit the above criteria and was likely unfamiliar to potential participants.
Navigating the New Zealand Now website
When I reviewed the New Zealand Now site, I found that the top level navigation labels were clear and easy to understand for me personally. Of course, this is especially important when much of your target audience is likely to be made up of non-native English speakers! On the whole, the second level headings were also well labeled, which meant that they should translate to cards that participants would be able to group relatively easily.
There were, however, a few headings such as “High quality” and “Life experiences”, both found under “Study in New Zealand”, which become less clear when removed from the context of their current location in the site structure. These headings would be particularly useful to include in the test sorts, as I predicted that participants would be more likely to rely on card descriptions in the cases where the card label was ambiguous.
I selected 30 headings to use as card labels from under the sections “Choose New Zealand”, “Move to New Zealand”, “Live in New Zealand”, “Work in New Zealand” and “Study in New Zealand” and tweaked the language slightly, so that the labels were more generic.
I then created four separate sorts in OptimalSort:
Round 1: No description: Each card showed a heading only — this functioned as the control sort
Round 2: Site section in description: Each card showed a heading with the site section in the description
Round 3: Short description: Each card showed a heading with a short description — these were taken from the New Zealand Now topic landing pages
Round 4: Link in description: Each card showed a heading with a link to the current content page on the New Zealand Now website
For each sort, I recruited 30 participants. Each participant could only take part in one of the sorts.
What the results showed
An interesting initial finding was that when we queried the participants following the sort, only around 40% said they noticed the tooltip descriptions and even fewer participants stated that they had used them as an aid to help complete the sort.
Of course, what people say they do does not always reflect what they do in practice! To measure the effect that different descriptions had on the results of this sort, I compared how frequently cards were sorted with other cards from their respective site sections across the different rounds.

Let’s take a look at the “Study in New Zealand” section that was mentioned above. Out of the five cards in this section, “Where & what to study”, “Everyday student life” and “After you graduate” were sorted pretty consistently, regardless of whether a description was provided or not. The following charts show the average frequency with which each card was sorted with other cards from this section. For example, in the control round, “Where & what to study” was sorted with “After you graduate” 76% of the time and with “Everyday student life” 70% of the time, but was sorted with “Life experiences” or “High quality” each only 10% of the time. This meant that the average sort frequency for this card was 42%.
On the other hand, the cards “High quality” and “Life experiences” were sorted much less frequently with other cards in this section, with the exception of the second sort, which included the site section in the description.

These results suggest that including the existing site section in the card description did influence how participants sorted these cards — confirming our prediction! Interestingly, this round had the fewest participants who stated that they used the descriptions to help them complete the sort (only 10%, compared to 40% in round 3 and 20% in round 4).

Also of note is that adding a link to the existing content did not seem to increase the likelihood that cards were sorted more frequently with other cards from the same section. Reasons for this could include that participants did not want to navigate to another website (due to time-consciousness in completing the task, or concern that they’d lose their place in the sort), or simply that it can be difficult to open a link from the tooltip pop-up.
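For readers who want to reproduce this kind of comparison from their own raw data, here’s a minimal sketch of the co-occurrence calculation behind those percentages. The data structure and card groupings shown are hypothetical; this isn’t the exact script used for the study.

from itertools import combinations
from collections import Counter

# One dict per participant: category name -> cards placed in that category.
sorts = [
    {"Studying": ["Where & what to study", "After you graduate"],
     "Daily life": ["Everyday student life", "Life experiences", "High quality"]},
    {"Education": ["Where & what to study", "Everyday student life", "After you graduate"],
     "Other": ["Life experiences", "High quality"]},
    # ... one entry per participant in the round
]

# Count how often each pair of cards ended up in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort.values():
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

target = "Where & what to study"
section = ["After you graduate", "Everyday student life", "Life experiences", "High quality"]

# Share of participants who grouped the target card with each other section card,
# plus the average of those shares (the "average sort frequency" quoted above).
shares = []
for other in section:
    share = pair_counts[tuple(sorted((target, other)))] / len(sorts)
    shares.append(share)
    print(f'"{target}" sorted with "{other}": {share:.0%}')
print(f"Average sort frequency: {sum(shares) / len(shares):.0%}")

Comparing these averages across the four rounds is what surfaces the effect of each description style.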
What we can take away from these results
This quick investigation into the impact of descriptions illustrates some of the intricacies around using additional context in your card sorts, and why this should always be done with careful consideration. It’s interesting that we correctly predicted some of these results, but that in this case, other uses of the description had little effect at all. And the results serve as a good reminder that participants can often be influenced by factors that they don’t even recognise themselves!

If you do decide to use card descriptions in your card sorts, here are some guidelines that we recommend you follow:
Avoid repeating words and phrases; participants may sort cards by pattern-matching rather than based on the actual content
Avoid alluding to a predetermined structure, such as including references to the current site structure
If it’s important that participants use the descriptions to complete the sort, you should mention this in your task instructions. It may also be worth asking them a post-survey question to validate if they used them or not
We’d love to hear your thoughts on how we tested the effects of card descriptions and the results that we got. Would you have done anything differently? Have you ever completed a card sort only to realize later that you’d inadvertently biased your results? Or have you used descriptions in your card sorts to meet a genuine need? Do you think there’s a case for making descriptions more obvious than just a tooltip, so that when they are used legitimately, most participants don’t miss this information?