Your guide to creating and running an effective card sort
Card sorting is a well-established research technique for discovering how people understand and categorize information. You can use card sorting results to group and label your website information in a way that makes the most sense to your audience.
Card sorting is useful when you want to:
Our aim for this guide is to give you straightforward, practical advice for running effective online card sorts. You'll find plenty of useful advice for in-person (moderated) card sorts as well, and learn how to add this data to your online card sorts for analysis. But for specific and invaluable advice on getting moderated card sorts right, check out Donna Spencer's book Card Sorting: Designing Usable Categories (actually, you'll see her wisdom on card sorting in general throughout).
Card sorting involves creating a set of cards that each represent a concept or item, and asking people to group the cards in a way that makes sense to them. You can run an open, a closed, or a hybrid card sort, depending on what you want to find out.
Open card sort: Participants sort cards into categories that make sense to them, and label each category themselves
Closed card sort: Participants sort cards into categories you give them
Hybrid card sort: Participants sort cards into categories you give them, and can create their own categories as well
Throughout this 101, we'll illustrate how card sorting works with a project we created for this purpose. Our fictional news website The Wellington Daily is a magazine-style website that combines the latest news with opinion pieces and musings on culture and the future.
We created and analyzed three card sorts — an open, a closed, and a hybrid — that each included 48 cards labelled with headlines representing the kinds of content we plan to publish. We had at least 40 completed card sorts for each technique.
The three card sorting techniques you can choose from — open, closed, and hybrid — will each tell you something different about how people understand and group your information. Choosing the right technique at the right time is key to gathering high-quality, relevant data to inform your design decisions.
Open card sorting is also the best place to start.
In an open card sort, participants sort your cards into groups that make sense to them, and then label the groups themselves. An open card sort is the equivalent of an open-ended question in a traditional study, in that people can give any answer, and are not confined to one type of response.
Open card sorting is a 'generative' exercise, rather than 'evaluative', as Donna points out — you'll get the most from it when you're starting to design a new website or starting to improve one you already have. In particular, you know you need open card sorting if you want to:
To help us with our initial draft designs, we ran an open card sort to find out how our readers would group our articles, and which labels they would use to describe the topics.
This image shows a participant partway through the open card sort (so far with six categories, four labelled):
As well as our case study, here are a few ideas to get you thinking about how open card sorting could be useful for you:
Create a card sort with your current website topics to find out if your website structure matches up with how people would organize the same information, or get ideas for structuring a brand new one:
Create a card sort with your blog tags to find out how your readers would expect to see your blog content categorized, and how they conceptualize what you publish.
Create cards with images of your products to find out what products your customers would expect to find in the same place on your ecommerce website:
Create a card sort with the titles of your help articles to find out how your customers expect to see the articles grouped and labelled in your help center:
In a closed card sort, you give people categories to sort the cards into. This time, instead of trying to find out how people conceptualize your information, you want to know where people think information belongs within your conceptual framework. You can really think outside the box with this technique — use it to find out a bunch of interesting things.
You know you need closed card sorting if you want to:
Aside from how you structure your website content, you can also use closed card sorting to prioritize and rank features, products, search filters, and so on based on criteria such as 'Important' to 'Unimportant', or 'Use often' to 'Use never.'
We drafted our own website structure that we felt fit with our brand, with eight categories labelled in informative and creative ways. We used closed card sorting to find out if people understood our category labels and agreed on which articles belonged where. We added a ninth category, 'Not sure', for articles that people didn't think belonged anywhere.
This image shows a participant partway through our closed card sort with our eight pre-defined categories:
As well as our case study, here are a few ideas to get you thinking about how closed card sorting could be useful for you:
Create cards with topics from your homepage or your search filters, and create categories like 'Use often' and 'Use never' to find out what people need to access the most when they arrive on your site:
Create cards with company values adjectives and set categories like "Our company is" and "Our company is not" to find out how your customers and clients perceive you, and compare it with the data you gather from staff and colleagues.
Create cards with the different versions of a design you're working on, and ask people to sort them into categories like 'My favorite design' and 'My least favorite design'.
(Thanks to UX Designer and Researcher Ashlea McKay for this compelling example)
Create cards with your product features or services and set categories like "Yes, I want this!" and "I could give this a miss" to find out what your customers want and don't want.
In a hybrid card sort, you give people categories to sort your cards into and enable them to create their own categories as well.
Though it is distinct enough from open and closed card sorting to warrant its own approach, a hybrid card sort will be 'more open' or 'more closed', depending on the number and type of categories you create.
When you set far fewer categories than people need to sort all the cards, your hybrid card sort will lean towards open. This means people will be less likely to use your categories and more likely to create new categories to complete the card sort.
You'll run a hybrid card sort like this if you:
When you set enough categories for people to sort all the cards into, your hybrid card sort will lean towards closed. This means people will be more likely to sort the cards into your categories only, and less likely to create new categories.
You'll run a hybrid card sort like this if you:
We wanted to generate ideas for grouping our articles, and we chose a hybrid card sort instead of an open card sort because:
In this image, you can see the four categories we gave people, and that a participant is about to create their first category of their own.
Hybrid card sorting results are analyzed like open card sorting results, because you're allowing people to create and name their own categories; your pre-set categories, meanwhile, are treated as already standardized, just as they would be in a closed card sort.
You'll approach hybrid card sorting results with questions like:
The biggest decision to make when creating any card sort is what to put on your cards. Though there are no hard limits on how few or how many cards to include, we recommend aiming for between 30 and 60 cards for all card sorts because:
More specifically: for an open card sort, 30 to 50 cards will usually account for the complexity of the task and the depth of thought you want from people. You need to balance the need to keep the time commitment to around 10 to 15 minutes with the need to provide sufficient context (enough similar cards) for groups to form.
Closed card sorts with easy grouping options (yes/no/maybe decisions) tend to require less depth of thought and more automatic responses, and so going well beyond 60 cards can work really well.
You can create more cards if you'd like to, and set OptimalSort to give a random subset of cards to each participant.
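The random-subset idea is simple to picture. Here's a minimal Python sketch of drawing a random subset of cards per participant; `subset_for_participant` is a hypothetical helper for illustration, not OptimalSort's implementation:

```python
import random

def subset_for_participant(cards, k, seed=None):
    """Draw a random subset of k cards for one participant, so a large
    deck can be spread across many sorters.
    """
    rng = random.Random(seed)  # seed only to make the sketch repeatable
    return rng.sample(cards, k)
```

Each participant then sorts only the cards in their subset, and the results are combined across participants.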
Start by opening a spreadsheet and collecting all the possible concepts or items you could include in your card sort. Once you have all your ideas in hand, you'll reduce and refine the possibilities until you're left with only the most relevant cards.
To come up with your list of possibilities, you could:
Card sorting is a conceptual activity, not a usability test or game of Snap. Your goal is to discover how people think about and make sense of your information, not whether or not they can find it quickly on a homepage. So when you're deciding what cards to include, look underneath the language for the concept they represent.
Card sorting tests concepts, not usability — so the cards themselves don't need to be written in the most 'usable' format, or exactly as they are on your website. Jakob Nielsen points out that "It's OK to actually reduce the usability of the cards, because people don't actually use them in the UI [user interface]".
The cards need to be on the same conceptual level and similar enough for participants to actually be able to sort them into groups. By conceptual level, we mean that if you want people to sort grocery items, you won't include the higher level category 'Vegetables' as a card at the same time as the lower-level 'Carrots':
In her book on card sorting, Donna is firm on the importance of making sure all cards are actually groupable. She cites the example of a 100-card card sort she tested with a colleague before taking it to a client. Her colleague was unable to create coherent groups because the cards were inconsistent and often unrelated, and therefore the card sort couldn’t "provide much insight into how the content could be grouped on the site".
The solution? She recommends reviewing your card sort with this in mind to make sure "each item...could have a potential partner (or many partners)". Ask someone to test it out for you (which you can do by sharing a study preview link with people in your team).
And if you find a card that is difficult to partner with any others, but that you think is valuable to your study, follow her lead: "On a recent sort, I deliberately included three cards I didn’t really need...so participants would have some cards that were easy to group...[to give] participants the confidence to proceed to more difficult groupings."
When you ask people to complete a card sort, you're asking them to look for and create patterns with your cards. The human mind is so fond of pattern-finding that we use it regularly as a shortcut when making decisions, especially on intellectually-taxing tasks.
For user research, this is one of the great strengths of card sorting, and one of its biggest pitfalls. If you include enough cards with the same opening phrases, casings, and sentence structures, it's likely that most people will group them together — but instead of approaching the sort conceptually, they'll have quickly, and without realizing it, played a simple game of Snap.
Jakob Nielsen illustrates the issue with these example cards, which are written the way they would be on a website, but offer too-obvious pairings:
| Card Set A | Card Set B |
| --- | --- |
| Strawberry Planting | Planting Strawberry |
| Strawberry Growing | Growing Strawberry |
| Strawberry Harvesting | Harvesting Strawberry |
| Wheat Planting | Planting Wheat |
| Wheat Growing | Growing Wheat |
| Wheat Harvesting | Harvesting Wheat |
To solve this, he recommends editing your labels using synonyms and non-parallel structures, an approach that doesn't need to involve extensive rewrites: instead of 'Harvesting strawberries', we could say 'Picking strawberries'; and instead of two cards that begin with planting, you could make their structures different by labelling one 'Planting corn' and another 'Wheat planting'.
Images can be as effective as text for representing concepts and items, and in some cases more so. You can include images to illustrate or clarify the text on your cards, or you can include images on their own.
You might choose to add images to your cards if you:
In OptimalSort, you can upload JPEG or PNG files of any size, and each image will be resized to a maximum width of 200px. Resize all your images before you upload them if you want them to be the same size, and preview your card sort to make sure it looks how you want it to. Giving each image a descriptive label will make your analysis easier.
When you're creating a closed or hybrid card sort, take care to craft categories that help you achieve your objectives.
For closed card sorts, you need to create enough categories that people can find a home for most of your cards, but not so many that the task becomes overwhelming. Avoid offering only categories that match your intentions for your website or your research questions: the more categories you create, the more options participants will have, and the more likely you'll find out which categories are preferred over others.
When you run closed and hybrid card sorts, the categories you set will lead people to think about your cards in a particular way, whether it's on purpose or not.
For example, if you run an open card sort with 40 cards containing grocery items, you might find that some people group the items by type (vegetables, fruit, dairy) and some by meal (breakfast, lunch, dinner). If you run a hybrid card sort with even just one category, most people will take your lead: Set the category 'Vegetables' and most people will create the category 'Fruit'.
The number and type of categories you set for a hybrid card sort will determine whether the card sort is more open or more closed.
Since one of the goals of card sorting is to get inside the minds of the people you design for, take time to establish, recruit, and manage participants that will give you the most true-to-life data. For card sorting participants, we recommend:
You can recruit participants in a bunch of different ways, and how you do so will depend on a few different factors. If you have access to a pool of participants (like employees if you're working on an internal product, or your customer mailing list) then sending them an email invitation, along with an incentive or chance to win a prize, can be a useful way to get responses. Similarly, you could invite people via your social media channels or add banners to your website.
Keep in mind that if people don't receive an incentive or are not obligated to participate, you'll need to invite far more people than your minimum required number.
You can also make use of high-quality recruitment panels, which can be effective if you want fast, pain-free options with minimal effort. You can recruit participants from quite specific demographics, and be confident that the participants will take your study seriously (they are getting paid, after all).
Running an in-person, moderated card sort with 5 to 15 participants will give you invaluable qualitative insights to go with the quantitative numbers you get by running the card sort online.
After you’ve created your card sort in OptimalSort, download and print your cards onto thick paper or card stock, and then use Donna’s guidance to set up and run your sessions. Afterwards, you can upload each participant’s card sort in the Participants tab of your OptimalSort results. Their results will be included with the unmoderated participants, and you’ll have the extra insight from observing and listening to your participants. If you lack the time or resources to invite participants to in-person sessions, your next best option is to use a service such as UserTesting.com — you can order 5+ videos of people completing your card sort online and speaking aloud as they do so (which means you won’t have to print the cards out at all).
Running an open card sort with OptimalSort is a generative exercise: the results give you lots of ideas for how you could label and organize your website content. As such, the large participant numbers you need for techniques like tree testing and first-click testing may not be your objective. Instead, you’ll want enough completed card sorts to get ideas and see consensus forming, but not so many that you’re overwhelmed with the data.
Also, keep in mind that the more participants who complete your card sort, the more complex your analysis can become: narrowing down the most effective structure from 40 different suggested categorizations will probably be easier than doing so from 200 different suggestions.
If you want to gather as many suggested categorizations as you can, though, don't be afraid to recruit more than 50.
In her book on card sorting, Donna helpfully distinguishes between exploratory analysis (when you look through your data to get impressions, pull ideas out, and be intuitive and creative in your approach) and statistical analysis (when it's all about the numbers).
You'll get great insights from both.
Every card sort you run with OptimalSort will present you with a Results Overview and Participant data. The Results Overview tells you big picture information about your card sort.
The Participants table displays useful information about every participant who started your card sort, and can be used to filter your data. At any time during your card sort or analysis, you can:
You'll explore your open and hybrid card sorting results to get ideas for labelling and grouping your information, so kick off your analysis with these questions in your head:
The Categories tab is a great doorway into your open and hybrid card sorting results. Spend just a few minutes scanning the categories people came up with and you'll quickly form an impression of their 'mental models' (how they perceived the overall theme or concept of your cards).
A useful way to explore the categories results and refine your data is to standardize categories that have similar labels (with different wording, spelling, capitalizations, and so on) and contain similar cards. In a hybrid card sort, your set categories are already standardized.
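As a rough first pass at spotting labels that differ only in surface details, you could normalize them programmatically before reviewing them by hand. This Python sketch (a hypothetical helper, not an OptimalSort feature) groups labels that differ only in case, punctuation, or surrounding whitespace:

```python
import re
from collections import defaultdict

def group_similar_labels(labels):
    """Bucket category labels that differ only in case, punctuation,
    or surrounding whitespace, as a first pass before manual review.
    """
    buckets = defaultdict(list)
    for label in labels:
        # Normalize: lowercase, drop punctuation, trim whitespace
        key = re.sub(r"[^a-z0-9 ]", "", label.lower()).strip()
        buckets[key].append(label)
    return dict(buckets)
```

This only catches mechanical differences; labels like 'Animals' and 'Nature' still need a human eye (and a look at the cards inside each category) before you standardize them together.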
But before you eye up a bunch of similar-looking labels and standardize straight away, it's important to look at the similarities between the categories in more depth. You can use both exploratory and statistical analysis approaches to do this.
When you first look at the Categories table, take some time to get familiar with the labels your participants have suggested — you'll probably start seeing patterns pretty quickly.
When we opened our study, we quickly saw that groups with the word 'Animals' in the label were rather popular (15 altogether):
It's tempting to throw categories with the same or similar labels together straight away, but before you do, cast your eye over both the labels and the cards in each category to ensure the participants are all thinking in the same way.
Of the 15 groups with the word 'Animals' in the label, 13 had a similar set of cards, but two participants had labelled their categories slightly differently ('Animals and Environment' and 'Animals and Nature') and had thus included extra cards the others didn't have ('Glaciers melting faster than previously thought', for example).
When we place every category that uses the same words or phrases into a standardized category, we can assess its effectiveness by looking at the agreement score.
The agreement score tells you the agreement level between included participants on the cards that belong in each category. Beside the agreement score, you can see the number of participants included in that score.
A perfect agreement score is 1.0 (100%) which is what you'll see before standardizing (as each category has only been created by one person).
Once you standardize a category, check the agreement score for an objective assessment of how similar the groupings are. An agreement score of .8 or over means that at least 80% of participants in the new category agree with the grouping.
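To make the idea concrete, here's one plausible way to compute an agreement score in Python: the average, across every card that appears in the standardized category, of the fraction of included participants who placed that card there. This is an illustrative sketch; OptimalSort's exact formula may differ:

```python
from collections import Counter

def agreement_score(category_sorts):
    """Estimate agreement for a standardized category.

    category_sorts: one set per included participant, each holding the
    cards that participant placed in this category.
    Returns the mean, over every card that appears in the category, of
    the fraction of participants who included that card.
    """
    n = len(category_sorts)
    counts = Counter(card for sort in category_sorts for card in sort)
    return sum(count / n for count in counts.values()) / len(counts)
```

With a single participant, or with identical groupings, the score is 1.0 (as the guide notes); it falls as the participants' card sets diverge.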
We standardized all 15 participant-created categories with the word 'animal' in the label, and saw that our agreement score was only .41 (41%).
So, what now? Well, a low agreement score isn't the only number you can rely on to tell you something useful. The number of participants who placed the card in that category will tell you something important as well: out of 15 people, 13 people put the top two cards into that category, and 12 people put the third card in that category. So we can be confident that most participants who created this group think these cards belong here.
However, we don't want to leave this category standardized because the agreement score suggests the categories aren't similar enough.
Instead, we can undo this group and re-standardize it with just the 13 participants who had similar sets of cards; now we have an agreement score of .74 — awesome! A high level of agreement (over about .6, or 60%) means you'll find it useful to keep this category standardized for the rest of your analysis.
As well as looking at the cards and assessing the agreement score, head over to the Standardization grid to see your standardized categories in more detail. The grid simply shows you the number of times a particular card appears in a standardized category.
The similarity matrix shows you the percentage of participants who grouped two cards together, and clusters the most closely related pairings along the right edge. The darker the blue and the larger the cluster, the higher the agreement between participants on which cards go together.
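Because the similarity matrix is defined as the percentage of participants who grouped each pair of cards together, the underlying calculation is straightforward to sketch in Python (an illustration of the idea, not OptimalSort's code):

```python
from collections import Counter
from itertools import combinations

def similarity_matrix(sorts):
    """Percentage of participants who placed each pair of cards in the
    same group.

    sorts: one result per participant, each a list of groups, each
    group a set of card names.
    Returns {frozenset({card_a, card_b}): percentage}.
    """
    pair_counts = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pair_counts[frozenset((a, b))] += 1
    n = len(sorts)
    return {pair: 100 * count / n for pair, count in pair_counts.items()}
```

Clustering the cards with the highest pairwise percentages along the edge is what produces the dark-blue blocks described above.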
You could use the Similarity Matrix to:
This matrix shows strong clusters along the right edge, which tells us many people agreed about which cards belong together. A glance tells us this immediately, before we've even looked at the detail:
When we look more closely, we can find out which cards are paired together the most often:
And which cards are rarely paired together, if at all:
With agreement levels like this, one option for us is to draft a set of categories based on the darkest blue clusters along the right (which include around 60% of our cards). And although this isn't an exact science, we found this draft incredibly useful:
Cluster 1:
One critical after Eastern motorway crash
Explosion in factory kills 12
Dunedin flats destroyed by fire
Ikea stabbing confession
Male model charged with drug offences

Cluster 2:
Should women be more confident?
What advice would you give your teenage self?
Is marriage still relevant?
Do rich kids have rights?

Cluster 3:
How to leave your food comfort zone
Rose-flavored burger anyone?
Apple semifreddo with fruit
What it's like to drink only water for 20 days

Cluster 4:
Amazing new photos from space station
Robots start to reproduce
Artificial intelligence lawn mower gets approval
Tracking technology stops baby kidnappings

Cluster 5:
Understanding the new job market
Over 50s lack retirement plans
Deal for young homebuyers
House sold over $11k in unpaid rates
Credit card flight awards costly

Cluster 6:
Man attacked by shark in Sydney
Whale comes to the aid of humans
Leopard seal spotted on Kapiti Coast
The secret world of octopuses
Bird enjoys seaside life
In the similarity matrix for our hybrid card sort, we can see larger clusters and higher agreement, which reflects the fact that we gave participants four categories to start with. The two strongest clusters towards the top and centre are cards placed in our categories 'The natural world' and 'Food and lifestyle':
The PCA aims to find the most popular grouping strategy, and then to find two more popular alternatives among those who disagreed with the first strategy. It works best when you have a lot of results. Besides the first option hopefully being useful to you, you might get useful ideas for footer and sidebar facets from options two and three.
It's called a 'participant-centric analysis' because every participant's response is treated as a potential solution and ranked for similarity against every other complete response. If you see a sort with an 11/43 agreement score, it's telling you that 10 other participants sorted their cards into groups similar to these ones.
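One hedged way to picture this ranking: treat each participant's sort as a set of same-group card pairs, and count how many sorts (including their own) are similar to it above some threshold. This Python sketch uses Jaccard similarity and an arbitrary threshold, both chosen purely for illustration — OptimalSort's actual PCA algorithm may work differently:

```python
from itertools import combinations

def pairs(groups):
    """All same-group card pairs in one participant's sort."""
    return {frozenset(p) for g in groups for p in combinations(sorted(g), 2)}

def pca_scores(sorts, threshold=0.5):
    """For each participant's sort, count how many sorts (including
    itself) share at least `threshold` Jaccard similarity of
    same-group card pairs with it.
    """
    pair_sets = [pairs(s) for s in sorts]
    scores = []
    for mine in pair_sets:
        similar = 0
        for other in pair_sets:
            union = mine | other
            if union and len(mine & other) / len(union) >= threshold:
                similar += 1
        scores.append(similar)
    return scores  # a score of k over n participants reads as k/n
```

The sort with the highest score is the most representative grouping strategy; the next-best scores among participants who disagreed with it suggest the alternative strategies.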
The dendrograms give you another way to quickly spot popular groups of cards, and to get a general sense of how similar (or different) your participants' card sorts were to each other.
You have two dendrograms to explore, which you'll find more or less useful depending on the number of completed card sorts you have:
The connected blue lines show clusters that represent the percentage of participants who agree with the highlighted cards being grouped together. The vertical axis gives the exact percentage, and you can drag the vertical line across the dendrogram to get accurate data.
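Cutting a dendrogram at a given percentage amounts to keeping only the card pairs at or above that agreement level and reading off the connected clusters. This Python sketch (illustrative only, using a simple union-find over the pairwise similarity data; not OptimalSort's algorithm) mimics dragging the vertical line to a threshold:

```python
def clusters_at(sim, cards, threshold):
    """Cards joined (transitively) by pairwise similarity >= threshold,
    like cutting a single-linkage dendrogram at that level.

    sim: {frozenset({card_a, card_b}): percentage}, as produced from
    the similarity matrix data.
    """
    parent = {c: c for c in cards}

    def find(c):  # union-find with path compression
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for pair, pct in sim.items():
        if pct >= threshold:
            a, b = pair
            parent[find(a)] = find(b)

    groups = {}
    for c in cards:
        groups.setdefault(find(c), set()).add(c)
    return sorted(groups.values(), key=lambda g: sorted(g)[0])
```

Lowering the threshold merges clusters together; raising it splits them apart, just as moving the line across the dendrogram does.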
At a glance, we can see a few strong clusters, which show us the same data we’ve observed on the Similarity Matrix:
From our hybrid sort at 65%
You'll approach closed card sorting results with questions like:
The Categories table for a closed card sort tells you how many times a card was sorted into each category you gave. You'll also find out how many unique cards were sorted into each category, with fewer unique cards meaning higher agreement among participants.
With the closed card sort categories results, you could:
We gave participants nine categories (including 'Not sure'), and we could quickly see high agreement for some categories. For example, the top cards placed in the category 'Crime and Punishment' were 'Ikea stabbing confession' (35/41 participants), 'Male model charged with drug offences' (33/41), and 'Digital transfer case goes to trial' (25/41).
The category 'Global news update' had the highest number of unique cards: 35 cards out of 45 were placed there at least once, suggesting that the label itself was too broad, and thus too ambiguous for our website.
One other result we found particularly useful was the 'Not sure' category: 9 out of 41 people found no suitable category for 'Top spellers go head to head', and 5 out of 41 found none for 'Led Zeppelin cover band set to thrill'. Though the numbers seem small, they prompted us to ask two very useful questions: 'Can we reword one of our categories so that these items have a home?' and 'This item is an outlier… so does it belong on our website at all?'
And if not, perhaps we should get rid of it entirely.
The Cards table for a closed card sort tells you which category each card was placed in the most often. You'll also see the number of unique categories for each card, with fewer categorizations meaning higher agreement among participants.
You can use the cards results to:
We started by looking at the number of unique categorizations for each card, and saw that 7 cards (out of 45) appeared in only 2 or 3 categories (out of 9), and thus proved a near-perfect match between card and category:
Although some cards had a higher number of unique categorizations, we could see on closer inspection that many people placed them in the same category. For example, the card 'Artificial intelligence lawn mower gets approval' was placed by 34 out of 41 participants into the 'Imagining the future' category:
We also spotted some cards that participants were divided on, which tells us useful things about how people perceive the type of content. The card 'A wonderful alternative to burning man' ended up in an almost-three-way-tie, appearing 13 times under 'Living the good life', 12 times under 'Musings and opinions', and 11 times under 'Living on the edge' (and curiously appearing twice under 'Crime and punishment'!).
Similarly, the card 'What it's like to drink only water for 20 days' appeared 16 times under 'Living on the edge' (presumably by people who couldn't fathom that long without coffee or alcohol) and 14 times under 'Living the good life' (presumably by those who felt otherwise).
The Results Matrix shows you the number of times each card was sorted into your pre-set categories. The darker the blue, the more times a card appears in the corresponding category. The number of participants is also displayed (not the percentages).
The Results Matrix tells you pretty quickly which cards had the highest and lowest agreement between participants on where the cards were best placed.
The stand-out dark blue boxes on this matrix tell us straight away that we have high agreement between participants on where cards belong.
The popular placements matrix shows the percentage of participants who sorted each card into the corresponding category, and proposes the most popular groups based on each individual card's highest placement score.
In this matrix, we can see that the clustered groups contain cards with very high agreement between participants, which is a strong indication that these groupings are a good place to start when coming up with our initial draft IA.
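The underlying calculation — finding each card's most popular category and its placement percentage — can be sketched in Python (an illustration of the idea, not OptimalSort's implementation):

```python
from collections import Counter, defaultdict

def popular_placements(placements):
    """For each card, return the category it was sorted into most
    often and the percentage of participants who put it there.

    placements: one dict per participant, mapping card -> category.
    """
    n = len(placements)
    per_card = defaultdict(Counter)
    for result in placements:
        for card, category in result.items():
            per_card[card][category] += 1
    return {
        card: (counts.most_common(1)[0][0],
               100 * counts.most_common(1)[0][1] / n)
        for card, counts in per_card.items()
    }
```

Grouping cards by their winning category, and sorting by placement percentage, produces a first-draft structure much like the one the popular placements matrix proposes.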
Remember that you are the one who is doing the thinking, not the technique... you are the one who puts it all together into a great solution. Follow your instincts, take some risks, and try new approaches.
Creating and running a card sort is the best way to learn the technique and start using the data to improve your designs — so go for it! You’ll also find it useful to check out our sample card sorts (from both the participant’s and the researcher’s perspective). And you’ll find a bunch of useful and inspirational information on our blog.
Try card sorting for yourself. Sign up for free.