
Card Sorting


5 ways to increase user research in your organization

Co-authored by Brandon Dorn, UX designer at Viget.

As user experience designers, we see making sure that websites and tools are usable as a critical component of our work, and conducting user research enables us to assess whether we're achieving that goal. Even when we want to incorporate research, however, certain constraints may stand in our way.

A few years ago, we realized that we were facing this issue at Viget, a digital design agency, and we decided to make an effort to prioritize user research. Almost two years ago, we shared initial thoughts on our progress in this blog post. We’ve continued to learn and grow as researchers since then and hope that what we’ve learned along the way can help your clients and coworkers understand the value of research and become better practitioners. Below are some of those lessons.

Make research a priority for your organization

Before you can do more research, it needs to be prioritized across your entire organization — not just within your design team. To that end, you should:

  • Know what you’re trying to achieve. By defining specific goals, you can share a clear message with the broader organization about what you’re after, how you can achieve those goals, and how you will measure success. At Viget, we shared our research goals with everyone at the company. In addition, we talked to the business development and project management teams in more depth about specific ways that they could help us achieve our goals, since they have the greatest impact on our ability to do more research.
  • Track your progress. Once you’ve made research a priority, make sure to review your goals on an ongoing basis to ensure that you’re making progress and share your findings with the organization. Six months after the research group at Viget started working on our goals, we held a retrospective to figure out what was working — and what wasn’t.
  • Adjust your approach as needed. You won’t achieve your goals overnight. As you put different tactics into action, adjust your approach if something isn’t helping you achieve your goals. Be willing to experiment and don’t feel bad if a specific tactic isn’t successful.

Educate your colleagues and clients

If you want people within your organization to get excited about doing more research, they need to understand what research means. To educate your colleagues and clients, you should:

  • Explain the fundamentals of research. If someone has not conducted research before, they may not be familiar or feel comfortable with the vernacular. Provide an overview of the fundamental terminology to establish a basic level of understanding. In a blog post, Speaking the Same Language About Research, we outline how we established a common vocabulary at Viget.
  • Help others understand the landscape of research methods. As designers, we feel comfortable talking about different methodologies and forget that this information will be new to many people. Look for opportunities to increase understanding by sharing your knowledge. At Viget, we make this happen in several ways. Internally, we give presentations to the company, organize group viewing sessions for webinars about user research, and lead focused workshops to help people put new skills into practice. Externally, we talk about our services and share knowledge through our blog posts. We're even hosting a webinar about conducting user interviews in November, and we'd love for you to join us.
  • Incorporate others into the research process. Don't just tell people what research is and why it's important — show them. Look for opportunities to bring more people into the research process. Invite people to observe sessions so they can experience research firsthand or have them take on the role of the notetaker. Another simple way to make people feel involved is to share findings on an ongoing basis rather than providing a report at the end of the process.

Broaden your perspective while refining your skill set

Our commitment to testing assumptions led us to challenge ourselves to do research on every project. While we're dogmatic about this goal, we're decidedly un-dogmatic about the form our research takes from one project to another. To pursue this goal, we seek to:

  • Expand our understanding. To instill a culture of research at Viget, we've found it necessary to question our assumptions about what research looks like. Books like Erika Hall’s Just Enough Research teach us the range of possible approaches for getting useful user input at any stage of a project, and at any scale. Reflect on any methodological biases that have become well-worn paths in your approach to research. Maybe your organization is meticulous about metrics and quantitative data, and could benefit from a series of qualitative studies. Maybe you have plenty of anecdotal and qualitative evidence about your product that could be better grounded in objective analysis. Aim to establish a balanced perspective on your product through a diverse set of research lenses, filling in gaps as you learn about new approaches.
  • Adjust our approach to project constraints. We've found that the only way to consistently incorporate research in our work is to adjust our approach to the context and constraints of any given project. Client expectations, project type, business goals, timelines, budget, and access to participants all influence the type, frequency, and output of our research. Iterative prototype testing of an email editor, for example, looks very different than post-launch qualitative studies for an editorial website. While some projects are research-intensive, short studies can also be worthwhile.
  • Reflect on successes and shortcomings. We have a longstanding practice of holding post-project team retrospectives to reflect on and document lessons for future work. Research has naturally come up in these conversations, and many of the lessons we've discussed there appear in this very post. As an agency with a diverse set of clients, it's been important for us to understand what types of research work for what types of clients, and when. Make sure to take time to ask these questions after projects. Mid-project retrospectives can be beneficial too, especially on long engagements, though it's hard to see the forest when you're deep in the weeds.

Streamline qualitative research processes 🚄

Learning to be more efficient at planning, conducting, and analyzing research has helped us overturn the idea that some projects merit research while others don't. Remote moderated usability tests are one of our preferred methods, yet, in our experience, the biggest obstacle to incorporating these tests isn't the actual moderating or analyzing, but the overhead of acquiring and scheduling participants. While some agencies contract out the work of recruiting, we've found it less expensive and more reliable to collaborate with our clients to find the right people for our tests. That said, here are some recommendations for holding efficient qualitative tests:

  • Know your tools ahead of time. We use a number of tools to plan, schedule, annotate, and analyze qualitative tests (we're inveterate spreadsheet users). Learn your tools beforehand, especially if you're trying something new. Tools should fade into the background during tests, and Reframer does this nicely.
  • Establish a recruiting process. When working with clients to find participants, we'll often provide an email template tailored to the project for them to send to existing or potential users of their product. This introductory email contains a screener that asks a few project-related demographic or usage questions and provides us with participant email addresses, which we use to follow up with a link to a scheduling tool. Once this process is established, the project manager ensures that the UX designer on the team has a regular flow of participants. The recruiting process doesn't take care of itself – participants cancel, or reschedule, or sometimes don't respond at all – yet establishing an approach ahead of time allows you, the researcher, to focus on the research in the midst of the project.
  • Start recruiting early. Don't wait until you've finished writing a testing script to begin recruiting participants. Once you determine the aim and focal points of your study, recruit accordingly. Scripts can be revised and approved in the meantime.

Be proactive about making research happen 🤸

As a generalist design agency, we work with clients whose industries and products vary significantly. While some clients come to us with clear research priorities in mind, others treat research as an afterthought. Rare, however, is the client who is actively opposed to researching their product. More often than not, budget and timelines are the limiting factors. So we try not to make research an ordeal, but instead treat it as part of our normal process even if a client hasn't explicitly asked for it. Common-sense perspectives like Jakob Nielsen's classic "Discount Usability for the Web" remind us that some research is always better than none, and that worthwhile research can be done even with limited time and budget. We aren't pushy about research, of course, but instead try to find a way to make it happen when it isn't a definite priority.

World Usability Day is coming up on November 9, so now is a great time to stop and reflect on how you approach research and to brainstorm ways to improve your process. The tips above reflect some of the lessons we’ve learned at Viget as we’ve tried to improve our own process. We’d love to hear about approaches you’ve used as well.


Card descriptions: Testing the effect of contextual information in card sorts

The key purpose of running a card sort is to learn something new about how people conceptualize and organize the information found on your website. The insights you gain from running a card sort can then help you develop a site structure with content labels or headings that best represent the way your users think about this information. Card sorts are, in essence, a simple technique; however, it's the details of the sort that can determine the quality of your results.

Adding context to cards in OptimalSort – descriptions, links and images

In most cases, each item in a card sort has only a short label, but there are instances where you may wish to add additional context to the items in your sort. Currently, the cards tab in OptimalSort allows you to include a tooltip description, add a link within that description, or format the card as an image (with or without a label).


We generally don’t recommend using tooltip descriptions and links, unless you have a specific reason to do so. It’s likely that they’ll provide your participants with more information than they would normally have when navigating your website, which may in turn influence your results by leading participants to a particular solution.

Legitimate reasons that you may want to use descriptions and links include situations where it’s not possible or practical to translate complex or technical labels (for example, medical, financial, legal or scientific terms) into plain language, or if you’re using a card sort to understand your participants’ preferences or priorities.

If you do decide to include descriptions in your sort, it’s important that you follow the same guidelines that you would otherwise follow for writing card labels. They should be easy for your participants to understand and you should avoid obvious patterns, for example repeating words and phrases, or including details that refer to the current structure of the website.

A quick survey of how card descriptions are used in OptimalSort

I was curious to find out how often people were including descriptions in their card sorts, so I asked our development team to look into this data. It turns out that around 15% of cards created in OptimalSort have at least some text entered in the description field. In order to dig into the data a bit further, both Ania and I reviewed a random sample of recent sorts and noted how descriptions were being used in each case.

We found that, of the descriptions we reviewed, 40% (6% of the total cards) had text that should not have impacted the sort results. Most often, these cards simply had the card label repeated in the description (to be honest, we're not entirely sure why so many descriptions are used this way! But it's now on our roadmap to stop this from happening — stay tuned!). Approximately 20% (3% of the total cards) used descriptions to add context without obviously leading participants, while the remaining 40% had descriptions that may well have led to biased results. On occasion, this included linking to the current content or using what we assumed to be the current top-level heading within the description.

[Chart: how card descriptions are used in OptimalSort]

Testing the effect of card descriptions on sort results

So, how much influence could potentially leading card descriptions have on the results of a card sort? I decided to put it to the test by running a series of card sorts to compare the effect of different descriptions. As I also wanted to test the effect of linking card descriptions to existing content, I had to base the sort on a live website. In addition, I wanted to make sure that the card labels and descriptions were easily comprehensible by a general audience, but not so familiar that participants were highly likely to sort the cards in a similar manner.

I selected the government immigration website New Zealand Now as my test case. This site, which provides information for prospective and new immigrants to New Zealand, fit the above criteria and was likely unfamiliar to potential participants.

[Screenshot: navigating the New Zealand Now website]

When I reviewed the New Zealand Now site, I found that the top level navigation labels were, to me personally, clear and easy to understand. Such clarity is especially important when much of your target audience is likely to be non-native English speakers! On the whole, the second level headings were also well-labeled, which meant they should translate into cards that participants could group relatively easily.

There were, however, a few headings such as “High quality” and “Life experiences”, both found under “Study in New Zealand”, which become less clear when removed from the context of their current location in the site structure. These headings would be particularly useful to include in the test sorts, as I predicted that participants would be more likely to rely on card descriptions in the cases where the card label was ambiguous.


I selected 30 headings to use as card labels from under the sections “Choose New Zealand”, “Move to New Zealand”, “Live in New Zealand”, “Work in New Zealand” and “Study in New Zealand” and tweaked the language slightly, so that the labels were more generic.


I then created four separate sorts in OptimalSort:

Round 1: No description: Each card showed a heading only — this functioned as the control sort


Round 2: Site section in description: Each card showed a heading with the site section in the description


Round 3: Short description: Each card showed a heading with a short description — these were taken from the New Zealand Now topic landing pages


Round 4: Link in description: Each card showed a heading with a link to the current content page on the New Zealand Now website


For each sort, I recruited 30 participants. Each participant could only take part in one of the sorts.

What the results showed

An interesting initial finding was that when we queried the participants following the sort, only around 40% said they noticed the tooltip descriptions and even fewer participants stated that they had used them as an aid to help complete the sort.

[Chart: how many participants noticed and used the tooltip descriptions]

Of course, what people say they do does not always reflect what they do in practice! To measure the effect that different descriptions had on the results of this sort, I compared how frequently cards were sorted with other cards from their respective site sections across the different rounds.

Let's take a look at the "Study in New Zealand" section mentioned above. Out of the five cards in this section, "Where & what to study", "Everyday student life" and "After you graduate" were sorted pretty consistently, regardless of whether a description was provided or not. The following chart shows the average frequency with which each card was sorted with other cards from this section. For example, in the control round, "Where & what to study" was sorted with "After you graduate" 76% of the time and with "Everyday student life" 70% of the time, but was sorted with "Life experiences" or "High quality" each only 10% of the time. This meant that the average sort frequency for this card was about 42%.

[Chart: average sort frequency of each "Study in New Zealand" card with the other cards in its section, by round]
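If you want to compute this metric from your own raw results, here's a minimal sketch of the calculation in Python. The data layout (each participant's sort represented as a list of card groups) and all names are illustrative assumptions, not an OptimalSort export format:

    from itertools import combinations
    from collections import Counter

    def co_sort_frequencies(sorts):
        """For each pair of cards, the share of participants who placed
        both cards in the same group."""
        pair_counts = Counter()
        for groups in sorts:  # one participant's sort: a list of card groups
            for group in groups:
                for pair in combinations(sorted(group), 2):
                    pair_counts[pair] += 1
        return {pair: n / len(sorts) for pair, n in pair_counts.items()}

    def average_sort_frequency(card, section_cards, freqs):
        """Average co-sort frequency of one card with the other cards
        from its site section."""
        others = [c for c in section_cards if c != card]
        return sum(freqs.get(tuple(sorted((card, o))), 0.0) for o in others) / len(others)

    # e.g., for the control round above: 0.76, 0.70, 0.10 and 0.10 average to ~0.42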

On the other hand, the cards "High quality" and "Life experiences" were sorted much less frequently with other cards in this section, with the exception of the second sort, which included the site section in the description. These results suggest that including the existing site section in the card description did influence how participants sorted these cards — confirming our prediction! Interestingly, this round had the fewest participants who stated that they used the descriptions to help them complete the sort (only 10%, compared to 40% in round 3 and 20% in round 4).

Also of note is that adding a link to the existing content did not seem to increase the likelihood that cards were sorted with other cards from the same section. Reasons for this could include that participants did not want to navigate to another website (due to time-consciousness in completing the task, or concern that they'd lose their place in the sort) or simply that it can be difficult to open a link from the tooltip pop-up.

What we can take away from these results

This quick investigation into the impact of descriptions illustrates some of the intricacies of using additional context in your card sorts, and why this should always be done with careful consideration. It's interesting that we correctly predicted some of these results, but that in this case, other uses of the description had little effect at all. The results also serve as a good reminder that participants can often be influenced by factors they don't even recognise themselves!

If you do decide to use card descriptions in your card sorts, here are some guidelines that we recommend you follow:

  • Avoid repeating words and phrases; participants may sort cards by pattern-matching rather than by the actual content
  • Avoid alluding to a predetermined structure, for example by including references to the current site structure
  • If it's important that participants use the descriptions to complete the sort, mention this in your task instructions. It may also be worth asking a post-survey question to confirm whether or not they used them

We'd love to hear your thoughts on how we tested the effects of card descriptions and the results that we got. Would you have done anything differently? Have you ever completed a card sort only to realize later that you'd inadvertently biased your results? Or have you used descriptions in your card sorts to meet a genuine need? Do you think there's a case to make descriptions more obvious than just a tooltip, so that when they are used legitimately, most participants don't miss this information?

Let us know by leaving a comment!


A quick analysis of feedback collected with OptimalSort

Card sorting is an invaluable tool for understanding how people organize information in their minds, making websites more intuitive and content easier to navigate. It's a useful method outside of information architecture and UX research, too: it can serve as a prioritization technique, or be used in the more traditional social-science sense. For example, it's handy in psychology, sociology or anthropology to inform research and deepen our understanding of how people conceptualize information.

The introduction of remote card sorting has provided many advantages, making it easier than ever to conduct your own research. Tools such as our very own OptimalSort allow you to quickly and easily gather findings from a large number of participants from all around the world. Not having to organize moderated, face-to-face sessions gives researchers more time to focus on their work, and easier access to larger data sets.

One of the main disadvantages of remote card sorting is that it eliminates the opportunity to dive deeper into the choices made by your participants. Human conversation is a great thing, and when conducting a remote card sort with users who could potentially be on the other side of the world, opportunities for participants to provide direct feedback and voice their opinions are severely limited.

Your survey design may not be perfect. The labels you provide your participants may be incorrect, confusing or redundant. Your users may have their own ideas of how you could improve your products or services beyond what you are trying to capture in your card sort. People may be more willing to provide their feedback than you realize, and limiting their insights to a simple card sort may not capture all that they have to offer. So, how can you run an unmoderated, remote card sort while doing your best to mitigate this potential loss of insight?

A quick look into the data

In an effort to evaluate the usefulness of the existing "Leave a comment" feature in OptimalSort, I recently asked our development team to pull out some data. You might be asking, "There's a comment box in OptimalSort?" If you've never noticed this feature, I can't exactly blame you. It's relatively hidden away as an unassuming hyperlink in the top right corner of your card sort.


Comments left by your participants can be viewed in the “Participants” tab in your results section, and are indicated by a grey speech bubble.


The history of the button is unknown even to long-time Optimal Workshop team members. The purpose of the button is also unspecified. "Why would anyone leave a comment while participating in a card sort?", I found myself wondering.

As it turns out, 133,303 comments have been left by participants. This means 133,303 insights, opinions, critiques or frustrations. Additionally, these numbers only represent the participants who noticed the feature in the first place. Considering the current button can easily be missed when focusing on the task at hand, I can't help but wonder how this number might change if we drew more attention to the feature.

Breaking down the comments

To avoid having to manually analyze and code 133,303 open text fields, I decided to only spend enough time to decipher any obvious patterns. Luckily for me, this didn’t take very long. After looking at only a hundred or so random entries, four distinct types of comments started to emerge.

  1. This card/group doesn't make sense. Comments related to cards and groups dominate. This is a great thing, as it means that the majority of comments made by participants relate specifically to the task they are completing. For closed and hybrid sorts, comments frequently relate to the predefined categories available, and since the participants most likely to leave a comment are those experiencing issues, the majority of the feedback relates to issues with the category names themselves. Many comments are related to card labels and offer suggestions for improving naming conventions, while many others draw attention to some terms being confusing, unclear or jargony. Comments on task length can also be found, along with reasons why certain cards may be left ungrouped, e.g., "I've left behind items I think the site could do without".
  2. Your organization is awesome for doing this/you're doing it all wrong. A substantial number of participants used the comment box as an opportunity to voice their general feedback on the organization or company running the study. Some of the more positive comments include an appreciation for seeing private companies or public sector organizations conducting research with real users in an effort to improve their services. It's also nice to see many comments related to general enjoyment in completing the task. On the other hand, some participants used the comment box as an opportunity to comment on what other areas of their services should be improved, or what features they would like to see implemented that may otherwise be missed in a card sort, e.g., "Increased, accurate search functionality is imperative in a new system".
  3. This isn’t working for me. Taking a closer look at some of the comments reveals some useful feedback for us at Optimal Workshop, too. Some of the comments relate specifically to UI and usability issues. The majority of these issues are things we are already working to improve or have dealt with. However, for researchers, comments that relate to challenges in using the tool or completing the survey itself may help explain some instances of data variability.
  4. #YOLO, hello, ;) And of course, the unrelated. When you give people the opportunity to leave a comment online, you can expect just about anything in return.

How to make the most of your user insights in OptimalSort

If you're running a card sort, chances are you already place a lot of value in the voice of your users. To capture any additional insights, make sure your participants are aware of the opportunity to provide them. Here are two ways to give your participants a space to voice their feedback:

Adding more context to the “Leave a comment” feature

One way to encourage your participants to leave comments is to promote the use of this feature in your card sort instructions. OptimalSort gives you flexibility to customize your instructions every time you run a survey. By making your participants aware of the feature, or offering ideas around what kinds of comments you may be looking for, you not only make them more likely to use the feature, but also open yourself up to a whole range of additional feedback. An advantage of using this feature is that comments can be added in real time during a card sort, so any remarks can be made as soon as they arise.

Making use of post-survey questions

Adding targeted post-survey questions is the best way to ensure your participants are able to voice any thoughts or concerns that emerged during the activity. Here, you can ask specific questions that touch upon different aspects of your card sort, such as length, labels, categories or any other comments your participants may have. This can not only help you generate useful insights but also inform the design of your surveys in the future.

Make your remote card sorts more human

Card sorts are exploratory by nature. Avoid forcing your participants into choices that may not accurately reflect their thinking by giving them the space to voice their opinions. Providing opportunities to capture feedback opens up the conversation between you and your users, and can lead to surprising insights from unexpected places.



How Andy is using card sorting to prioritize our product improvements

There has been a flurry of new faces in the Optimal Workshop office since the beginning of the year, myself included! One of the more recent additions is Andy (not to be confused with our CEO Andrew) who has stepped into the role of product manager. I caught up with Andy to hear about how he’s making use of OptimalSort to fast-track the process of prioritizing product improvements.

I was also keen to learn more about how he ensures our users are at the forefront throughout the prioritization process.

Only a few weeks in, it's no surprise that the current challenges of the product manager role are quite different to what they'll be in a year or two. Aside from learning all he can about Optimal Workshop and our suite of tools, Andy says that the greatest task he currently faces is prioritizing the infinite list of things that we could do. There's certainly no shortage of high value ideas!

Product improvement prioritization: a plethora of methods

So, what's the best approach for prioritization, especially when everything is brand new to you? Andy says that despite his experience working with a variety of people and different techniques, he's found that there's no single, perfect answer. Factors that could favor one technique over another include company strategy, the type of product or project, team structure, and time constraints. Just to illustrate the range of potential approaches, this guide by Daniel Zacarias, a freelance product management consultant, discusses no fewer than 20 popular product prioritization techniques! Above all, a product manager should never make decisions in isolation; you can only be successful if you bring in experts on the business direction and the technical considerations — and of course your users!

Fact-packed prioritization with card sorting

For his first pass at tackling the lengthy list of improvements, Andy settled on running a prioritization exercise in OptimalSort. As an added benefit, this gave him the chance to familiarize himself with one of Optimal Workshop's tools from a user's perspective.

In preparation for the sort, Andy ran quick interviews with members across the Optimal Workshop team in order to understand what they saw as the top priority features. The Customer Success and User Research teams, in particular, were encouraged to contribute suggestions directly from the wealth of user feedback that they receive.

From these suggestions, Andy eliminated any duplicates and created a list of 30 items that covered the top priority features. He then created a closed card sort with these items and asked the whole team to rank cards as 'Most important', 'Very important', and 'Important'. He also added the options 'Not sure what these cards mean' and 'No opinion on these cards'.

He provided descriptions to give a short explanation of each feature, and set the survey options so that participants were required to sort all cards. Requiring every card to be sorted isn't strictly necessary for an internal prioritization sort such as this, where participants are already motivated to provide feedback, but it does ensure that you gather as much feedback as possible.

The benefit of using OptimalSort to prioritize product improvements was that it allowed Andy to efficiently tap into the collective knowledge of the whole team. He admits that he could have approached the activity by running a series of more focused, detailed meetings with key decision makers, but this wouldn't have allowed him to engage the whole team and may have taken longer to arrive at similar insights.

Ranking the results of the prioritization sort 🥇

Following an initial review of the prioritization sort results, there were some clear areas of agreement across the team. Topping the lot was implementing the improvements to Reframer that our research has identified as critical. Other clear priorities were increasing the functionality of Chalkmark and streamlining the process of upgrading surveys, so that users can carry this out themselves.

Outside of this, the other priorities were not quite as evident. Andy decided to apply a two-tiered approach for ranking the sorted cards by including:

  1. any card that was placed in the ‘Most important’ group by at least two people,
  2. and any card whose weighted priority was 20 or greater. (He calculated the weighted priority by multiplying the number of times a card was placed in 'Most important', 'Very important' and 'Important' by four, two and one, respectively, and summing the results; see the sketch below.)
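As an illustration, here's a minimal Python sketch of that scoring rule. The data layout (a per-card tally of category placements) and the names are hypothetical, not taken from OptimalSort's exports:

    # Category weights described in the article.
    WEIGHTS = {"Most important": 4, "Very important": 2, "Important": 1}

    def weighted_priority(placements):
        """placements: how many participants put the card in each category,
        e.g. {"Most important": 3, "Very important": 5, "Important": 2} -> 24."""
        return sum(WEIGHTS.get(category, 0) * n for category, n in placements.items())

    def shortlist(cards):
        """cards: {card name: placements}. Keep a card if at least two people
        ranked it 'Most important', or its weighted priority is 20 or more."""
        return [
            name for name, placed in cards.items()
            if placed.get("Most important", 0) >= 2 or weighted_priority(placed) >= 20
        ]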

Applying these criteria to the sort results left Andy with a solid list of 15 priority features to take forward. While there's still more work to be done in terms of integrating these priorities into the product roadmap, the prioritization sort got Andy to the point where he could start having more useful conversations. In addition, he said the exercise gave him confidence in understanding the areas that need more investigation.

Improving the process of prioritizing with card sorting 🃏

Is there anything that we’d do differently when using card sorting for future prioritization exercises? For our next exercise, Andy recommended ensuring each card represented a feature of a similar size. For this initial sort, some cards described smaller, specific features, while others were larger and less well-defined, which meant it could be difficult to compare them side by side in terms of priority.

Thinking back, a 'Not important' category could also have been useful. Andy had initially shied away from including one, as each card had come from at least one team member's top five priorities. He now recognizes it could actually have encouraged good debate if some team members thought a particular feature was a priority while others ranked it as 'Not important'.

For the purposes of this sort, he didn’t make use of the card ranking feature which shows the order in which each participant sorted a card within a category. However, he thinks this would be invaluable if he was looking to carry out finer analysis for future prioritization sorts.

Prioritizing with a public roadmap 🛣️

While this initial prioritization sort included indirect user feedback via the Customer Success and User Research teams, it would also be invaluable to run a similar exercise with users themselves. In the longer-term, Andy mentioned he’d love to look into developing a customer-facing roadmap and voting system, similar to those run by companies such as Atlassian.

"It’s a product manager’s dream to have a community of highly engaged users and for them to be able to view and directly feedback on the development pipeline. People then have visibility over the range of requests, can see how others’ receive their requests and can often answer each other’s questions," Andy explains.

Have you ever used OptimalSort for a prioritization exercise? What other methods do you use to prioritize what needs to be done? Have you worked somewhere with a customer-facing product road map and how did this work for you? We’d love to learn about your ideas and experience, so leave us a comment below!


Which comes first: card sorting or tree testing?

"Dear Optimal Workshop, I want to test the structure of a university website (well, certain sections anyway). My gut instinct is that it's pretty 'broken'. Lots of sections feel like they're in the wrong place. I want to test my hypotheses before proposing a new structure. I'm definitely going to do some card sorting, and was planning a mixture of online and offline. My question is about when to bring in tree testing. Should I do this first to test the existing IA? Or is card sorting sufficient? I do intend to tree test my new proposed IA in order to validate it, but is it worth doing it upfront too?" — Matt

Dear Matt,

Ah, the classic chicken or the egg scenario: Which should come first — tree testing or card sorting?

It's a question that many researchers often ask themselves, but I'm here to help clear the air! You should always use both methods when changing up your information architecture (IA) in order to capture the most information.

Tree testing and card sorting, when used together, can give you fantastic insight into the way your users interact with your site. First of all, I’ll run through some of the benefits of each testing method.

What is card sorting and why should I use it?

Card sorting is a great method to gauge the way in which your users organize the content on your site. It helps you figure out which things go together and which things don’t. There are two main types of card sorting: open and closed.

Closed card sorting involves providing participants with pre-defined categories into which they sort their cards. For example, you might be reorganizing the categories for your online clothing store for women. Your cards would have all the names of your products (e.g., “socks”, “skirts” and “singlets”) and you also provide the categories (e.g.,“outerwear”, “tops” and “bottoms”).

Open card sorting involves providing participants with cards and leaving them to organize the content in a way that makes sense to them. It’s the opposite to closed card sorting, in that participants dictate the categories themselves and also label them. This means you’d provide them with the cards only — no categories.

Card sorting, whether open or closed, is very user focused. It involves a lot of thought, input, and evaluation from each participant, helping you to form the structure of your new IA.

What is tree testing and why should I use it?

Tree testing is a fantastic way to determine how your users are navigating your site and how they’re finding information. Your site is organised into a tree structure, sorted into topics and subtopics, and participants are provided with some tasks that they need to perform. The results will show you how your participants performed those tasks, if they were successful or unsuccessful, and which route they took to complete the tasks. This data is extremely useful for creating a new and improved IA.

Tree testing is an activity that requires participants to seek information, which is quite the contrast to card sorting — an activity that requires participants to sort and organize information. Each activity requires users to behave in different ways, so each method will give its own valuable results.

Should you run a card or tree test first?

In this scenario, I’d recommend running a tree test first in order to find out how your existing IA currently performs. You said your gut instinct is telling you that your existing IA is pretty “broken”, but it’s good to have the data that proves this and shows you where your users get lost.

An initial tree test will give you a benchmark to work with — after all, how will you know your shiny, new IA is performing better if you don’t have any stats to compare it with? Your results from your first tree test will also show you which parts of your current IA are the biggest pain points and from there you can work on fixing them. Make sure you keep these tasks on hand — you’ll need them later!

Once your initial tree test is done, you can start your card sort, based on the results from your tree test. Here, I recommend conducting an open card sort so you can understand how your users organize the content in a way that makes sense to them. This will also show you the language your participants use to name categories, which will help you when you’re creating your new IA.

Finally, once your card sort is done you can conduct another tree test on your new, proposed IA. By using the same (or very similar) tasks from your initial tree test, you will be able to see that any changes in the results can be directly attributed to your new and improved IA.

Once your second test has concluded, you can compare its results with the performance of the tree test on your original information architecture — hopefully it is much better now!
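Here's a hedged sketch of that before-and-after comparison, assuming you've exported each tree test's per-task outcomes as success/failure flags (the data layout and names are illustrative, not a specific tool's export format):

    def success_rate(outcomes):
        """outcomes: list of booleans, one per participant, for a single task."""
        return sum(outcomes) / len(outcomes)

    def compare_tree_tests(baseline, redesign):
        """baseline / redesign: {task: [True/False per participant]}.
        Returns the change in success rate per task (new IA minus old IA),
        for the tasks that were run in both tests."""
        return {
            task: success_rate(redesign[task]) - success_rate(baseline[task])
            for task in baseline if task in redesign
        }

A positive number for a task suggests the new structure makes that information easier to find; consistent negatives show you where to iterate again.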


Card Sorting outside UX: How I use online card sorting for in-person sociological research

Hello, my name is Rick and I'm a sociologist. All together, "Hi, Rick!" Now that we've got that out of the way, let me tell you about how I use card sorting in my research. I'll soon be running a series of in-person, moderated card sorting sessions. This article covers why card sorting is an integral part of my research, and how I've designed the study to answer specific questions about two distinct parts of society.

Card sorting to establish how different people comprehend their worlds

Card sorting, or pile sorting as it's sometimes called, has a long history in anthropology, psychology and sociology. Anthropologists, in particular, have used it to study how different cultures think about various categories. Researchers in the 1970s conducted card sorts to understand how different cultures categorize things like plants and animals. Sociologists of that era also used card sorts to examine how people think about different professions and careers. And since then, scholars have continued to use card sorts to learn about similar categorization questions.

In my own research, I study how different groups of people in the United States imagine the category of 'religion'. As those crazy 1970s anthropologists showed, card sorting is a great way to understand how people cognitively understand particular social categories. So, in particular, I'm using card sorting in my research to better understand how groups of people with dramatically different views understand 'religion' — namely, evangelical Christians and self-identified atheists. Think of it like this. Some people say that religion is the bedrock of American society.

Others say that too much religion in public life is exactly what's wrong with this country. What's not often considered is that these two groups often understand the concept of 'religion' in very different ways. It's like the group of blind men and the elephant: one touches the trunk, one touches the ears, and one touches the tail. All three come away with very different ideas of what an elephant is. So you could say that I study how different people experience the 'elephant' of religion in their daily lives. I'm doing so using primarily in-person moderated sorts on an iPad, which I'll describe below.

How I generated the words on the cards

The first step in the process was to generate lists of relevant terms for my subjects to sort. Unlike in UX testing, where cards for sorting might come from an existing website, in my world these concepts first have to be mined from the group of people being studied. So the first thing I did was have members of both atheist and evangelical groups complete a free listing task. In a free listing task, participants simply list as many words as they can that meet the criteria given. Sets of both atheist and evangelical respondents were given the instructions: "What words best describe 'religion'? Please list as many as you can." They were then also asked to list words that describe 'atheism', 'spirituality', and 'Christianity'.

I took the lists generated and standardized them by combining synonyms. For example, some of my atheists used words like 'ancient', 'antiquated', and 'archaic' to describe religion. So I combined all of these words into the one that was mentioned most: 'antiquated'. By doing this, I created a list of the most common words each group used to describe each category. Doing this also gave my research another useful dimension, ideal for exploring alongside my card sorting results. Free lists can be analyzed themselves using statistical techniques like multi-dimensional scaling, so I used this technique for a preliminary analysis of the words evangelicals used to describe 'atheism':

[MDS plot: the words evangelicals used to describe 'atheism']
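For readers curious about the mechanics, here's a rough sketch of one way to run that kind of analysis in Python with scikit-learn. Rick doesn't say which software he used; the co-occurrence-based dissimilarity measure and all names below are assumptions for illustration:

    import numpy as np
    from sklearn.manifold import MDS

    def mds_from_free_lists(free_lists, vocab):
        """free_lists: one list of (already standardized) words per respondent.
        Words that appear together on many respondents' lists are treated
        as similar; MDS then maps each word to a point in 2-D space."""
        index = {word: i for i, word in enumerate(vocab)}
        co = np.zeros((len(vocab), len(vocab)))
        for words in free_lists:
            present = [index[w] for w in set(words) if w in index]
            for i in present:
                for j in present:
                    co[i, j] += 1
        similarity = co / len(free_lists)   # share of lists containing both words
        dissimilarity = 1.0 - similarity
        np.fill_diagonal(dissimilarity, 0.0)
        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        return mds.fit_transform(dissimilarity)  # one (x, y) point per word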

Now that I'm armed with these lists of words that atheists and evangelicals used to describe religion, atheism etc., I'm about to embark on phase two of the project: the card sort.

Why using card sorting software is a no-brainer for my research

I’ll be conducting my card sorts in person, for various reasons. I have relatively easy access to the specific population that I’m interested in, and for the kind of academic research I’m conducting, in-person activities are preferred. In theory, I could just print the words on some index cards and conduct a manual card sort, but I quickly realized that a software solution would be far preferable, for a bunch of reasons.

First of all, it's important for me to conduct interviews in coffee shops and restaurants, and an iPad on the table is, to put it mildly, more practical than a table covered in cards — no space for the teapot after all.

Second, using software eliminates the need for manual data entry on my part. Not only is manual data entry a time consuming process, but it also introduces the possibility of data entry errors which may compromise my research results.

Third, while the bulk of the card sorts are going to be done in person, having an online version will enable me to scale the project up after the initial in-person sorts are complete. The atheist community, in particular, has a significant online presence, making a web solution ideal for additional data collection.

Fourth, OptimalSort gives the option to redirect respondents to any webpage after they complete a sort, which allows multiple card sorts to be daisy-chained together. It also enables card sorts to be easily combined with complex survey instruments from other providers (e.g. Qualtrics or Survey Monkey), so card sorting data can be gathered in conjunction with other methodologies.
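To make the daisy-chaining concrete, here's a small illustrative sketch. The URLs and the 'pid' query parameter are hypothetical; in practice each study's completion redirect is configured in the tool's settings to point at the next link, and an identifier carried through the chain lets you join the responses afterwards:

    from urllib.parse import urlencode

    PID = "p017"  # hypothetical participant identifier carried through the chain

    # Hypothetical study URLs; each study's completion redirect points at the
    # next one, so a single entry link runs the whole sequence:
    # religion sort -> atheism sort -> follow-up survey questions.
    chain = [
        f"https://example.com/religion-sort?{urlencode({'pid': PID})}",
        f"https://example.com/atheism-sort?{urlencode({'pid': PID})}",
        f"https://example.com/follow-up-survey?{urlencode({'pid': PID})}",
    ]
    entry_link = chain[0]  # hand this to the respondent; redirects do the rest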

Finally, and just as important, doing card sorts on a tablet is more fun for participants. After all, who doesn't like to play with an iPad? If respondents enjoy the unique process of the experiment, this is likely to actually improve the quality of the data, and respondents are more likely to reflect positively on the experience, making recruitment easier. And a fun experience also makes it more likely that respondents will complete the exercise.

What my in-person, on-tablet card sorting research will look like

Respondents will be handed an iPad Air with 4G data capability. While the venues where the card sorts will take place usually have public Wi-Fi networks available, these networks are not always reliable, so the cellular data capabilities are needed as a back-up (and my pre-testing has shown that OptimalSort works on cellular networks too).

The iPad’s screen orientation will be locked to landscape and multi-touch functions will be disabled to prevent respondents from accidentally leaving the testing environment. In addition, respondents will have the option of using a rubber tipped stylus for ease of sorting the cards. While I personally prefer to use a microfiber tipped stylus in other applications, pre-testing revealed that an old fashioned rubber tipped stylus was easier for sorting activities.


When the respondent receives the iPad, the card sort's first page, with general instructions, will already be open on the tablet in the third party browser Perfect Web. A third party browser is necessary because it is best to run OptimalSort locked in full screen mode, both for aesthetic reasons and to keep the screen simple and uncluttered for respondents. Perfect Web is currently the best choice in the ever shifting app landscape.


I'll give respondents their instructions and then go to another table to give them privacy (because who wants the creepy feeling of some guy hanging over you as you do stuff?). Altogether, respondents will complete two open card sorts and a few survey-style questions, all chained together by redirect URLs. First, they'll sort 30 cards into groups based on how they perceive 'religion', and name the categories they create. Then, they'll complete a similar card sort, this time based on how they perceive 'atheism'.

Both atheists and evangelicals will receive a mixture of some of the top words that both groups generated in the earlier free listing tasks. To finish, they'll answer a few questions that will provide further data on how they think about 'religion'. After I've conducted these card sorts with both of my target populations, I'll analyze the resulting data on its own and also in conjunction with qualitative data I've already collected via ethnographic research and in-depth interviews. I can't wait, actually. In a few months I'll report back and let you know what I've found.

