Online or offline card sorting?

From time to time I hear people say that they prefer online card sorting to offline card sorting, or vice versa. I think they complement each other (and you should do both)!

Let’s start with a quick rundown on the major differences in outcomes from moderated and unmoderated card sorting.

Remote & Unmoderated Card Sorting (Online):

  • Unlimited scale. You can have as many participants as required to get the answer you need.
  • Much closer to “fire & forget”. Set up a study, fire it out to potential participants, enjoy the afternoon in the sun.
  • Relatively cheap. Compared to the cost of having a facilitator, note taker, clients on site, reception, coffees, compensation… remote testing is clearly cheaper to conduct.
  • It can be difficult to know why things happen. Qualitative results are not nearly as apparent because participants are not facilitated, moderated, or steered, and are often not recorded. You don’t get to hear them thinking out loud or discussing decisions.
  • Great for gathering quantitative results. If you have a hunch, whether your own or one arising from qualitative tests, then remote, unmoderated user testing is a great way to back it up with some numbers.

In-Person & Moderated Card Sorting (Offline):

  • Limited scale. You can only bring in as many participants as you can afford in terms of time and budget.
  • Relatively heavy investment per participant. Each participant will have associated costs and will create work for you. (I’m not saying it isn’t worthwhile; it generally is. I’m just pointing out the differences.)
  • Great for gathering qualitative results. This is where you get insight into how people feel about what they’re doing or saying in the study.
  • It is usually too expensive to get quantitative results from moderated testing. Yes, you will undoubtedly uncover most of the problems and convince yourself that something must be done, but many situations call for more.

So what should you do?

I recommend that people conduct one to five offline, in-person, moderated card sorts to get a good understanding for themselves of how other people would organise their content and the rationale for it. Then I suggest they conduct an online study using OptimalSort to put some numbers behind the hunches. By the way, I don’t mean to belittle any professional observations by calling them hunches; I’m just making the point that however duly convinced you might be, it is usually not unreasonable for a stakeholder to want more data if a change will impact thousands or potentially millions of other people (or dollars, for that matter).

If you are fortunate enough to have crystal-clear direction from your qualitative research and can propose an immediate way forward, then I suggest you skip the online card sort and move directly to validating your proposed new information architecture using tree testing. Either way, you should validate your chosen labels and content hierarchy using Treejack after a card sort.

We believe there is so much value in both qualitative and quantitative research techniques that we want you to do both. To assist you with this, we have recently implemented an important change to OptimalSort: you can now print your OptimalSort cards (from a generated PDF) for moderated, in-person, paper-based card sorting, and easily get the results back into OptimalSort for analysis alongside your quantitative research data. Hooray!

Step 1: Print the cards

OptimalSort's cards are printed with crop marks for easy cutting

Step 2: Sort the cards

Please don't analyse this card sort. I just laid them out to look pretty.

Step 3: Scan the groups back into OptimalSort

OptimalSort works with common barcode scanners so that you can quickly get your results into the tool for analysis
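Common barcode scanners typically work in “keyboard wedge” mode: each scan is typed out as a line of text followed by Enter, so reading results back is as simple as reading lines. As a rough sketch of why that makes data entry so fast (the NEXT_GROUP separator code and the line-based workflow are hypothetical illustrations, not OptimalSort’s actual format):

    import sys

    # Hypothetical separator barcode, scanned between piles of cards.
    GROUP_SEPARATOR = "NEXT_GROUP"

    def read_groups(lines):
        """Collect scanned card codes into groups, splitting on the separator."""
        groups, current = [], []
        for line in lines:
            code = line.strip()
            if not code:
                continue  # ignore stray blank lines
            if code == GROUP_SEPARATOR:
                if current:
                    groups.append(current)
                    current = []
            else:
                current.append(code)
        if current:
            groups.append(current)
        return groups

    if __name__ == "__main__":
        # A keyboard-wedge scanner "types" each barcode as a line on stdin.
        for i, group in enumerate(read_groups(sys.stdin), start=1):
            print(f"Group {i}: {', '.join(group)}")

In practice OptimalSort handles all of this for you; the sketch is only meant to show why scanning beats retyping each participant’s sort by hand.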

I’d love to know what you think of this new feature and whether it will be useful to you in your own card sorts. It certainly beats trying to moderate card sorts around a screen or retrospectively entering participant sorts by doing multiple sorts yourself (you know who you are!).

  1. I agree – online and offline complement each other enormously. Personally I prefer conducting open sorts in person, and closed sorts online (with a larger audience).

    The barcode scanning is a cool idea, but when sorting in person I find the number of participants is often low (10-20), so entering data isn’t the bottleneck.

    I love your product, but I wish there were some more analysis options for open sorts. A few years ago I wrote a small script that took results and turned them into an adjacency matrix (normally associated with graphs, although you could think of card sorting results as a graph dataset!). When the matrix is reordered so that similar rows and columns are next to each other, it’s a great visualization of where there are overlaps in people’s mental models of content.
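    To sketch the idea (this isn’t my original script; the sample data and the average-linkage ordering below are just illustrative), assuming the results arrive as one mapping of group names to cards per participant:

        from itertools import combinations

        import numpy as np
        from scipy.cluster.hierarchy import leaves_list, linkage
        from scipy.spatial.distance import squareform

        # Illustrative open-sort results: one dict per participant mapping
        # their group names to the cards placed in each group.
        results = [
            {"Account": ["Login", "Profile"], "Help": ["FAQ", "Contact"]},
            {"My stuff": ["Login", "Profile", "FAQ"], "Support": ["Contact"]},
        ]

        cards = sorted({c for sort in results for cs in sort.values() for c in cs})
        index = {c: i for i, c in enumerate(cards)}

        # Adjacency (co-occurrence) matrix: how often each pair of cards
        # landed in the same group across participants.
        matrix = np.zeros((len(cards), len(cards)), dtype=int)
        for sort in results:
            for group in sort.values():
                for a, b in combinations(group, 2):
                    matrix[index[a], index[b]] += 1
                    matrix[index[b], index[a]] += 1

        # Reorder rows/columns so similar cards sit next to each other,
        # using the leaf order of an average-linkage clustering on the
        # distance "participants minus co-occurrences".
        distance = (len(results) - matrix).astype(float)
        np.fill_diagonal(distance, 0)
        order = leaves_list(linkage(squareform(distance), method="average"))

        print([cards[i] for i in order])
        print(matrix[np.ix_(order, order)])

    Reordered like this, blocks along the diagonal show where people’s mental models overlap.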

    Keep up the great work :)

    Mathew Sanders 13/12/2011
    • Hi Mathew, thanks for the compliments! I’m glad to hear you’re using a blend of online and offline techniques.
      We added barcode scanning because we found a surprising number of people were entering the data from paper-based open sorts into OptimalSort by sitting down and recreating each participant’s sort themselves using the participant sorting interface. So we thought about the most efficient way of getting result data from a paper-based sort into the tool, and this is what we did.
      Have you tried using OptimalSort for an open sort in the past year or so? There are already a number of good options for open sort analysis, and some of them may be new to you. Thanks for the note about the adjacency matrix though; I will definitely look into it.
      As for closed sorts, if your use case is validating some top-level content categories, I would suggest using Treejack for this purpose instead. We consider open card sorting useful for defining an IA, and tree testing for validating or refining one.

      Andrew Mayfield 13/12/2011