December 4, 2018

How to interpret your card sort results Part 1: open and hybrid card sorts

Cards have been created, sorted and sorted again. The participants are all finished and you’re left with a big pile of awesome data that will help you improve the user experience of your information architecture. Now what?

Whether you’ve run an open, hybrid or closed card sort online using an information architecture tool or you’ve run an in-person (moderated) card sort, it can be a bit daunting trying to figure out where to start the card sort analysis process.

About this guide

This two-part guide will help you on your way! For Part 1, we’re going to look at how to interpret and analyze the results from open and hybrid card sorts.

  • In open card sorts, participants sort cards into categories that make sense to them and they give each category a name of their own making.
  • In hybrid card sorts, some of the categories have already been defined for participants to sort the cards into but they also have the ability to create their own.

Open and hybrid card sorts are great for generating ideas for category names and labels, and for understanding not only how your users expect your content to be grouped but also what they expect those groups to be called.

In both parts of this series, I’m going to be talking a lot about interpreting your results using Optimal Workshop’s online card sorting tool, OptimalSort, but most of what I’m going to share is also applicable if you’re analyzing your data using a spreadsheet or another tool.

Understanding the two types of analysis: exploratory and statistical

Similar to qualitative and quantitative methods, exploratory and statistical analysis in card sorting are two complementary approaches that work together to provide a detailed picture of your results.

  • Exploratory analysis is intuitive and creative. It’s all about going through the data and shaking it to see what ideas, patterns and insights fall out. This approach works best when you don’t have the numbers (smaller sample sizes) and when you need to dig into the details and understand the ‘why’ behind the statistics.

  • Statistical analysis is all about the numbers: hard data that tells you exactly how many people expected X to be grouped with Y. It’s very useful when you’re dealing with large sample sizes and when you’re identifying similarities and differences across different groups of people.

Depending on your objectives - whether you are starting from scratch or redesigning an existing IA - you’ll generally need to use some combination of both of these approaches when analyzing card sort results. Learn more about exploratory and statistical analysis in Donna Spencer’s book.

Start with the big picture

When analyzing card sort results, start by taking an overall look at the results as a whole. Quickly cast your eye over each individual card sort and just take it all in. Look for common patterns in how the cards have been sorted and in the category names given by participants. Does anything jump out as surprising? Are there similarities or differences between participant sorts? If you’re redesigning an existing IA, how do your results compare to the current state?

If you ran your card sort using OptimalSort, your first port of call will be the Overview and Participants Table presented in the results section of the tool. If you ran a moderated card sort using OptimalSort’s printed cards, now is a good time to double check you got them all. And if you didn’t know about this handy feature of OptimalSort, it’s something to keep in mind for next time!

The Participants Table shows a breakdown of your card sorting data by individual participant. Start by reviewing each individual card sort one by one by clicking on the arrow in the far left column next to the participant numbers.

A screenshot of the individual participant card sort results pop-up in OptimalSort.
Viewing individual participant card sorts in detail.

From here you can easily flick back and forth between participants without needing to close that modal window. Don’t spend too much time on this — you’re just trying to get a general impression of what happened.

Keep an eye out for any card sorts that you might like to exclude from the results, for example participants who have lumped everything into one group and haven’t actually sorted the cards. Don’t worry - excluding or including participants isn’t permanent and can be toggled on or off at any time.

If you have a good number of responses, then the Participant Centric Analysis (PCA) tab (below) can be a good place to head next. It’s great for doing a quick comparison of the different high-level approaches participants took when grouping the cards. The PCA tab provides the most insight when you have lots of results data (30+ completed card sorts) and at least one of the suggested IAs has a high level of agreement among your participants (50% or more agree with at least one IA).

A screenshot of the Participant Centric Analysis (PCA) tab in OptimalSort, showing an example study.
Participant Centric Analysis (PCA) tab for an open or hybrid card sort in OptimalSort.

The PCA tab compares data from individual participants and surfaces the top three ways the cards were sorted. It also gives you some suggestions, based on participant responses, around what these categories could be called, but try not to get too bogged down in those - you’re still just trying to gain an overall feel for the results at this stage. Now is also a good time to take a super quick peek at the Categories tab, as it will also help you spot patterns and identify data that you’d like to dive deeper into a bit later on!

Another really useful visualization tool offered by OptimalSort that will help you build that early, high-level picture of your results is the Similarity Matrix. This diagram helps you spot data clusters, or groups of cards that have been more frequently paired together by your participants, by surfacing them along the edge and shading them in dark blue. It also shows the proportion of times specific card pairings occurred during your study and displays the exact number on hover (below).

A screenshot of the Similarity Matrix tab in OptimalSort, with the results from an example study displaying.
OptimalSort’s Similarity Matrix showing that ‘Flat sandals’ and ‘Court shoes’ were paired by 91% of participants (31 times) in this example study.

In the above screenshot example we can see three very clear clusters along the edge: ‘Ankle Boots’ to ‘Slippers’ is one cluster, ‘Socks’ to ‘Stockings & Hold Ups’ is the next and then we have ‘Scarves’ to ‘Sunglasses’. These clusters make it easy to spot the cards that participants felt belonged together and also provide hard data on how many times that happened.
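
If you’re analyzing raw card sort data yourself in a spreadsheet or script, the pairing percentages behind a similarity matrix are straightforward to compute. Here’s a minimal Python sketch, assuming each participant’s sort is represented as a dictionary mapping their category names to lists of cards (this data shape is an assumption for illustration, not an OptimalSort export format):

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: {category name: [cards]} (assumed shape)
sorts = [
    {"Shoes": ["Ankle Boots", "Slippers"],
     "Hosiery": ["Socks", "Stockings & Hold Ups"]},
    {"Footwear": ["Ankle Boots", "Slippers", "Socks"],
     "Other": ["Stockings & Hold Ups"]},
    {"Shoes": ["Ankle Boots", "Slippers"],
     "Legwear": ["Socks", "Stockings & Hold Ups"]},
]

pair_counts = Counter()
for sort in sorts:
    for cards in sort.values():
        # Count every unordered pair of cards placed in the same group
        for pair in combinations(sorted(cards), 2):
            pair_counts[pair] += 1

n = len(sorts)
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: paired by {count / n:.0%} of participants ({count} of {n})")
```

Each cell of the matrix is simply the number of participants who put that pair of cards in the same group, divided by the total number of participants.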

Next up are the dendrograms. Dendrograms are also great for gaining an overall sense of how similar (or different) your participants’ card sorts were to each other. Found under the Dendrogram tab in the results section of the tool, the two dendrograms are generated by different algorithms, and which one you use depends largely on how many participants you have.

If your study resulted in 30 or more completed card sorts, use the Actual Agreement Method (AAM) dendrogram; if your study had fewer than 30 completed card sorts, use the Best Merge Method (BMM) dendrogram. The AAM dendrogram (see below) shows only factual relationships between the cards and displays scores that tell you precisely that ‘X% of participants in this study agree with this exact grouping’.

In the below example, the study shown had 34 completed card sorts, and the AAM dendrogram shows that 77% of participants agreed that the cards highlighted in green belong together; a suggested name for that group is ‘Bling’. The tooltip surfaces one of the possible category names for this group and, as demonstrated here, it isn’t always the best or ‘recommended’ one. Take it with a grain of salt and be sure to thoroughly check the rest of your results before committing!

A screenshot of the Actual Agreement Method (AAM) dendrogram in OptimalSort.
AAM Dendrogram in OptimalSort.
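
The AAM score itself is easy to reason about: a grouping ‘agrees at X%’ when X% of participants placed every card in that group into a single category together. Here’s a small sketch of that calculation, reusing the `sorts` structure from the earlier similarity matrix example (this mirrors what the score expresses, not necessarily OptimalSort’s exact implementation):

```python
def actual_agreement(group, sorts):
    """Fraction of participants who placed every card in `group`
    into one category together."""
    group = set(group)
    agree = sum(
        1 for sort in sorts
        if any(group <= set(cards) for cards in sort.values())
    )
    return agree / len(sorts)

# e.g. 1.0 means 100% of participants kept these cards together
print(actual_agreement(["Ankle Boots", "Slippers"], sorts))
```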

The BMM dendrogram (see below) is different to the AAM because it shows the percentage of participants who agree with parts of the grouping - it squeezes the data from smaller sample sizes and makes assumptions about larger clusters based on patterns in the relationships between individual pairs. The AAM works best with larger sample sizes because it has more data to work with and doesn’t make assumptions, while the BMM is more forgiving and seeks to fill in the gaps.

The below screenshot was taken from an example study that had 7 completed card sorts; its BMM dendrogram shows that 50% of participants agreed that the cards highlighted in green down the left-hand side belong to ‘Accessories, Bottoms, Tops’.

A screenshot of the Best Merge Method (BMM) dendrogram in OptimalSort.
BMM Dendrogram in OptimalSort.
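
If you’re curious how a dendrogram can be derived from pairing data in general, both methods belong to the family of hierarchical (agglomerative) clustering: convert card-pair similarity into distance, then repeatedly merge the closest clusters. Here’s a generic sketch using SciPy’s average-linkage clustering with made-up pairing proportions (illustrative only; OptimalSort’s AAM and BMM algorithms differ in how they score merges):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

labels = ["Ankle Boots", "Slippers", "Socks", "Stockings & Hold Ups"]
# Made-up pairing proportions between the four cards
S = np.array([
    [1.00, 0.91, 0.25, 0.15],
    [0.91, 1.00, 0.30, 0.18],
    [0.25, 0.30, 1.00, 0.80],
    [0.15, 0.18, 0.80, 1.00],
])

# Turn similarity into distance, then cluster with average linkage
condensed = squareform(1.0 - S, checks=False)
tree = linkage(condensed, method="average")
dendrogram(tree, labels=labels)
plt.show()
```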

Drill down and cross-reference

Once you’ve gained a high-level impression of the results, it’s time to dig deeper and unearth some solid insights that you can share with your stakeholders and use to back up your design decisions. Explore your open and hybrid card sort data in more detail by taking a closer look at the Categories tab. Open up each category and cross-reference to see if people were thinking along the same lines. Multiple participants may have created the same category label, but what lies beneath could be a very different story. It’s important to be thorough here, because the next step is to start standardizing, or chunking together, individual participant categories to help you make sense of your results.

In open and hybrid sorts, participants label their categories themselves. This means that you may identify a few categories with very similar labels, or perhaps spelling errors or different formats. You can standardize your categories by merging similar categories together to turn them into one. OptimalSort makes this really easy to do - you pretty much just tick the boxes alongside each category name and then hit the ‘Standardize’ button up the top (see below). Don’t worry if you make a mistake or want to include or exclude groupings; you can unstandardize any of your categories at any time.

A screenshot of the categories tab in OptimalSort, showing how categorization works.
Standardizing categories in OptimalSort.

Once you’ve standardized a few categories, you’ll notice that the Agreement number may change. It tells you how many participants agreed with that grouping: an agreement number of 1.0 is equal to 100%, meaning everyone agrees with everything in your newly standardized category, while 0.6 means that 60% of your participants agree.

Another number to watch here is the number of participants who sorted a particular card into a category, which appears in dark blue in the frequency column on the right-hand side of the middle section of the below image.

A screenshot of the categories tab after the creation of two groupings.
Categories table after groupings called ‘Accessories’ and ‘Bags’ have been standardized.

A screenshot of the Categories tab showing some of the groupings under 'Accessories'.
A closer look at the standardized category for ‘Accessories’.

From the above screenshot we can see that in this study, 18 of the 26 participant categories selected agree that ‘Cat Eye Sunglasses’ belongs under ‘Accessories’.

Once you’ve standardized a few more categories, you can head over to the Standardization Grid tab to review your data in more detail. In the below image we can see that 18 participants in this study felt that ‘Backpacks’ belong in a category named ‘Bags’, while 5 grouped them under ‘Accessories’. It’s probably safe to say the backpacks should join the other bags in this case.

A screenshot of the Standardization grid tab in OptimalSort.
Standardization Grid in OptimalSort.
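
Behind the scenes, the Standardization Grid is essentially a frequency table: one row per card, one column per standardized category, and a count of participants in each cell. If you ever need to build one outside the tool, here’s a minimal sketch that reuses the `sorts` structure from the earlier examples (the `standardize` mapping is a hypothetical stand-in for however you merge participant labels):

```python
from collections import defaultdict

def standardize(label):
    # Hypothetical mapping of raw participant labels to standardized names
    mapping = {"shoes": "Footwear", "footwear": "Footwear",
               "hosiery": "Legwear", "legwear": "Legwear"}
    return mapping.get(label.strip().lower(), label.strip().title())

grid = defaultdict(lambda: defaultdict(int))  # grid[card][category] -> count
for sort in sorts:
    for label, cards in sort.items():
        for card in cards:
            grid[card][standardize(label)] += 1

for card, row in sorted(grid.items()):
    print(card, dict(row))
```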

So that’s a quick overview of how to interpret the results from your open or hybrid card sorts. Here’s a link to Part 2 of this series, where we talk about interpreting results from closed card sorts, as well as next steps for applying these juicy insights to your IA design process.


A quick analysis of feedback collected with OptimalSort

Card sorting is an invaluable tool for understanding how people organize information in their minds, making websites more intuitive and content easier to navigate. It’s a useful method outside of information architecture and UX research, too: it can serve as a prioritization technique, or be used in a more traditional research sense. For example, it’s handy in psychology, sociology or anthropology to inform research and deepen our understanding of how people conceptualize information.

The introduction of remote card sorting has provided many advantages, making it easier than ever to conduct your own research. Tools such as our very own OptimalSort allow you to quickly and easily gather findings from a large number of participants from all around the world. Not having to organize moderated, face-to-face sessions gives researchers more time to focus on their work, and easier access to larger data sets.

One of the main disadvantages of remote card sorting is that it eliminates the opportunity to dive deeper into the choices made by your participants. Human conversation is a great thing, and when conducting a remote card sort with users who could potentially be on the other side of the world, opportunities for our participants to provide direct feedback and voice their opinions are severely limited.

Your survey design may not be perfect. The labels you provide your participants may be incorrect, confusing or redundant. Your users may have their own ideas of how you could improve your products or services beyond what you are trying to capture in your card sort. People may be more willing to provide their feedback than you realize, and limiting their insights to a simple card sort may not capture all that they have to offer. So, how can you run an unmoderated, remote card sort while doing your best to mitigate this potential loss of insight?

A quick look into the data

In an effort to evaluate the usefulness of the existing “Leave a comment” feature in OptimalSort, I recently asked our development team to pull out some data. You might be asking, “There’s a comment box in OptimalSort?” If you’ve never noticed this feature, I can’t exactly blame you. It’s relatively hidden away as an unassuming hyperlink in the top right corner of your card sort.

A screenshot of the “Leave a comment” link in the top right corner of a card sort in OptimalSort.

A second screenshot of the comment box in OptimalSort.

Comments left by your participants can be viewed in the “Participants” tab in your results section, and are indicated by a grey speech bubble.

A screenshot of the grey speech bubble that indicates a participant comment in the Participants tab.

The history of the button is unknown even to long-time Optimal Workshop team members. The purpose of the button is also unspecified. “Why would anyone leave a comment while participating in a card sort?”, I found myself wondering.

As it turns out, 133,303 comments have been left by participants. This means 133,303 insights, opinions, critiques or frustrations. Additionally, these numbers only represent the participants who noticed the feature in the first place. Considering the current button can easily be missed when focusing on the task at hand, I can’t help but wonder how this number might change if we drew more attention to the feature.

Breaking down the comments

To avoid having to manually analyze and code 133,303 open text fields, I decided to only spend enough time to decipher any obvious patterns. Luckily for me, this didn’t take very long. After looking at only a hundred or so random entries, four distinct types of comments started to emerge.
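
If you want to take the same approach with your own study, exporting the comments and reading a random sample is a quick way in. Here’s a minimal sketch (the `comments.csv` filename and `comment` column are assumptions about your own export, not a fixed OptimalSort format):

```python
import csv
import random

# Load one participant comment per row from your own export (assumed layout)
with open("comments.csv", newline="", encoding="utf-8") as f:
    comments = [row["comment"] for row in csv.DictReader(f) if row["comment"].strip()]

random.seed(42)  # make the sample reproducible
for comment in random.sample(comments, k=min(100, len(comments))):
    print("-", comment)  # read and hand-code until patterns stabilize
```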

  1. This card/group doesn’t make sense. Comments related to cards and groups dominate. This is a great thing, as it means that the majority of comments made by participants relate specifically to the task they are completing. For closed and hybrid sorts, comments frequently relate to the predefined categories available, and since the participants most likely to leave a comment are those experiencing issues, the majority of the feedback relates to issues with category names themselves. Many comments are related to card labels and offer suggestions for improving naming conventions, while many others draw attention to some terms being confusing, unclear or jargony. Comments on task length can also be found, along with reasons for why certain cards may be left ungrouped, e.g., “I’ve left behind items I think the site could do without”.
  2. Your organization is awesome for doing this/you’re doing it all wrong. A substantial number of participants used the comment box as an opportunity to voice their general feedback on the organization or company running the study. Some of the more positive comments include an appreciation for seeing private companies or public sector organizations conducting research with real users in an effort to improve their services. It’s also nice to see many comments related to general enjoyment in completing the task. On the other hand, some participants used the comment box as an opportunity to comment on what other areas of their services should be improved, or what features they would like to see implemented that may otherwise be missed in a card sort, e.g., “Increased, accurate search functionality is imperative in a new system”.
  3. This isn’t working for me. Taking a closer look at some of the comments reveals some useful feedback for us at Optimal Workshop, too. Some of the comments relate specifically to UI and usability issues. The majority of these issues are things we are already working to improve or have dealt with. However, for researchers, comments that relate to challenges in using the tool or completing the survey itself may help explain some instances of data variability.
  4. #YOLO, hello, ;) And of course, the unrelated. As you may expect, when you provide people with the opportunity to leave a comment online, you can expect just about anything in return.

How to make the most of your user insights in OptimalSort

If you’re running a card sort, chances are you already place a lot of value in the voice of your users. To capture any additional insights, it’s best to make sure your participants are aware of the opportunity to share them. Here are two ways to give your participants a space to voice their feedback:

Adding more context to the “Leave a comment” feature

One way to encourage your participants to leave comments is to promote the use of this feature in your card sort instructions. OptimalSort gives you the flexibility to customize your instructions every time you run a survey. By making your participants aware of the feature, or offering ideas around what kinds of comments you’re looking for, you not only make them more likely to use the feature, but also open yourself up to a whole range of additional feedback. An advantage of using this feature is that comments can be added in real time during a card sort, so any remarks can be made as soon as they arise.

Making use of post-survey questions

Adding targeted post-survey questions is the best way to ensure your participants are able to voice any thoughts or concerns that emerged during the activity. Here, you can ask specific questions that touch upon different aspects of your card sort, such as length, labels, categories or any other comments your participants may have. This can not only help you generate useful insights but also inform the design of your surveys in the future.

Make your remote card sorts more human

Card sorts are exploratory by nature. Avoid forcing your participants into choices that may not accurately reflect their thinking by giving them the space to voice their opinions. Providing opportunities to capture feedback opens up the conversation between you and your users, and can lead to surprising insights from unexpected places.


67 ways to use Optimal for user research

User research and design can be tough in this fast-moving world. Sometimes we can get so wrapped up in what we’re doing, or what we think we’re supposed to be doing, that we don’t take the time to look for other options and other ways to use the tools we already know and love. I’ve compiled this list over the last few days (my brain hurts) by talking to a few customers and a few people around the office. I’m sure it’s far from comprehensive. I’ve focused on quick wins and unique examples. I’ll start off with some obvious ones, and we’ll get a little more abstract, or niche, as we go. I hope you get some ideas flying as you read through. Enjoy!

#1 Benchmark your information architecture (IA)

Without a baseline for your information architecture, you can’t easily tell if any changes you make have a positive effect. If you haven’t done so, benchmark your existing website with Tree testing now. Upload your site structure and get results the same day. Now you’ll have IA scores to beat each month. Easy.

#2 Find out precisely where people get lost

Use the pietree visualization in Tree testing to find out exactly where people are getting lost in your website structure and where they go instead. You can also use First-click testing for this if you’re only interested in the first click, and let’s face it, that is where you’ll get the biggest bang for your buck.

#3 Start at the start

If you’re just not sure where to begin, take a screenshot of your homepage, or any page that you think might have some issues, and get going with First-click testing. Write up a string of things that people might want to do when they find themselves on this page and use these as your tasks. Surprise all your colleagues with a maddening heatmap showing where people actually clicked in response to your tasks. Now you’ll have a better idea of which area of your site to focus a tree test or card sort on for your next step.

#4 A/B test your site structure

Tree testing is great for testing more than one content structure. It’s easy to run two separate Tree testing studies, even more than two. It’ll help you decide which structure you and your team should run with, and it won’t take you long to set them up. Learn more.

#5 Make collaborative design decisions

Use OptimalSort to get your team involved and let their feedback feed your designs: logos, icons, banners, images, the list goes on. By creating a closed image sort with categories where your team can group designs based on their preferences, you can get some quick feedback to help you figure out where you should focus your efforts.

#6 Do your (market) research

Card sorting is a great UX research technique, but it can also be a fun way to involve your users in some market research. Get a better sense of what your users and customers actually want to see on your website by conducting an image sort of potential products. By providing categories like ‘I would buy this’ and ‘I wouldn’t buy this’ to indicate their preferences for each item, you can figure out what types of products appeal to your customers.

#7 Customer satisfaction surveys with Surveys

The thoughts and feelings of your users are always important. A simple survey can help you take a deeper look at your checkout process, a recently launched product or service, or even the packaging your product arrives in. Your options are endless.


#8 Crowdsource content ideas

Whether you’re running a blog or a UX conference, Questions can help you generate content ideas and understand any knowledge gaps that might be out there. Figure out what your users and attendees like to read on your blog, or what they want to hear about at your event, and let this feed into what you offer.

#9 Do some sociological research

Using card sorting for sociological research is a great way to deepen your understanding of how different groups may categorize information. Rather than focusing solely on how your users interact with your product or service, consider broadening your research horizons to understand your audience’s mental models. For example, by looking at how young people group popular social media platforms, you can understand the relationships between them, and identify where your product may fit in the mix.

#10 Create tests to fit in your onboarding process

Onboarding new customers is crucial to keeping them engaged with your product, especially if it involves your users learning how to use it. You can set up a quick study to help your users stay on track with onboarding. For example, say your company provided online email marketing software. You can set up a First-click testing study using a photo of your app, with a task asking your participants where they’d click to see the open rates for a particular email that went out.


#11 Quantify the return on investment of UX

Some people, including UX Agony Aunt, define return on UX as time saved, money made, and people engaged. By attaching a value to the time spent completing tasks, or to successful completion of tasks, you can approximate an ROI or at least illustrate the difference between two options.
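
As a back-of-the-envelope example of the ‘time saved’ framing, here’s a tiny sketch (every number below is a hypothetical illustration, not a benchmark):

```python
# All inputs are hypothetical illustrations
seconds_saved_per_task = 12    # e.g. measured difference between two designs
tasks_per_year = 50_000        # how often the task is performed
hourly_cost = 45.00            # fully loaded cost of a user's hour, in dollars

hours_saved = seconds_saved_per_task * tasks_per_year / 3600
annual_value = hours_saved * hourly_cost
print(f"{hours_saved:,.0f} hours saved, worth about ${annual_value:,.0f} per year")
```

Even a rough figure like this can make the difference between two design options concrete for stakeholders.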


#12 Collate all your user testing notes using Qualitative Insights

Making sense of your notes from qualitative research activities can be simultaneously exciting and overwhelming. It’s fun being out in the field and jotting down observations on a notepad, or sitting in on user interviews and documenting observations in a spreadsheet. You can now easily import all your user research and give it some traceability.


#13 Establish which tags or filters people consider to be the most important

Create a card sort with your search filters or tags as labels, and have participants rank them according to how important they consider them to be. Analytics can tell you half of the story (where people actually click), while the card sort gives you the other side: a better idea of what people actually think or want.

#14 Reduce content on landing pages to what people access regularly

Before you run an open card sort to generate new category ideas, you can run a closed card sort to find out if you have any redundant content. Say you wanted to simplify the homepage of your intranet. You can ask participants to sort cards (containing homepage links) based on how often they use them. You could compare this card sort data with analytics from your intranet and see if people’s actual behavior and perception are well aligned.

#15 Crowd-source the values you want your team/brand/product to represent

Card sorting is a well-established technique in the ‘company values’ realm, and there are some great resources online to help you and your team brainstorm the values you represent. These ‘in-person’ brainstorm sessions are great, and you can run a remote closed card sort to support your findings. And if you want feedback from more than a small group of people (if your company has, say, more than 15 staff) you can run a remote closed card sort on its own. Use Microsoft’s Reaction Card Method as card inspiration.

#16 Input your learnings and observations from a UX conference with qualitative insights

If you're lucky enough to attend a UX conference, you can now share the experience with your colleagues. You can easily jot down ideas, quotes and key takeaways in a Reframer project and keep your notes organized by using a new session for each presenter. Bonus: if you’re part of a team, they can watch the live feed rolling into Reframer!


#17 Find out what actions people take across time

Use card sorting to understand when your participants are most likely to perform certain activities over the course of a day, week, or over the space of a year. Create categories that represent time, for example, ‘January to March’, ‘April to June’, ‘July to September’, and ‘October to December’, and ask your participants to sort activities according to the time they are most likely to do them (go on vacation, do their taxes, make big purchases, and so on). While there may be more arduous and more accurate methods for gathering this data, sometimes you need quick insights to help you make the right decisions.


#18 Gather quantitative data on prioritizing project tasks or product features

Closed card sorting can give you data that you might usually gather in team meetings or in Post-its on the wall, or that you might get through support channels. You can model your method on other prioritization techniques, including Eisenhower’s Decision Matrix, for example.

#19 Test your FAQs page with new users

Your support and knowledge base within your website can be just as important as any other core action on your website. If your support site is lacking in navigation and UX, this will no doubt increase support tickets and the resources needed to handle them. Make sure your online support section is up to scratch. Here’s an article on how to do it quickly.

#20 Figure out if your icons need labels

Figure out if your icons are doing their job by testing whether your users understand them as intended. Upload icons you currently use, or plan to use in your interface, to First-click testing, and ask your users to identify their meaning by making use of post-task questions.

#21 Give your users some handy quick tools

In some cases, users may use your website with very specific goals in mind. Giving your users access to quick tools as soon as they land on your website is a great way to ensure they are able to get what they need done easily. Look at your analytics for things people do often that take several clicks to find, and check whether they can find your ‘quick tool’ in a single click using First-click testing.

#22 Benchmark the IA of your competition

We all have some sort of competitors, and researchers also need to pay attention to what they get up to. Make life easy in your reporting by benchmarking their IA and then reviewing it each quarter for the board and leaders to be wowed by. It’s not a perfect comparison, as users and separate sites have different flows, but you can also compare your success scores with theirs. It makes your work feel like the Olympics, with some healthy competition going on.

#23 Improve website conversions

Make the marketing team’s day by making a fast improvement to some core conversions on your website. There are loads of ways to improve conversions for a checkout cart or signup form, but using First-click testing to test out ideas before you go live with an A/B test takes mere minutes and gives your B version a confidence boost.

#24 Reduce the bounce rates of certain sections of your website

People jumping off your website and not continuing their experience is something (depending on the landing page) everyone tries to improve. Metrics like ‘time on site’ and ‘average page views’ show the value your whole website has to offer. Again, there are many different ways to do this, but one big reason people jump off a website is not being able to find what they’re looking for. That’s where our IA toolkit comes in.

#25 Test your website’s IA in different countries

No, you don’t have to spend thousands of dollars to go to all these countries to test, although that’d be pretty sweet. You can remotely research participants from all over the world, using our integrated recruitment panel. Start seeing how different cultures, languages, and countries interact with your website.

#26 Run an empathy test (card sort)

Empathy – the ability to understand and share the experience of another person – is central to the design process. An empathy test is another great tool to use in the design phase because it enables you to find out if you are creating the right kind of feelings with your users. Take your design and show it to users. Provide them with a variety of words that could represent the design – for example “minimalistic”, “dynamic”, or “professional” – and ask them to pick out the words they think are best suited to their experience.

#27 Test visual hierarchy with first-click testing

Use first-click testing to understand which elements draw users' attention first on your page. Upload your design and ask participants to click on the most important element, or what catches their eye first. The resulting heatmap will show you if your visual hierarchy is working as intended - are users clicking where you expect them to? This technique helps validate design decisions about sizing, color, positioning, and contrast without needing to build the actual page.

#28 Take Qualitative Insights into the field

Get out of the office or the lab and observe social behaviour in the field. Use Qualitative Insights to capture your observations from your field research. Then head back to your office to start making sense of the data in the Theme Builder.

#29 Use heatmaps to get the first impressions of designs

Heatmaps in our First-click testing tool are a great way of getting first impressions of any design. You can see where people clicked (correctly and incorrectly), giving you insights on what works and doesn’t work with your designs. Because it’s so fast to test, you can iterate until your designs start singing.

#30 Multivariate testing

Multivariate testing compares more than two versions of your studies, allowing you to understand which version performs better with your audience. Use multivariate testing with Tree testing and First-click testing to find the right design on which to focus and iterate.

#31 Improve your search engine optimization (SEO) with tree testing

Yes, a good IA improves your SEO. Search engines want to know how your users navigate throughout your site. Make sure people can easily find what they’re looking for, and you’ll start to see improvement in your search engine ranking.

#32 Test your mobile information architecture

As more and more people are using their smartphones for apps and to browse sites, you need to ensure its design gives your users a great experience. Test the IA of your mobile site to ensure people aren’t getting lost in the mobile version of your site. If you haven’t got a mobile-friendly design yet, now’s the time to start designing it!

#33 Run an Easter egg hunt using the correct areas in first-click testing

Liven up the workday by creating a fun Easter egg hunt in first-click testing. Simply upload a photo (like those really hard “spot the X” photos), set the correct area of your target, then send out your study with participant identifiers enabled. You can also send these out as competitions and have closing rules based on time, number of participants, or both.

#34 Keystroke level modeling

When interface efficiency is important, you’ll want to measure how much a new design can improve task times. You can actually estimate time saved (or lost) using some well-tested approaches that are based on average human performance for typical computer-based operations like clicking, pointing and typing. Read more about measuring task times without users.
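
The best known of these approaches is the Keystroke-Level Model (KLM) from Card, Moran and Newell, which assigns an average duration to each low-level operator and sums them over a task. A minimal sketch using the commonly cited operator estimates (the two example flows are hypothetical):

```python
# Commonly cited KLM operator times, in seconds (Card, Moran & Newell)
OPERATORS = {
    "K": 0.20,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(sequence):
    """Sum operator times for a sequence like 'MPBB' (think, point, click)."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical comparison: typing a field manually vs. an autofill design
old_flow = klm_estimate("MPBBH" + "K" * 8)  # think, point, click, home, type
new_flow = klm_estimate("MPBB")             # think, point, click
print(f"old {old_flow:.2f}s vs new {new_flow:.2f}s, saving {old_flow - new_flow:.2f}s per task")
```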

#35 Prioritize features and get some help with your roadmap

Find out what people think are the most important next steps for your team. Set up a card sort and ask people to categorize items and rank them in descending order of importance or impact on their work. This can also help you gauge their thoughts on potential new features for your site, and for bonus points compare team responses with customer responses.

#36 Tame your blog

Get the tags and categories in your blog under control to make life easier for your readers. Set up a card sort and use all your tags and categories as card labels. Either use your existing ones or test a fresh set of new tags and categories.

#37 Test your home button

Would an icon or text link work better for navigating to your home page? Before you go ahead and make changes to your site, you can find out by setting up a first-click test.

#38 Validate the designs in your head

As a designer, you’ve probably got umpteen designs floating around in your head at any one time. But which of these are really worth pursuing? Figure this out by using The Optimal Workshop Suite to test out wireframes of new designs before putting any more work into them.

#39 ‘Buy now’ button shopping cart visibility

If you’re running an e-commerce site, ease of use and a great user experience are crucial. To see if your shopping cart and checkout processes are as good as they can be, run a first-click test.

#40 IA periodic health checks

Raise the visibility of good IA by running periodic IA health checks using Tree testing and reporting the results. Management loves metrics and catching any issues early is good too!

#41 Focus groups with qualitative insights

Thinking of launching a new product, app or website, or seeking opinions on an existing one? Focus groups can provide you with a lot of candid information that may help get your project off the ground. They’re also dangerous because they’re susceptible to groupthink, design by committee, and tunnel vision. Use with caution, but if you do then use with Qualitative Insights! Compare notes and find patterns across sessions. Pay attention to emotional triggers.

#42 Gather opinions with surveys

Whether you want the opinions of your users or from members of your team, you can set up a quick and simple survey using Surveys. It’s super useful for getting opinions on new ideas (consider it almost like a mini-focus group), or even for brainstorming with teammates.

#43 Design a style guide with card sorting

Style guides (for design and content) can take a lot of time and effort to create, especially when you need to get the guide proofed by various people in your company. To speed this up, simply create a card sort to find out what your guide should consist of. Find out the specifics in this article.

#44 Improve your company's CRM system

As your company grows, oftentimes your CRM can become riddled with outdated information and turn into a giant mess, especially if you deal with a lot of customers every day. To help clear this up, you can use card sorting and tree testing to solve navigational issues and get rid of redundant features. Learn more.

#45 Sort your life out

Let your creativity run wild, and get your team or family involved in organizing or prioritizing the things that matter. And the possibilities really are endless. Organize a long list of DIY projects, or ask the broader team how the functional pods should be re-organized. It’s up to you. How can card sorting help you in your work and daily life?

#46 Create an online diary study

Whether it’s a product, app or website, finding out the long-term behaviour and thoughts of your users is important. That’s where diary studies come in. For those new to this concept, diary studies are a longitudinal research method, aimed at collecting insights about a participant’s needs and behaviors. Participants note down activities as they’re using a particular product, app, or website. Add your participants into a qualitative study and allow them to create their diary study with ease.

#47 Source specific data with an online survey

Online survey tools can complement your existing research by sourcing specific information from your participants. For example, if you need to find out more about how your participants use social media, which sites they use, and on which devices, you can do it all through a simple survey questionnaire. Additionally, if you need to identify usage patterns, device preferences or get information on what other products/websites your users are aware of/are using, a questionnaire is the ticket.

#48 Guerrilla testing with First-click testing

For really quick first-click testing, take First-click testing on a tablet, mobile device or laptop to a local coffee shop. Ask people standing in line if they’d like to take part in your super quick test in exchange for a cup of joe. Easy!

#50 Ask post-task questions for tree testing and first-click testing

You can now set specific task-related questions for both Tree testing and First-click testing. This is a great way to dive deeper into the mushy minds of your participants. Check out how to use this new(ish) feature here!

#51 Start testing prototypes

Paper prototypes are great, but what happens when your users are scattered around the globe, and you can’t invite them to an in-person test? By scanning (or taking a photo of) your paper prototypes, you can use first-click testing to test them with your users quickly and easily. Read more about our approach here.

#52 Take better notes for sense making

Qualitative research involves a lot of note-taking. So naturally, to be better at this method, improving how you take notes is important. Reframer is designed to make note-taking easy but it can still be an art. Learn more.

#53 Make sure you get the user's first-click right

Like most things, read a little, and then it’s all about practice. We’ve found that people who get the first click correct are almost three times as likely to complete a task successfully. Get your first clicks right in tree testing and first-click testing and you’ll start seeing your customers smile.


#54 Run a cat survey. Yep, cats!

We’ve gained some insight into how people intuitively group cats, and so can you (unless you’re a dog person). Honestly, doing something silly can be a useful way to introduce your team to a new method on a Friday afternoon. Remember to distribute the results!


#55 Destroy evil attractors in your tree

Evil attractors are those labels in your IA that attract unjustified clicks across tasks. This usually means the chosen label is ambiguous, or possibly a catch-all phrase like ‘Resources’. Read how to quickly identify evil attractors in the Destinations table of tree test results and how to fix them.

#56 Affinity map using card sorts

We all love our Post-its and sticking things on walls. But sometimes you need something quicker and accessible for people in remote areas. Try using card sorts for a distributed approach to making sense of all the notes. Plus, you can easily import any qualitative insights when creating cards in a card sort. Easy.

#57 Preference test with first-click testing

Whether you’re coming up with a new logo design, headline, featured image, or anything, you can preference test it with First-click testing. Create an image that shows the two designs side by side and upload it to First-click testing. From there, you can ask people to click whichever one they prefer!

#58 Add moderated card sort results to your card sort

An excellent way of gathering valuable qualitative insights alongside the results of your remote card sorts is to run a moderated version of the sorts with a smaller group of participants. When you can observe and interact with your participants as they complete the sort, you’ll be able to ask questions and learn more about their mental models and the reasons why they have categorized things in a particular way. Learn more.

#59 Test search box variations with first-click testing

Case study by Viget: “One of the most heavily used features of the website is its keyword search, so we wanted to make absolutely certain that our redesigned search box didn’t make search harder for users to find and use.”

#60 Run an image card sort to organize products into groups

You can add images to each card, allowing you to understand how your participants may organize and label particular items. This is very useful if you want to organize some retail products and find out how other people would group them given visual context like shape and color.

#61 Test your customers' perceptions of different logo and brand image designs

Understand how customers perceive your brand by creating a closed card sort. Come up with a list of categories, and ask participants to sort images such as logos, and branded images.

#62 Run an open image card sort to classify images into groups based on the emotions they elicit

Are these pictures exhilarating, or terrifying? Are they humorous, or offensive? Relaxing, or boring? Productive, or frantic? Happy memories, or a deep sigh?

#63 Run an image card sort to organize your library

Whether it’s a physical library of books, or a digital drive full of ebooks, you can run a card sort to help organize them in a way that makes sense. Will it be by genre, author name, color or topic? Send out the study to your coworkers to get their input! You can also do this at home for your own personal library, and you can include music/CDs/vinyl records and movies!

#64 HR exercises to determine the motivations of your team

It’s simple to ask your team about their thoughts, feelings, and motivations with a Questions survey. You can choose to leave participant identifiers blank (so responses are anonymous), or you can ask for a name/email address. As a bonus, you can set up a calendar reminder to send out a new survey in the next quarter. Duplicate the survey and send it out again!

#65 Designing physical environments

If your company has a physical environment that your customers visit, you can research new structures using a mixture of tools in The Optimal Workshop Suite. This especially comes in handy if your customers require certain information within the physical environment in order to make decisions. For example, picture a retail store. Are all the signs clear? Do they communicate the right information? Are people overwhelmed by the physical environment?

#66 Use tree testing to refine an interactive phone menu system

Similar to how you’d design an IA, you can create a tree test to design an automated phone system. Whether you’re designing from the ground up, or improving your existing system, you will be able to find out if people are getting lost.


#67 Have your research team categorize and prioritize all these ideas

Before you dig deeper into more of these ideas, ask the rest of the team to help you decide which ones to focus on. Let’s not get in the way of your work any longer: start with your quick wins and log into your account. Here’s a spreadsheet of this list to upload to a card sort. Aaaaaaaaaaand that’s a wrap! *Takes out gym towel and wipes sweaty face.*
Got any more suggestions to add to this list? We’d love to hear them in our comments section — we might even add them to the list!


Decoding Taylor Swift: A data-driven deep dive into the Swiftie psyche 👱🏻‍♀️

Taylor Swift's music has captivated millions, but what do her fans really think about her extensive catalog? We've crunched the numbers, analyzed the data, and uncovered some fascinating insights into how Swifties perceive and categorize their favorite artist's work. Let's dive in!

The great debate: openers, encores, and everything in between ⋆.˚✮🎧✮˚.⋆

Our study asked fans to categorize Swift's songs into potential opening numbers, encores, and songs they'd rather not hear (affectionately dubbed "Nah" songs). The results? As diverse as Swift's discography itself!

Opening with a bang 💥

Swifties seem to agree that high-energy tracks make for the best concert openers, but the results are more nuanced than previously suggested. "Shake It Off" emerged as the clear favorite for opening a concert, with 17 votes. "Love Story" follows closely behind with 14 votes, showing that nostalgia indeed plays a significant role. Interestingly, both "Cruel Summer" and "Blank Space" tied for third place with 13 votes each.

This mix of songs from different eras of Swift's career suggests that fans appreciate both her newer hits and classic favorites when it comes to kicking off a show. The strong showing for "Love Story" does indeed speak to the power of nostalgia in concert experiences. It's worth noting that "...Ready for It?", while a popular song, received fewer votes (9) for the opening slot than might have been expected.

Encore extravaganza 🎤

When it comes to encores, fans seem to favor a diverse mix of Taylor Swift's discography, with a surprising tie at the top. "Slut!" (Taylor's Version), "exile", "Guilty as Sin?", and "Bad Blood (Remix)" all received the highest number of votes with 13 each. This variety showcases the breadth of Swift's career and the different aspects of her artistry that resonate with fans for a memorable show finale.

Close behind are "evermore", "Wildest Dreams", "ME!", "Love Story", and "Lavender Haze", each garnering 12 votes. It's particularly interesting to see both newer tracks and classic hits like "Love Story" maintaining strong popularity for the encore slot. This balance suggests that Swifties appreciate both nostalgia and Swift's artistic evolution when it comes to closing out a concert experience.

The "Nah" list 😒

Interestingly, some of Taylor Swift's tracks found themselves on the "Nah" list, indicating that fans might prefer not to hear them in a concert setting. "Clara Bow" tops this category with 13 votes, closely followed by "You're On Your Own, Kid", "You're Losing Me", and "Delicate", each receiving 12 votes.

This doesn't necessarily mean fans dislike these songs - they might just feel they're not well-suited for live performances or don't fit as well into a concert setlist. It's particularly surprising to see "Delicate" on this list, given its popularity. The presence of both newer tracks like "Clara Bow" and older ones like "Delicate" suggests that the "Nah" list isn't tied to a specific era of Swift's career, but rather to individual song preferences in a live concert context.

It's worth noting that even popular songs can end up on this list, highlighting the complex relationship fans have with different tracks in various contexts. This data provides an interesting insight into how Swifties perceive songs differently when considering them for a live performance versus general listening.

The Similarity Matrix: set list synergies ⚡

Our similarity matrix revealed fascinating insights into how fans envision Taylor Swift's songs fitting together in a concert set list:

1. The "Midnights" Connection: Songs from "Midnights" like "Midnight Rain", "The Black Dog", and "The Tortured Poets Department" showed high similarity in set list placement. This suggests fans see these tracks working well in similar parts of a concert, perhaps as a cohesive segment showcasing the album's distinct sound.

2. Cross-album transitions: There's an intriguing connection between "Guilty as Sin?" and "exile", with a high similarity percentage. This indicates fans see these songs from different albums as complementary in a live setting, potentially suggesting a smooth transition point in the set list that bridges different eras of Swift's career.

3. The show-stoppers: "Shake It Off" stands out as dissimilar to most other songs in terms of placement. This likely reflects its perceived role as a high-energy, statement piece that occupies a unique position in the set list, perhaps as an opener, closer, or peak moment.

4. Set list evolution: There's a noticeable pattern of higher similarity between songs from the same or adjacent eras, suggesting fans envision distinct segments for different periods of Swift's career within the concert. This could indicate a preference for a chronological journey through her discography or strategic placement of different styles throughout the show.

5. Thematic groupings: Some songs from different albums showed higher similarity, such as "Is It Over Now? (Taylor's Version)" and "You're On Your Own, Kid". This suggests fans see them working well together in the set list based on thematic or emotional connections rather than just album cohesion.

What does it all mean?! 💃🏼📊

This card sort data paints a picture of an artist who continually evolves while maintaining certain core elements that define her work. Swift's ability to create cohesive album experiences, make bold stylistic shifts, and maintain thematic threads throughout her career is reflected in how fans perceive and categorize her songs. Moreover, the diversity of opinions on song categorization - with 59 different songs suggested as potential openers - speaks to the depth and breadth of Swift's discography. It also highlights the personal nature of music appreciation; what one fan sees as the perfect opener, another might categorize as a "Nah".

In the end, this analysis gives us a fascinating glimpse into the complex web of associations in Swift's discography. It shows us not just how Swift has evolved as an artist, but how her fans have evolved with her, creating deep and sometimes unexpected connections between songs across her entire career. Whether you're a die-hard Swiftie or a casual listener, or a weirdo who just loves a good card sort, one thing is clear: Taylor Swift's music is rich, complex, and deeply meaningful to her fans. And with each new album, she continues to surprise, delight, and challenge our expectations.

Conclusion: shaking up our understanding 🥤🤔

This deep dive into the Swiftie psyche through a card sort reveals the complexity of Taylor Swift's discography and fans' relationship with it. From strategic song placement in a dream setlist to unexpected cross-era connections, we've uncovered layers of meaning that showcase Swift's artistry and her fans' engagement. The exercise demonstrates how a song can be a potential opener, mid-show energy boost, poignant closer, or a skip-worthy track, highlighting Swift's ability to create diverse, emotionally resonant music that serves various roles in the listening experience.

The analysis underscores Swift's evolving career, with distinct album clusters alongside surprising connections, painting a picture of an artist who reinvents herself while maintaining a core essence. It also demonstrates how fan-driven analyses like card sorting can be insightful and engaging, offering a unique window into music fandom and reminding us that in Swift's discography, there's always more to discover. This exercise proves valuable whether you're a die-hard Swiftie, casual listener, or someone who loves to analyze pop culture phenomena.
