May 26, 2016

Card descriptions: Testing the effect of contextual information in card sorts

The key purpose of running a card sort is to learn something new about how people conceptualize and organize the information found on your website. The insights you gain from running a card sort can then help you develop a site structure with content labels or headings that best represent the way your users think about this information. Card sorts are, in essence, a simple technique; however, the details of the sort can determine the quality of your results.

Adding context to cards in OptimalSort – descriptions, links and images

In most cases, each item in a card sort has only a short label, but there are instances where you may wish to add context to the items in your sort. Currently, the cards tab in OptimalSort allows you to include a tooltip description, add a link within that description, or format the card as an image (with or without a label).

Adding descriptions and images to cards in OptimalSort

We generally don’t recommend using tooltip descriptions and links, unless you have a specific reason to do so. It’s likely that they’ll provide your participants with more information than they would normally have when navigating your website, which may in turn influence your results by leading participants to a particular solution.

Legitimate reasons for using descriptions and links include situations where it’s not possible or practical to translate complex or technical labels (for example, medical, financial, legal or scientific terms) into plain language, or where you’re using a card sort to understand your participants’ preferences or priorities.

If you do decide to include descriptions in your sort, it’s important to follow the same guidelines you would otherwise follow for writing card labels. Descriptions should be easy for your participants to understand, and you should avoid obvious patterns: for example, repeating words and phrases, or including details that refer to the current structure of the website.

A quick survey of how card descriptions are used in OptimalSort

I was curious to find out how often people were including descriptions in their card sorts, so I asked our development team to look into this data. It turns out that around 15% of cards created in OptimalSort have at least some text entered in the description field. In order to dig into the data a bit further, both Ania and I reviewed a random sample of recent sorts and noted how descriptions were being used in each case.

We found that, of the descriptions we reviewed, 40% (6% of the total cards) had text that should not have impacted the sort results. Most often, these cards simply had the card label repeated in the description (to be honest, we’re not entirely sure why so many descriptions are being used this way! But it’s now on our roadmap to stop this from happening — stay tuned!). Approximately 20% (3% of the total cards) used descriptions to add context without obviously leading participants; however, the remaining 40% had descriptions that may well lead to biased results. On occasion, this included linking to the current content or using what we assumed to be the current top-level heading within the description.

Use of card descriptions

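As a side note, the label-repeated-in-description pattern is easy to catch before you launch a study. Below is a minimal sketch of such a check; the card data, function names and classification buckets are all hypothetical, and this is not how OptimalSort itself works:

```python
def normalize(text):
    """Lowercase and collapse whitespace so trivial variations still match."""
    return " ".join(text.lower().split())

def classify_description(label, description):
    """Roughly bucket how a card description is being used."""
    if not description.strip():
        return "empty"
    if normalize(description) == normalize(label):
        return "repeats label"   # adds nothing for participants
    return "adds context"        # still needs a human check for leading wording

# Hypothetical cards, for illustration only
cards = [
    {"label": "Visas & permits", "description": "Visas & permits"},
    {"label": "High quality", "description": "Why study in New Zealand"},
]

for card in cards:
    print(f"{card['label']}: {classify_description(card['label'], card['description'])}")
```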

Testing the effect of card descriptions on sort results

So, how much influence could potentially leading card descriptions have on the results of a card sort? I decided to put it to the test by running a series of card sorts to compare the effect of different descriptions. As I also wanted to test the effect of linking card descriptions to existing content, I had to base the sort on a live website. In addition, I wanted to make sure that the card labels and descriptions were easily comprehensible by a general audience, but not so familiar that participants were highly likely to sort the cards in a similar manner.

I selected the government immigration website New Zealand Now as my test case. This site, which provides information for prospective and new immigrants to New Zealand, fit the above criteria and was likely unfamiliar to potential participants.


Navigating the New Zealand Now website

When I reviewed the New Zealand Now site, I found that the top-level navigation labels were clear and easy for me personally to understand. Of course, this is especially important when much of your target audience is likely to be non-native English speakers! On the whole, the second-level headings were also well labeled, which meant they should translate into cards that participants could group relatively easily.

There were, however, a few headings such as “High quality” and “Life experiences”, both found under “Study in New Zealand”, which become less clear when removed from the context of their current location in the site structure. These headings would be particularly useful to include in the test sorts, as I predicted that participants would be more likely to rely on card descriptions in the cases where the card label was ambiguous.


I selected 30 headings to use as card labels from under the sections “Choose New Zealand”, “Move to New Zealand”, “Live in New Zealand”, “Work in New Zealand” and “Study in New Zealand” and tweaked the language slightly, so that the labels were more generic.

The 30 card labels used in the test sorts

I then created four separate sorts in OptimalSort:

Round 1: No description. Each card showed a heading only — this functioned as the control sort.

Example card: label only

Round 2: Site section in description. Each card showed a heading, with the current site section in the description.

Example card: site section in description

Round 3: Short description. Each card showed a heading with a short description — these were taken from the New Zealand Now topic landing pages.

Example card: short description

Round 4: Link in description. Each card showed a heading with a link to the current content page on the New Zealand Now website.

Example card: link in description

For each sort, I recruited 30 participants. Each participant could only take part in one of the sorts.

What the results showed

An interesting initial finding was that when we queried participants following the sort, only around 40% said they had noticed the tooltip descriptions, and even fewer stated that they had used them as an aid to help complete the sort.

Participant recognition of descriptions


Of course, what people say they do does not always reflect what they do in practice! To measure the effect that different descriptions had on the results of this sort, I compared how frequently cards were sorted with other cards from their respective site sections across the different rounds.

Let’s take a look at the “Study in New Zealand” section mentioned above. Out of the five cards in this section, “Where & what to study”, “Everyday student life” and “After you graduate” were sorted pretty consistently, regardless of whether a description was provided or not. The following charts show the average frequency with which each card was sorted with other cards from this section. For example, in the control round, “Where & what to study” was sorted with “After you graduate” 76% of the time and with “Everyday student life” 70% of the time, but was sorted with “Life experiences” or “High quality” each only 10% of the time. This meant that the average sort frequency for this card was 42%.

Average sort frequencies for cards in the “Study in New Zealand” section
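For readers who want to reproduce this metric on their own data, here is a rough sketch of how pairwise and average sort frequencies can be computed from raw sort results. The data structure and numbers are made up for illustration, and this is not an OptimalSort export format:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort is a list of groups; each group is a set of labels.
# Hypothetical data for two participants:
sorts = [
    [{"Where & what to study", "After you graduate"}, {"Everyday student life"}],
    [{"Where & what to study", "Everyday student life", "After you graduate"}],
]

# Count how many participants placed each pair of cards in the same group
pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

def sort_frequency(card_a, card_b):
    """Share of participants who placed both cards in the same group."""
    return pair_counts[tuple(sorted((card_a, card_b)))] / len(sorts)

section = ["Where & what to study", "Everyday student life",
           "After you graduate", "Life experiences", "High quality"]
card = "Where & what to study"
others = [c for c in section if c != card]

# Average sort frequency: mean co-occurrence with the card's section-mates
average = sum(sort_frequency(card, other) for other in others) / len(others)
print(f"Average sort frequency for {card!r}: {average:.0%}")
```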

On the other hand, the cards “High quality” and “Life experiences” were sorted much less frequently with other cards in this section, with the exception of the second sort, which included the site section in the description.

These results suggest that including the existing site section in the card description did influence how participants sorted these cards — confirming our prediction! Interestingly, this round had the fewest participants who stated that they used the descriptions to help them complete the sort (only 10%, compared to 40% in round 3 and 20% in round 4).

Also of note is that adding a link to the existing content did not seem to increase the likelihood that cards were sorted more frequently with other cards from the same section. Reasons for this could include that participants did not want to navigate to another website (due to time-consciousness in completing the task, or concern that they’d lose their place in the sort), or simply that it can be difficult to open a link from the tooltip pop-up.

What we can take away from these results

This quick investigation into the impact of descriptions illustrates some of the intricacies of using additional context in your card sorts, and why this should always be done with careful consideration. It’s interesting that we correctly predicted some of these results, but that in this case, other uses of the description had little effect at all. And the results serve as a good reminder that participants can often be influenced by factors that they don’t even recognise themselves!

If you do decide to use card descriptions in your card sorts, here are some guidelines that we recommend you follow:

  • Avoid repeating words and phrases; participants may sort cards by pattern-matching rather than based on the actual content
  • Avoid alluding to a predetermined structure, such as including references to the current site structure
  • If it’s important that participants use the descriptions to complete the sort, mention this in your task instructions. It may also be worth asking a post-sort survey question to confirm whether or not they used them

We’d love to hear your thoughts on how we tested the effects of card descriptions and the results that we got. Would you have done anything differently? Have you ever completed a card sort only to realize later that you’d inadvertently biased your results? Or have you used descriptions in your card sorts to meet a genuine need? Do you think there’s a case to make descriptions more obvious than just a tooltip, so that when they are used legitimately, most participants don’t miss this information?

Let us know by leaving a comment!


Related articles


How to interpret your card sort results Part 2: closed card sorts and next steps

In Part 1 of this series we looked at how to interpret results from open and hybrid card sorts, and now in Part 2, we’re going to talk about closed card sorts. In closed card sorts, participants are asked to sort the cards into predetermined categories and are not allowed to create any of their own. You might use this approach when you are constrained by specific category names, or as a quick checkup before launching a new or newly redesigned website.

In Part 1, we also discussed the two different - but complementary - types of analysis that are generally used together for interpreting card sort results: exploratory and statistical. Exploratory analysis is intuitive and creative, while statistical analysis is all about the numbers. Check out Part 1 for a refresher, or learn more about exploratory and statistical analysis in Donna Spencer’s book.

Getting started

Closed card sort analysis is generally much quicker and easier than for open and hybrid card sorts, because there are no participant-created category names to analyze - it’s really just about where the cards were placed. There are some similarities in how you might start your analysis, but overall there’s a lot less information to take in, and there isn’t much in the way of drilling down into the details like we did in Part 1.

Just like with an open card sort, kick off your analysis by taking an overall look at the results as a whole. Quickly cast your eye over each individual card sort and just take it all in. Look for common patterns in how the cards have been sorted. Does anything jump out as surprising? Are there similarities or differences between participant sorts?

If you’re redesigning an existing information architecture (IA), how do your results compare to the current state? If this is a final check-up before launching a live website, how do these results compare to what you learned during your previous research studies?

If you ran your card sort using information architecture tool OptimalSort, head straight to the Overview and Participants Table presented in the results section of the tool. If you ran a moderated card sort using OptimalSort’s printed cards, you’ve probably been scanning them in after each completed session, but now is a good time to double-check you got them all. And if you didn’t know about this handy feature of OptimalSort, it’s something to keep in mind for next time!

The Participants Table shows a breakdown of your card sorting data by individual participant. Start by reviewing each card sort one by one, clicking the arrow in the far left column next to the participant numbers. From here you can easily flick back and forth between participants without needing to close the modal window. Don’t spend too much time on this — you’re just trying to get a general impression of how the cards were sorted into your predetermined categories. Keep an eye out for any card sorts that you might like to exclude from the results — for example, participants who have lumped everything into one group and haven’t actually sorted the cards.

Don’t worry: excluding or including participants isn’t permanent and can be toggled on or off at any time.

Once you’re happy with the individual card sorts that will and won’t be included in your results visualizations, it’s time to take a look at the Results Matrix in OptimalSort. The Results Matrix shows the number of times each card was sorted into each of your predetermined categories: the higher the number, the darker the shade of blue (see below).

A screenshot of the Results Matrix tab in OptimalSort.
Results Matrix in OptimalSort.
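If you prefer to work from a raw export, a matrix like this takes only a few lines to tabulate yourself. The sketch below assumes a hypothetical long-format export with one row per (participant, card, category) placement; it is not OptimalSort’s actual export format:

```python
import pandas as pd

# Hypothetical long-format export: one row per placement
placements = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "card": ["The Interpretation of Dreams", "World Book Encyclopedia"] * 3,
    "category": ["Philosophy & psychology", "History & geography",
                 "Philosophy & psychology", "Social sciences",
                 "Philosophy & psychology", "History & geography"],
})

# Rows are cards, columns are the predetermined categories; each cell counts
# how many participants sorted that card into that category.
matrix = pd.crosstab(placements["card"], placements["category"])
print(matrix)
```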

This table enables you to quickly and easily see how the cards were sorted, and to gauge the highest and lowest levels of agreement among your participants. This will tell you if you’re on the right track, or highlight opportunities for further refinement of your categories.

If we take a closer look (see below), we can see that in this example closed card sort, conducted on the Dewey Decimal Classification system commonly used in libraries, The Interpretation of Dreams by Sigmund Freud was sorted into ‘Philosophy and psychology’ 38 times in a study completed by 51 participants.

A screenshot of the Results Matrix in OptimalSort zoomed in.
Results Matrix in OptimalSort zoomed in with hover.

In the real world, that is exactly where that content lives, and this is useful to know because it shows that the current state is supporting user expectations around findability reasonably well. Note: this particular example study used image-based cards instead of word-label cards, so the description that appears in both the grey box and down the left-hand side of the matrix is for reference purposes only and was hidden from the participants.

Sometimes you may come across cards that are popular in multiple categories. In our example study, How to win friends and influence people by Dale Carnegie is popular in two categories, ‘Philosophy & psychology’ and ‘Social sciences’, with 22 and 21 placements respectively. The remaining card placements are scattered across a further 5 categories, although in much smaller numbers.

A screenshot of the Results Matrix in OptimalSort showing cards popular in multiple categories.
Results Matrix showing cards popular in multiple categories.

When this happens, it’s up to you to determine what your number thresholds are. If it’s a tie, or really close like it is in this case, you might review the results against any previous research studies to see if anything has changed or if this is something that comes up often. It might be a new category that you’ve just introduced, it might be an issue that hasn’t been resolved yet, or it might just be limited to this one study. If you’re really not sure, it’s a good idea to run some in-person card sorts as well, so you can ask questions and gain clarification around why your participants felt a card belonged in a particular category. If you’ve already done that, great! Time to review those notes and recordings!

You may also find yourself in a situation where no category is any more popular than the others for a particular card. This means there’s not much agreement among your participants about where that card actually belongs. In our example closed card sort study, the World Book Encyclopedia was placed into 9 of the 10 categories. While it was placed in ‘History & geography’ 18 times, that’s still only 35% of the total placements for that card, so it’s hardly conclusive.

A screenshot of the Results Matrix showing a card with a lack of agreement.
Results Matrix showing a card with a lack of agreement.
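Low-agreement cards like this can also be surfaced automatically by checking what share of a card’s placements its single most popular category captures. Here is a small sketch using a made-up results matrix shaped like the example above; the 50% cut-off is purely illustrative, since (as discussed) the threshold is your judgment call:

```python
import pandas as pd

# Hypothetical results matrix: placement counts for 51 participants
matrix = pd.DataFrame(
    {"Philosophy & psychology": [38, 5],
     "History & geography": [2, 18],
     "Social sciences": [6, 14],
     "Arts & recreation": [5, 14]},
    index=["The Interpretation of Dreams", "World Book Encyclopedia"],
)

# Share of each card's placements captured by its most popular category
top_share = matrix.max(axis=1) / matrix.sum(axis=1)

# Flag cards where no single category is convincingly dominant
for card, share in top_share[top_share < 0.5].items():
    print(f"{card}: top category holds only {share:.0%} of placements")
```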

Sometimes this happens when the card label or image is quite general and could logically belong in many of the categories. In this case, an encyclopedia could easily fit into any of those categories, and I suspect this happened because people may not be aware that encyclopedias make up a very large part of the category on the far left of the above matrix: ‘Computer science, information & general works’. You may also see this happen when a card is ambiguous and people have to guess where it might belong. Again, if you haven’t already: if in doubt, run some in-person card sorts so you can ask questions and get to the bottom of it!

After reviewing the Results Matrix in OptimalSort, visit the Popular Placements Matrix to see which cards were most popular for each of your categories, based on how your participants sorted them (see the two images below).

A screenshot of the Popular Placements Matrix in OptimalSort, with the top half of the diagram showing.
Popular Placements Matrix in OptimalSort- top half of the diagram.

A screenshot of the Popular Placements Matrix in OptimalSort, with the bottom half of the diagram showing.
Popular Placements Matrix in OptimalSort- scrolled to show the bottom half of the diagram.

The diagram shades the most popular placements for each category in blue, making it very easy to spot what belongs where in the eyes of your participants. It’s useful for quickly identifying clusters, and it also highlights the categories that didn’t get a lot of card sorting love. In our example study (the two images above), we can see that ‘Technology’ wasn’t a popular card category choice, potentially indicating ambiguity around that particular category name. As someone familiar with the Dewey Decimal Classification system, I know that ‘Technology’ is a bit of a tricky one because it contains a wide variety of content, including topics on medicine and food science - sometimes it will appear as ‘Technology & applied sciences’. These results appear to support the case for exploring that alternative further!

Where to from here?

Now that we’ve looked at how to interpret your open, hybrid and closed card sorts, here are some next steps to help you turn those insights into action!

Once you’ve analyzed your card sort results, it’s time to feed those insights into your design process and create your taxonomy, which goes hand in hand with your information architecture. You can build your taxonomy out in Post-it notes before popping it into a spreadsheet for review. This is also a great time to identify any alternative labelling and placement options that came out of your card sorting process for further testing. From here, you might move into tree testing your new IA, or you might run another card sort focusing on a specific area of your website. You can learn more about card sorting in general via our 101 guide.

When interpreting card sort results, don’t forget to have fun! It’s easy to get overwhelmed and bogged down in the results, but don’t lose sight of the magic that is uncovering user insights.

I’m going to leave you with this quote from Donna Spencer, which summarizes the essence of card sort analysis quite nicely:

“Remember that you are the one who is doing the thinking, not the technique... you are the one who puts it all together into a great solution. Follow your instincts, take some risks, and try new approaches.” - Donna Spencer

Further reading

  • Card Sorting 101 – Learn about the differences between open, closed and hybrid card sorts, and how to run your own using OptimalSort.


Decoding Taylor Swift: A data-driven deep dive into the Swiftie psyche 👱🏻‍♀️

Taylor Swift's music has captivated millions, but what do her fans really think about her extensive catalog? We've crunched the numbers, analyzed the data, and uncovered some fascinating insights into how Swifties perceive and categorize their favorite artist's work. Let's dive in!

The great debate: openers, encores, and everything in between ⋆.˚✮🎧✮˚.⋆

Our study asked fans to categorize Swift's songs into potential opening numbers, encores, and songs they'd rather not hear (affectionately dubbed "Nah" songs). The results? As diverse as Swift's discography itself!

Opening with a bang 💥

Swifties seem to agree that high-energy tracks make for the best concert openers, but the results are more nuanced than previously suggested. "Shake It Off" emerged as the clear favorite for opening a concert, with 17 votes. "Love Story" follows closely behind with 14 votes, showing that nostalgia indeed plays a significant role. Interestingly, both "Cruel Summer" and "Blank Space" tied for third place with 13 votes each.

This mix of songs from different eras of Swift's career suggests that fans appreciate both her newer hits and classic favorites when it comes to kicking off a show. The strong showing for "Love Story" does indeed speak to the power of nostalgia in concert experiences. It's worth noting that "...Ready for It?", while a popular song, received fewer votes (9) for the opening slot than might have been expected.

Encore extravaganza 🎤

When it comes to encores, fans seem to favor a diverse mix of Taylor Swift's discography, with a surprising tie at the top. "Slut!" (Taylor's Version), "exile", "Guilty as Sin?", and "Bad Blood (Remix)" all received the highest number of votes with 13 each. This variety showcases the breadth of Swift's career and the different aspects of her artistry that resonate with fans for a memorable show finale.

Close behind are "evermore", "Wildest Dreams", "ME!", "Love Story", and "Lavender Haze", each garnering 12 votes. It's particularly interesting to see both newer tracks and classic hits like "Love Story" maintaining strong popularity for the encore slot. This balance suggests that Swifties appreciate both nostalgia and Swift's artistic evolution when it comes to closing out a concert experience.

The "Nah" list 😒

Interestingly, some of Taylor Swift's tracks found themselves on the "Nah" list, indicating that fans might prefer not to hear them in a concert setting. "Clara Bow" tops this category with 13 votes, closely followed by "You're On Your Own, Kid", "You're Losing Me", and "Delicate", each receiving 12 votes.

This doesn't necessarily mean fans dislike these songs - they might just feel they're not well-suited for live performances or don't fit as well into a concert setlist. It's particularly surprising to see "Delicate" on this list, given its popularity. The presence of both newer tracks like "Clara Bow" and older ones like "Delicate" suggests that the "Nah" list isn't tied to a specific era of Swift's career, but rather to individual song preferences in a live concert context.

It's worth noting that even popular songs can end up on this list, highlighting the complex relationship fans have with different tracks in various contexts. This data provides an interesting insight into how Swifties perceive songs differently when considering them for a live performance versus general listening.

The Similarity Matrix: set list synergies ⚡

Our similarity matrix revealed fascinating insights into how fans envision Taylor Swift's songs fitting together in a concert set list:

1. The "Midnights" Connection: Songs from "Midnights" like "Midnight Rain", "The Black Dog", and "The Tortured Poets Department" showed high similarity in set list placement. This suggests fans see these tracks working well in similar parts of a concert, perhaps as a cohesive segment showcasing the album's distinct sound.

2. Cross-album transitions: There's an intriguing connection between "Guilty as Sin?" and "exile", with a high similarity percentage. This indicates fans see these songs from different albums as complementary in a live setting, potentially suggesting a smooth transition point in the set list that bridges different eras of Swift's career.

3. The show-stoppers: "Shake It Off" stands out as dissimilar to most other songs in terms of placement. This likely reflects its perceived role as a high-energy, statement piece that occupies a unique position in the set list, perhaps as an opener, closer, or peak moment.

4. Set list evolution: There's a noticeable pattern of higher similarity between songs from the same or adjacent eras, suggesting fans envision distinct segments for different periods of Swift's career within the concert. This could indicate a preference for a chronological journey through her discography or strategic placement of different styles throughout the show.

5. Thematic groupings: Some songs from different albums showed higher similarity, such as "Is It Over Now? (Taylor's Version)" and "You're On Your Own, Kid". This suggests fans see them working well together in the set list based on thematic or emotional connections rather than just album cohesion.

What does it all mean?! 💃🏼📊

This card sort data paints a picture of an artist who continually evolves while maintaining certain core elements that define her work. Swift's ability to create cohesive album experiences, make bold stylistic shifts, and maintain thematic threads throughout her career is reflected in how fans perceive and categorize her songs. Moreover, the diversity of opinions on song categorization - with 59 different songs suggested as potential openers - speaks to the depth and breadth of Swift's discography. It also highlights the personal nature of music appreciation; what one fan sees as the perfect opener, another might categorize as a "Nah".

In the end, this analysis gives us a fascinating glimpse into the complex web of associations in Swift's discography. It shows us not just how Swift has evolved as an artist, but how her fans have evolved with her, creating deep and sometimes unexpected connections between songs across her entire career. Whether you're a die-hard Swiftie or a casual listener, or a weirdo who just loves a good card sort, one thing is clear: Taylor Swift's music is rich, complex, and deeply meaningful to her fans. And with each new album, she continues to surprise, delight, and challenge our expectations.

Conclusion: shaking up our understanding 🥤🤔

This deep dive into the Swiftie psyche through a card sort reveals the complexity of Taylor Swift's discography and fans' relationship with it. From strategic song placement in a dream setlist to unexpected cross-era connections, we've uncovered layers of meaning that showcase Swift's artistry and her fans' engagement. The exercise demonstrates how a song can be a potential opener, mid-show energy boost, poignant closer, or a skip-worthy track, highlighting Swift's ability to create diverse, emotionally resonant music that serves various roles in the listening experience.

The analysis underscores Swift's evolving career, with distinct album clusters alongside surprising connections, painting a picture of an artist who reinvents herself while maintaining a core essence. It also demonstrates how fan-driven analyses like card sorting can be insightful and engaging, offering a unique window into music fandom and reminding us that in Swift's discography, there's always more to discover. This exercise proves valuable whether you're a die-hard Swiftie, casual listener, or someone who loves to analyze pop culture phenomena.


How to Spot and Destroy Evil Attractors in Your Tree (Part 1)

Usability guru Jared Spool has written extensively about the 'scent of information'. This term describes how users are always 'on the hunt' through a site, click by click, to find the content they’re looking for. Tree testing helps you deliver a strong scent by improving organisation (how you group your headings and subheadings) and labelling (what you call each of them).

Anyone who’s seen a spy film knows there are always false scents and red herrings to lead the hero astray. And anyone who’s run a few tree tests has probably seen the same thing — headings and labels that lure participants to the wrong answer. We call these ‘evil attractors’.

In Part 1 of this article, we’ll look at what evil attractors are, how to spot them at the answer end of your tree, and how to fix them. In Part 2, we’ll look at how to spot them in the higher levels of your tree.

The false scent — what it looks like in practice

One of my favourite examples of an evil attractor comes from a tree test we ran for consumer.org.nz, a New Zealand consumer-review website (similar to Consumer Reports in the USA). Their site listed a wide range of consumer products in a tree several levels deep, and they wanted to try out a few ideas to make things easier to find as the site grew bigger.

We ran the tests and got some useful answers, but we also noticed there was one particular subheading (Home > Appliances > Personal) that got clicks from participants looking for very different things — mobile phones, vacuum cleaners, home-theatre systems, and so on:

Treejack results showing the Personal subheading attracting clicks across unrelated tasks

The website intended the Personal appliance category to be for products like electric shavers and curling irons. But apparently, Personal meant many things to our participants: they also went there for ‘personal’ items like mobile phones and cordless drills that actually lived somewhere else.

This is the false scent — the heading that attracts clicks when it shouldn’t, leading participants astray. Hence this definition: an evil attractor is a heading that draws unwanted traffic across several unrelated tasks.

Evil attractors lead your users astray

Attracting clicks isn’t a bad thing in itself. After all, that’s what a good heading does — it attracts clicks for the content it contains (and discourages clicks for everything else). Evil attractors, on the other hand, attract clicks for things they shouldn’t. These attractors lure users down the wrong path, and when users find themselves in the wrong place they'll either back up and try elsewhere (if they’re patient) or give up (if they’re not). Because these attractor topics are magnets for the user’s attention, they make it less likely that your user will get to the place you intended. The other evil part of these attractors is the way they hide in the shadows. Most of the time, they don’t get the lion’s share of traffic for a given task. Instead, they’ll poach 5–10% of the responses, luring away a fraction of users who might otherwise have found the right answer.

Find evil attractors easily in your data

The easiest attractors to spot are those at the answer end of your tree (where participants ended up for each task). If we can look across tasks for similar wrong answers, then we can see which of these might be evil attractors.

In your Treejack results, the Destinations tab lets you do just that. Here’s more of the consumer.org.nz example:

The Destinations tab in Treejack, with the row for Personal highlighted

Normally, when you look at this view, you’re looking down a column for big hits and misses for a specific task. To look for evil attractors, however, you’re looking for patterns across rows. In other words, you’re looking horizontally, not vertically.

If we do that here, we immediately notice the row for Personal (highlighted yellow). See all those hits along the row? Those hits indicate an attractor — steady traffic across many tasks that seem to have little in common. But remember, traffic alone is not enough. We’re looking for unwanted traffic across unrelated tasks.

Do we see that here? Well, it looks like the tasks (about cameras, drills, laptops, vacuums, and so on) are not that closely related. We wouldn’t expect users to go to the same topic for each of these. And the answer they chose, Personal, certainly doesn’t seem to be the destination we intended. While we could rationalise why they chose this answer, it is definitely unwanted from an IA perspective. So yes, in this case, we seem to have caught an evil attractor red-handed. Here’s a heading that’s getting steady traffic where it shouldn’t.
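If you’d rather not scan the rows by eye, the same horizontal check is straightforward to script from exported results. The sketch below is illustrative only: the node names, percentages and both thresholds are made up, and this is not a Treejack feature:

```python
# Hypothetical destinations data: for each node, the percentage of
# participants who chose it as their answer, per task.
destinations = {
    "Home > Appliances > Personal": [9, 7, 6, 10, 8, 5],
    "Home > Technology > Mobile":   [55, 0, 2, 0, 1, 0],
    "Home > Appliances > Kitchen":  [1, 0, 48, 0, 2, 1],
}
# Tasks for which each node is a correct answer (by task index)
correct_tasks = {
    "Home > Technology > Mobile": {0},
    "Home > Appliances > Kitchen": {2},
}

MIN_SHARE = 5   # illustrative: poaching at least 5% of answers...
MIN_TASKS = 4   # ...in at least 4 tasks looks like an attractor

for node, shares in destinations.items():
    poached = [share for task, share in enumerate(shares)
               if task not in correct_tasks.get(node, set())
               and share >= MIN_SHARE]
    if len(poached) >= MIN_TASKS:
        print(f"Possible evil attractor: {node} "
              f"(>= {MIN_SHARE}% in {len(poached)} of {len(shares)} tasks)")
```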

Evil attractors are usually the result of ambiguity

It’s usually quite simple to figure out why an item in your tree is an evil attractor. In almost all cases, it’s because the item is vague or ambiguous — a word or phrase that could mean different things to different people. Look at our example above. In the context of a consumer-review site, Personal is too general to be a good heading. It could mean products you wear, or carry, or use in the bathroom, or a number of things. So, when those participants come along clutching a task, and they see Personal, a few of them think ‘That looks like it might be what I’m looking for’, and they go that way.

Individually, those choices may be defensible, but as an information architect, are you really going to group mobile phones with vacuum cleaners? The ‘personal’ link between them is tenuous at best.

Destroy evil attractors by being specific

Just as it’s easy to see why most attractors attract, it’s usually easy to fix them. Evil attractors trade in vagueness and ambiguity, so the obvious remedy is to make those headings more concrete and specific. In the consumer-site example, we looked at the actual content under the Personal heading. It turned out to be items like shavers, curling irons, and hair dryers. A quick discussion yielded Personal care as a promising replacement — one that should deter people looking for mobile phones and jewellery and the like.

In the second round of tree testing, among the other changes we made to the tree, we replaced Personal with Personal care. A few days later, the results confirmed our thinking. Our former evil attractor was no longer luring participants away from the correct answers:

Second-round Treejack results: Personal care no longer attracting unrelated traffic

Testing once is good, testing twice is magic

This brings up a final point about tree testing (and about any kind of user testing, really): you need to iterate your testing — once is not enough.

The first round of testing shows you where your tree is doing well (yay!) and where it needs more work, so you can make some thoughtful revisions. Be careful though: even if the problems you found seem to have obvious solutions, you still need to make sure your revisions actually work for users and don’t cause further problems. The good news is that it’s dead easy to run a second test, because it’s just a small revision of the first. You already have the tasks and all the other bits worked out, so it’s just a matter of making a copy in Treejack, pasting in your revised tree, and hooking up the correct answers. In an hour or two, you’re ready to pilot it again (to err is human, remember) and send it off to a fresh batch of participants.

Two possible outcomes await.

  • Your fixes are spot-on, the participants find the correct answers more frequently and easily, and your overall score climbs. You could have skipped this second test, but confirming that your changes worked is both good practice and a good feeling. It’s also something concrete to show your boss.
  • Some of your fixes didn’t work, or (given the tangled nature of IA work) they worked for the problems you saw in Round 1, but now they’ve caused more problems of their own. Bad news, for sure. But better that you uncover them now in the design phase (when it takes a few days to revise and re-test) instead of further down the track when the IA has been signed off and changes become painful.

Stay tuned for more on evil attractors

In Part 1, we’ve covered what evil attractors are and how to spot them at the answer end of your tree: that is, evil attractors that participants chose as their destination when performing tasks. Hopefully, a future version of Treejack will be able to highlight these attractors to make your analysis that much easier.

In Part 2, we’ll look at how to spot evil attractors in the intermediate levels of your tree, where they lure participants into a section of the site that you didn’t intend. These are harder to spot, but we’ll see if we can ferret them out.

Let us know if you’ve caught any evil attractors red-handed in your projects.
