Optimal Blog

Articles and Podcasts on Customer Service, AI and Automation, Product, and more


Latest


Are users always right? Well. It's complicated

About six months ago, I came across an interesting question on Stack Exchange headlined 'Should you concede to user demands that are clearly inferior?' It stuck in my mind because the question in itself is complex, and contains a few complicated assumptions.

In the world of user experience research and design, the user's needs and wants are paramount. Dollars and hours are spent poring over data, interviewing, and collating information into a cohesive explanation of what works and what doesn't for users. Designs are based on how users intuitively interact with products and websites. Organisations respond to suggestions that come through on support channels and on Twitter, and if a significant number of users want a particular change, chances are those organisations will act. But the question itself throws this most sacred of stances up in the air, because it contains the phrase 'user demands that are clearly inferior'. Now, that is a loaded statement.

How the good reconcile the existence of the bad

I imagine it's sometimes hard for designers to get rid of the feeling that they know best. As a writer, I know what I like and don't like. I 'know' good writing from bad, and I have strong opinions about books and articles that aren't worth the pages or bandwidth it takes to publish them. But this stance often puts me in conflict with the huge amount of empirical evidence that certain writing I disdain is actually 'good': and that evidence is readers. For Fifty Shades of Lame, it's millions of them. Aggghh!

In the same way, I've never met a designer who didn't have strong opinions about what they adore and deplore in their own art form. And I wonder how tough it sometimes is to implement changes that, to a designer's mind, make no sense. Do any of you UX designers out there ever secretly think, when you discover what users are asking for, 'These people have no taste, they don't know what they want, how ridiculous!'? Is there a secret current of despair and frustration at user ignorance running deep and unspoken through the river of design?

The main views from the Stack Exchange discussion

(Image: xkcd, 'Workflow')

On Stack Exchange, Matt described how he and his team implemented a single tree view (75 items) with a scroll wheel, and because it was an internal change, they were able to get quick feedback from existing users. The feedback wasn't positive, and many people wanted the change to be reversed. He explains: 'To my mind, the way we redeveloped it is unambiguously better. But the user base was equally emphatic in rejecting it. So today, to the complaints of my fellow team members, I removed our new implementation and set it to work in the manner the users were used to.'

He then goes on to ask 'What was the right course of action here? Is there a point at which the user's fear of change becomes an important UX consideration in its own right?' The responses are varied and fascinating, and can be roughly broken into three camps:

  1. If your users don't want something, you'd be stupid to try and implement it.
  2. Users are often change averse, so if you really think your change will be better, then you need to ease them into it.
  3. If you're convinced the change is positive, you still need to test it on your users, and be open to admitting you were wrong.

So where do we stand?

One of the problems with the term 'User Experience' is the word 'user'. It's a depersonalised and generic way of describing who it is you're serving, because there is a person at the heart of the enterprise who is trying to achieve something. They may not be trying to achieve what you expect them to. They certainly may not be trying to achieve what you want them to.

Context is everything.

Who is the person who is asking for a change, or asking for something to stay the same? We would argue that people aren't 'change-averse' but 'confusion/discomfort/inefficiency-averse': people want easier ways of doing things. So if by changing a feature you mess up a person's workflow, then potentially you didn't do your research.

If you look closely at the behavior of users — how people actually interact with a particular aspect of your design, rather than just hearing their opinions — then you'll be able to base your design on empirical evidence. So, we (roughly) come down on the side of the people who use the product. If they want to get something done, and they want to do that in a particular way, then they have right of way.

It's your job not to serve your tastes, but to give people the experience you promise them. And to the author of Fifty Shades of Grey, I say, 'Good on you EL James. You gave them what they wanted.'

What do you think?


User research and agile squadification at Trade Me

Hi, I’m Martin. I work as a UX researcher at Trade Me, having left Optimal Experience (Optimal Workshop's sister company) last year. For those of you who don’t know, Trade Me is New Zealand’s largest online auction site, which also lists real estate to buy and rent, cars to buy, job listings, travel accommodation and quite a few other things besides. Over three quarters of the population are members, and about three quarters of the Internet traffic for New Zealand sites goes to the sites we run.

Leaving a medium-sized consultancy and joining Trade Me has been a big change in many ways, but in others not so much, as I hadn’t expected to find myself operating in a small team of in-house consultants. The approach the team is taking is proving to be pretty effective, so I thought I’d share some of the details of the way we work with the readers of Optimal Workshop’s blog. Let me explain what I mean…

What agile at Trade Me looks like

Over the last year or so, Trade Me has moved all of its development teams over to Agile, following a model pioneered by Spotify. All of the software engineering parts of the business have been ‘squadified’. These people produce the websites and apps, or provide and support the infrastructure that makes everything possible. Across squads, there are common job roles in ‘Chapters’ (like designers or testers), and because people are not easy to force into boxes (and why should they be?), there are interest groups called ‘Guilds’. The squads are self-organizing, running their own processes and procedures to get to where they need to be. In practice, this means they use as many or as few of the Kanban, Scrum, and Rapid tools as they find useful. Over time, we’ve seen that squads tend to follow similar practices as they learn from each other.

How our UX team fits in

Our UX team of three sits outside the squads, but we work with them and with the product owners across the business. How does this work? It might seem counter-intuitive to have UX outside of the tightly-integrated, highly-focused squads, sometimes working with product owners on stuff that might have little to do with what’s currently being developed in the squads. This comes down to the way Trade Me divides up the UX responsibilities within the organization. Within each squad there is a designer. He or she is responsible for how that feature or app looks and, more importantly, how it acts — interaction design as well as visual design. Then what do we do, if we are the UX team?

We represent the voice of Trade Me’s users

By conducting research with Trade Me’s users we can validate the squads’ day-to-day decisions, and help frame decisions on future plans. We do this by wearing two hats. Wearing the pointy hats of structured, detailed researchers, we look into long-term trends: the detailed behaviours and goals of our different audiences. We’ve conducted lots of one-on-one interviews with hundreds of people, including top sellers, motor parts buyers, and job seekers, as well as running surveys, focus groups and user testing sessions of future-looking prototypes. For example, we recently spent time with a number of buyers and sellers, seeking to understand their motivations and getting under their skin to find out how they perceive Trade Me.

This kind of research enables Trade Me to anticipate and respond to changes in user perception and satisfaction. Swapping hats to an agile beanie (and stretching the metaphor to breaking point), we react to the medium-term, short-term and very short-term needs of the squads, testing their ideas, near-finished work and finished work with users, as well as sometimes simply answering questions and providing opinion based upon our research. Sometimes this means we can be testing something in the afternoon having only heard we were needed that morning. This might sound impossible to accommodate, but the pace of change at Trade Me is such that stuff is getting deployed pretty much every day, much of which affects our users directly. It’s our job to ensure that we support our colleagues to do the very best we can for our users.

How our ‘drop everything’ approach works in practice

(Image: screenshot of the new Trade Me iPhone app)

We recently conducted five or six rounds (no one can quite remember, we did it so quickly) of testing of our new iPhone application (pictured above) — sometimes testing more than one version at a time. The development team would receive our feedback face-to-face, make changes and we’d be testing the next version of the app the same or the next day. It’s only by doing this that we can ensure that Trade Me members will see positive changes happening daily rather than monthly.

How we prioritize what needs to get done

To help us decide what we should be doing at any one time, we have some simple rules for prioritising:

  • Core product over other business elements
  • Finish something over start something new
  • Committed work over non-committed work
  • Strategic priorities over non-strategic priorities
  • Responsive support over less time-critical work
  • Where our input is crucial over where our input is a bonus

Applying these rules to any situation makes the decision whether to jump in and help pretty easy. At any one time, each of us in the UX team will have one or more long-term projects, some medium-term projects, and either some short-term projects or the capacity for some short-term projects (usually achieved by putting aside a long-term project for a moment).
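
As an aside, these six rules amount to a lexicographic ordering: the first rule that distinguishes two pieces of work decides between them. Here's a toy sketch of that ordering in Python, purely illustrative; the task fields are hypothetical and this isn't a tool we actually use.

```python
# Toy sketch only: the rules above applied as a lexicographic sort,
# where earlier rules dominate later ones. The task fields are
# hypothetical, not a real Trade Me system.

def priority_key(task):
    """Return a sort key so that higher-priority work sorts first."""
    return (
        not task["core_product"],    # core product over other business elements
        not task["finishes_work"],   # finish something over start something new
        not task["committed"],       # committed work over non-committed work
        not task["strategic"],       # strategic over non-strategic priorities
        not task["time_critical"],   # responsive support over less time-critical work
        not task["input_crucial"],   # crucial input over input-as-a-bonus
    )

backlog = [
    {"name": "Exploratory research spike", "core_product": False,
     "finishes_work": False, "committed": False, "strategic": True,
     "time_critical": False, "input_crucial": False},
    {"name": "Next round of app testing", "core_product": True,
     "finishes_work": True, "committed": True, "strategic": True,
     "time_critical": True, "input_crucial": True},
]

for task in sorted(backlog, key=priority_key):
    print(task["name"])  # committed, core, in-flight work prints first
```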

We manage our time and projects on Trello, where we can see at a glance what’s happening this week and next, and what we’ve caught sniff of in the wind that might be coming up, or definitely is coming up. On the whole, both we and the squads favour fast-response, bulleted-list email ‘reports’ for any short-term requests for user testing. We get a report out within four hours of testing (usually well within that). After all, the squads are working in short sprints, and our involvement is often at the sharp end where delays are not welcome. Most people aren’t going to read past the management summary anyway, so why not just write that, unless you have to?

How we share our knowledge with the organization

Even though we mainly keep our reporting brief, we want the knowledge we’ve gained from working with each squad or on each product to be available to everyone. So we maintain a wiki that contains summaries of what we did for each piece of work, why we did it and what we found. Detailed reports, if there are any, are attached. We also send all reports out to staff who’ve subscribed to the UX interest email group.

Finally, we send out a monthly email, which looks across a bunch of research we’ve conducted, both short and long-term, and draws conclusions from which our colleagues can learn. All of these latter activities contribute to one of our key objectives: making Trade Me an even more user-centred organization than it is. I’ve been with Trade Me for about six months and we’re constantly refining our UX practices, but so far it seems to be working very well. Right, I’d better go – I’ve just been told I’m user testing something pretty big tomorrow and I need to write a test script!


How to Spot and Destroy Evil Attractors in Your Tree (Part 1)

Usability guru Jared Spool has written extensively about the 'scent of information'. This term describes how users are always 'on the hunt' through a site, click by click, to find the content they’re looking for. Tree testing helps you deliver a strong scent by improving organisation (how you group your headings and subheadings) and labelling (what you call each of them).

Anyone who’s seen a spy film knows there are always false scents and red herrings to lead the hero astray. And anyone who’s run a few tree tests has probably seen the same thing — headings and labels that lure participants to the wrong answer. We call these 'evil attractors'. In Part 1 of this article, we’ll look at what evil attractors are, how to spot them at the answer end of your tree, and how to fix them. In Part 2, we’ll look at how to spot them in the higher levels of your tree.

The false scent — what it looks like in practice

One of my favourite examples of an evil attractor comes from a tree test we ran for consumer.org.nz, a New Zealand consumer-review website (similar to Consumer Reports in the USA). Their site listed a wide range of consumer products in a tree several levels deep, and they wanted to try out a few ideas to make things easier to find as the site grew bigger. We ran the tests and got some useful answers, but we also noticed that one particular subheading (Home > Appliances > Personal) got clicks from participants looking for very different things — mobile phones, vacuum cleaners, home-theatre systems, and so on:

(Image: clicks on Home > Appliances > Personal across unrelated tasks)

The website intended the Personal appliance category to be for products like electric shavers and curling irons. But apparently, Personal meant many things to our participants: they also went there for 'personal' items like mobile phones and cordless drills that actually lived somewhere else. This is the false scent — the heading that attracts clicks when it shouldn’t, leading participants astray. Hence this definition: an evil attractor is a heading that draws unwanted traffic across several unrelated tasks.

Evil attractors lead your users astray

Attracting clicks isn’t a bad thing in itself. After all, that’s what a good heading does — it attracts clicks for the content it contains (and discourages clicks for everything else). Evil attractors, on the other hand, attract clicks for things they shouldn’t. These attractors lure users down the wrong path, and when users find themselves in the wrong place they'll either back up and try elsewhere (if they’re patient) or give up (if they’re not). Because these attractor topics are magnets for the user’s attention, they make it less likely that your user will get to the place you intended. The other evil part of these attractors is the way they hide in the shadows. Most of the time, they don’t get the lion’s share of traffic for a given task. Instead, they’ll poach 5–10% of the responses, luring away a fraction of users who might otherwise have found the right answer.

Find evil attractors easily in your data

The easiest attractors to spot are those at the answer end of your tree (where participants ended up for each task). If we look across tasks for similar wrong answers, we can see which of these might be evil attractors. In your Treejack results, the Destinations tab lets you do just that. Here’s more of the consumer.org.nz example:

(Image: the Destinations tab, with the Personal row highlighted)

Normally, when you look at this view, you’re looking down a column for big hits and misses for a specific task. To look for evil attractors, however, you’re looking for patterns across rows. In other words, you’re looking horizontally, not vertically. If we do that here, we immediately notice the row for Personal (highlighted yellow). See all those hits along the row? Those hits indicate an attractor — steady traffic across many tasks that seem to have little in common. But remember, traffic alone is not enough. We’re looking for unwanted traffic across unrelated tasks. Do we see that here? Well, it looks like the tasks (about cameras, drills, laptops, vacuums, and so on) are not that closely related. We wouldn’t expect users to go to the same topic for each of these. And the answer they chose, Personal, certainly doesn’t seem to be the destination we intended. While we could rationalise why they chose this answer, it is definitely unwanted from an IA perspective. So yes, in this case, we seem to have caught an evil attractor red-handed. Here’s a heading that’s getting steady traffic where it shouldn’t.
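
If you export the destination data rather than eyeballing the grid, you can scan for candidates programmatically. Here's a minimal sketch in Python (not a Treejack feature; the data shapes and the 5% threshold are assumptions for illustration) that flags destinations drawing a steady share of responses across several tasks where they weren't the intended answer:

```python
# Minimal sketch for flagging candidate evil attractors from exported
# tree test results. This is not a Treejack feature: the data shapes
# and thresholds here are assumptions for illustration.

def find_attractors(destinations, correct, min_share=0.05, min_tasks=3):
    """Flag destinations that draw a steady share of wrong answers
    across several tasks.

    destinations: {destination: {task: share of responses, 0..1}}
    correct:      {task: the destination we intended}
    """
    attractors = []
    for dest, shares in destinations.items():
        # Tasks where this destination poached traffic it shouldn't have.
        poached = [task for task, share in shares.items()
                   if share >= min_share and correct.get(task) != dest]
        if len(poached) >= min_tasks:
            attractors.append((dest, poached))
    return attractors

# Made-up numbers echoing the consumer.org.nz example: "Personal"
# poaches 5-10% of responses on several unrelated tasks.
destinations = {
    "Home > Appliances > Personal": {
        "Find a camera review": 0.08,
        "Compare cordless drills": 0.06,
        "Choose a vacuum cleaner": 0.07,
        "Pick a mobile phone": 0.10,
    },
}
correct = {
    "Find a camera review": "Home > Electronics > Cameras",
    "Compare cordless drills": "Home > Tools > Power tools",
    "Choose a vacuum cleaner": "Home > Appliances > Cleaning",
    "Pick a mobile phone": "Home > Electronics > Phones",
}
print(find_attractors(destinations, correct))
```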

Evil attractors are usually the result of ambiguity

It’s usually quite simple to figure out why an item in your tree is an evil attractor. In almost all cases, it’s because the item is vague or ambiguous — a word or phrase that could mean different things to different people. Look at our example above. In the context of a consumer-review site, Personal is too general to be a good heading. It could mean products you wear, or carry, or use in the bathroom, or any number of things. So, when those participants come along clutching a task, and they see Personal, a few of them think 'That looks like it might be what I’m looking for', and they go that way. Individually, those choices may be defensible, but as an information architect, are you really going to group mobile phones with vacuum cleaners? The 'personal' link between them is tenuous at best.

Destroy evil attractors by being specific

Just as it’s easy to see why most attractors attract, it’s usually easy to fix them. Evil attractors trade in vagueness and ambiguity, so the obvious remedy is to make those headings more concrete and specific. In the consumer-site example, we looked at the actual content under the Personal heading. It turned out to be items like shavers, curling irons, and hair dryers. A quick discussion yielded Personal care as a promising replacement — one that should deter people looking for mobile phones and jewellery and the like. In the second round of tree testing, among the other changes we made to the tree, we replaced Personal with Personal care. A few days later, the results confirmed our thinking. Our former evil attractor was no longer luring participants away from the correct answers:

(Image: second-round results, with Personal care no longer attracting unrelated tasks)

Testing once is good, testing twice is magic

This brings up a final point about tree testing (and about any kind of user testing, really): you need to iterate your testing — once is not enough. The first round of testing shows you where your tree is doing well (yay!) and where it needs more work, so you can make some thoughtful revisions. Be careful though. Even if the problems you found seem to have obvious solutions, you still need to make sure your revisions actually work for users, and don’t cause further problems. The good news is, it’s dead easy to run a second test, because it’s just a small revision of the first. You already have the tasks and all the other bits worked out, so it’s just a matter of making a copy in Treejack, pasting in your revised tree, and hooking up the correct answers. In an hour or two, you’re ready to pilot it again (to err is human, remember) and send it off to a fresh batch of participants.

Two possible outcomes await.

  • Your fixes are spot-on, the participants find the correct answers more frequently and easily, and your overall score climbs. You could have skipped this second test, but confirming that your changes worked is both good practice and a good feeling. It’s also something concrete to show your boss.
  • Some of your fixes didn’t work, or (given the tangled nature of IA work) they worked for the problems you saw in Round 1, but now they’ve caused more problems of their own. Bad news, for sure. But better that you uncover them now in the design phase (when it takes a few days to revise and re-test) instead of further down the track when the IA has been signed off and changes become painful.

Stay tuned for more on evil attractors

In Part 1, we’ve covered what evil attractors are and how to spot them at the answer end of your tree: that is, evil attractors that participants chose as their destination when performing tasks. Hopefully, a future version of Treejack will be able to highlight these attractors to make your analysis that much easier.

In Part 2, we’ll look at how to spot evil attractors in the intermediate levels of your tree, where they lure participants into a section of the site that you didn’t intend. These are harder to spot, but we’ll see if we can ferret them out. Let us know if you've caught any evil attractors red-handed in your projects.


Selling your design recommendations to clients and colleagues

If you’ve ever presented design findings or recommendations to clients or colleagues, then perhaps you’ve heard them say:

  • “We don’t have the budget or resources for those improvements.”
  • “The new executive project has higher priority.”
  • “Let’s postpone that to Phase 2.”

As an information architect, I‘ve presented recommendations many times. And I’ve crashed and burned more than once by doing a poor job of selling some promising ideas. Here are some things I’ve learned from getting it wrong.

Buyers prefer sellers they like and trust

You need to establish trust with peers, developers, executives and so on before you present your findings and recommendations. It sounds obvious, yet presentations often fail due to unfamiliarity, sloppiness or designer arrogance. A year ago I ran an IA test on a large company website. The project schedule was typically “aggressive” and the client’s VPs were endlessly busy. So I launched the test without their feedback. Saved time, right? Wrong. The client ignored all my IA recommendations, and their VPs ultimately rewrote my site map from scratch. I could have argued that they didn’t understand user-centered design. The truth is that I failed to establish credibility. I needed them to buy into the testing process, suggest test questions beforehand, or take the test as a control group. Anything to engage them would have helped – turning stakeholders into collaborators is a great way to establish trust.

Techniques for presenting UX recommendations

Many presentation tactics can be borrowed from salespeople, but a single blog post can’t do justice to the entire sales profession. So I’d just like to offer a few ideas as food for thought. No Jedi mind tricks though. Sincerity matters.

Emphasize product benefits, not product features

Beer commercials on TV don’t sell beer. They sell backyard parties and voluptuous strangers. Likewise, UX recommendations should emphasize product benefits rather than feature sets. This may be common marketing strategy. However, the benefits should resonate with stakeholders and not just test participants. Stakeholders often don’t care about Joe End User. They care about ROI, a more flexible platform, a faster way to publish content – whatever metrics determine their job performance. Several years ago, I researched call center data at a large corporation. To analyze the data, I eventually built a web dashboard. The dashboard illustrated different types of customer calls by product. When I showed it to my co-workers, I presented the features and even the benefits of tracking usability issues this way. However, I didn’t research the specific benefits to my fellow designers. Consequently, it was much, much harder to sell the idea. I should have investigated how a dashboard would fit into their daily routines. I had neglected the question they silently asked: “What’s in it for me?”

Have a go at contrast selling

When selling your recommendations, consider submitting your dream plan first. If your stakeholders balk, introduce the practical solution next. The contrast in price will make the modest recommendation more palatable. While working on an e-commerce UI, I once ran a usability test on a checkout flow. The test clearly suggested improvements to the payment page. To try slipping it into an upcoming sprint, I asked my boss if we could make a few crucial fixes. They wouldn’t take much time. He said... no. In essence, my boss was comparing extra work to doing nothing. My mistake was compromising the proposal before even presenting it. I should have requested an entire package first: a full redesign of the shopping cart experience on all web properties. Then the comparison would have been a huge effort vs. a small effort. Retailers take this approach every day. Car dealerships anchor buyers to lofty sticker prices, then offer cash back. Retailers like Amazon display strikethrough prices for similar effect. This works whenever buyers prefer justifying a purchase based on savings, not price.

Use the alternative choice close

Alternative Choice is a closing technique in which a buyer selects from two options. Cleverly, each answer implies a sale. Here are examples adapted for UX recommendations:

  • “Which website could we implement these changes on first, X or Y?”
  • “Which developer has more time available in the next sprint, Tom or Harry?”

This is better than simply asking, “Can we start on Website X?” or “Do we have any developers available?” Avoid any proposition that can be rejected with a direct “No.”

Convince with the embarrassment close

Buying decisions are emotional. When presenting recommendations to stakeholders, try appealing to their pride (remember, you’re not actually trying to embarrass someone). Again, sincerity is important. Some UX examples include:

  • “To be an industry leader, we need a best-of-breed design like Acme Co.”
  • “I know that you want your company to be the best. That’s why we’re recommending a full set of improvements instead of a quick fix.”

Techniques for answering objections once you’ve presented

Once you’ve done your best to present your design recommendations, you may still encounter resistance (surprise!). To make it simple, I’ve classified objections using the three points in the triangle model of project management: Time, Price and Quality. Any project can only have two. And when presenting design research, you’re advocating Quality, i.e. design usability or enhancements. Pushback on Quality generally means that people disagree with your designs (a topic for another day).

Therefore, objections will likely be based on Time or Price instead. In a perfect world, all design recommendations yield ROI backed by quantitative data. But many don’t. When selling the intangibles of “user experience” or “usability” improvements, here are some responses to consider when you hear “We don’t have time” or “We can’t afford it”.

“We don’t have time” means your project team values Time over Quality

If possible, ask people to consider future repercussions. If your proposal isn’t implemented now, it may require even more time and money later. Product lines and features expand, and new websites and mobile apps get built. What will your design improvements cost across the board in 6 months? Opportunity costs also matter. If your design recommendations are postponed, then perhaps you’ll miss the holiday shopping season, or the launch of your latest software release. What is the cost of not approving your recommendations?

“We can’t afford it” means your project team values Price over Quality

Many project sponsors nix user testing to reduce the design price tag. But there’s always a long-term cost. A buggy product generates customer complaints. The flawed design must then be tested, redesigned, and recoded. So, which is cheaper: paying for a single usability test now, or the aggregate cost of user dissatisfaction and future rework? Explain the difference between price and cost to your team.

Parting Thoughts

I realize that this only scratches the surface of sales, negotiation, persuasion and influence. Entire books have been written on topics like body language alone. Uncommon books in a UX library might be “Influence: The Psychology of Persuasion” by Robert Cialdini and “Secrets of Closing the Sale” by Zig Ziglar. Feel free to share your own ideas or references as well. Any time we present user research, we’re selling. Stakeholder mental models are just as relevant as user mental models.


UX and careers in banking – Yawn or YAY?

In celebration of World Usability Day 2012, Optimal Workshop invited Natalie Kerschner, Senior Usability Analyst at BNZ Online, to give her take on this year’s theme of The Usability of Financial Systems.

Years ago, when I was starting my career in User Experience (UX), a big project came up that required a full-time UX role. At the time I was in a junior position, yet I was being given the chance to provide input throughout the entire project, help drive the design, define the business requirements and ensure it met as many user needs as possible. It was an exciting proposition, however there was one problem: it was based in a bank! I tried everything I could to remove myself from this project, as I couldn’t imagine anything worse; after all, there is nothing appealing about dealing with finances!

Twelve years on and I am still working for a bank; in fact I’ve worked in several banks, and all I can say is, oh how wrong I was! You see, there is one thing about finances: absolutely everybody has to deal with them! Whether you love to budget and have savings goals, or don’t want to think about it at all, you still have to use money.

That is what makes it a UX dream!

Most industries are limited by a few target demographics, but in every financial project you need to go back to basics and investigate who is using it, and the why, when and where. People’s motivations and needs tend to be so incredibly diverse, you are never going to get tired of asking “Why” in this industry. If having an extremely varied demographic wasn’t challenging enough, the dramatic evolution of technology is also changing how people are dealing with, and even thinking about, their finances.

Two years ago, if your bank didn’t have a mobile application or at least a mobile strategy, it wasn’t a major concern. Nowadays, as soon as a bank introduces a new mobile feature, social media sites are bombarded with comments from customers banking with competitors, saying, “When do we get this?” Times have rapidly changed, and the public has a much lower tolerance for waiting for new features to be developed, and that alone has had a huge impact on how we carry out UX in the financial field. We no longer have time to do lengthy and large-scale usability projects, as the technology, user needs and business needs can change radically in that time. As UX professionals, we have had to adapt to this changing landscape. The labs of old are gone, replaced by fast, iterative and, dare I say, Agile UX practices.

So what does a truly diverse demographic and swiftly changing technology give us?

In my particular situation, it gave me a marvelous opportunity to re-evaluate how I practiced UX, evolving it and integrating these new techniques into project teams a lot more easily than ever before. If you don’t have time for a full usability study at the end of a project, it makes sense to get the end users involved right from the start and keep them involved from start to finish. Yes, this is what the UX community has been saying we should do for years, but now it also makes sense to the business and development teams too. The fast changes in the industry are actually making it easier to get customer focus and input earlier, as project teams are more open to experimenting, trialing designs and ideas early on and seeing what happens.

So is working in the financial industry boring for a UX professional?

Hardly! Being a UX professional in this type of business landscape impels you to be drawn into the evolution of UX. Every day is filled with potential and fresh challenges, making the practice of UX in banking a whole lot more rewarding!

Natalie Kerschner, Senior Usability Analyst, BNZ Online


4 options for running a card sort

This morning I eavesdropped on a conversation between Amy Worley (@worleygirl) and The SemanticWill™ (@semanticwill) on "the twitters". Aside from recommending two books by Donna Spencer (@maadonna), I asked Nicole Kaufmann, one of the friendly consultants at Optimal Usability, if she had any advice for Amy about reorganising 404 books into categories that make more sense. I don't know Amy's email address, and this is much too long for a tweet. In any case, I thought it might be helpful for someone else too, so here's what Nicole had to say:

In general, I would recommend having at least three sources of information (e.g. 1x analytics + 1 open card sort + 1 tree test, or 2 card sorts + 1 tree test) in order to come up with a useful and reliable categorisation structure. Here are four options for how you could approach it (from my most preferred to least preferred):

Option A

  • Pick the 20-25 cards you think will be the most difficult and 20-25 cards you think will be the easiest to sort, and test those in one open card sort.
  • Based on the results, create one or two sets of category structures, which you can test in one or two closed card sorts. Consider replacing about half of the tested cards with new ones.
  • Based on the results of those two rounds of card sorting, create a categorisation structure and pick a set of difficult cards which you can turn into tasks to test in a tree test.
  • Plus: Categorisation is revised between studies. Relatively easy analysis.
  • Minus: Not all cards have been tested. Depending on the number of studies, needs about 80-110 participants. Time intensive.

Option B

  • Pick the 20-25 cards you think will be the most difficult and 20-25 cards you think will be the easiest to sort, and test those in one open card sort.
  • Based on the results, run one or more closed card sorts, excluding the easiest cards and adding some new cards which haven't been tested before.
  • Plus: Card sorts with a reasonable number of cards; only 40-60 participants needed; quick to analyse.
  • Minus: Potential bias and misleading results if the wrong cards are picked.

Option C

  • Create your own top-level categories (5-8) (could be based on a card sort) and assign cards to these categories, then pick random cards within those categories and set up a card sort for each (5-8 sorts).
  • Based on the results, create a categorisation structure and a set of tasks to test in a tree test.
  • Plus: A limited set of card sorts with a reasonable number of cards; quick to analyse; several sorts for comparison.
  • Minus: Potential bias and misleading results if the wrong top categories are picked. Potentially different categorisation schemes/approaches for each card sort, making them hard to combine into one solid categorisation structure.

Option D

  • Put all 404 cards into one open card sort, showing each participant only 40-50 cards (see the sketch below).
  • Do a follow-up card sort with the most difficult and easiest cards (similar to Option B).
  • Plus: All cards will have been tested.
  • Minus: You need at least 200-300 completed responses to get reasonable results. Depending on your participant sources, it may take ages to get that many participants.
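
For the curious, the mechanic behind Option D is simple random subsetting, so that coverage evens out across participants. A rough sketch in Python, with hypothetical card names (this isn't how any particular tool implements it):

```python
import random

# Rough sketch of Option D's mechanic: each participant sorts a random
# 40-50 card subset of the full set. Card names are hypothetical.

def cards_for_participant(all_cards, k=45, rng=random):
    """Draw a random subset of k cards for one participant."""
    return rng.sample(all_cards, k)

all_cards = [f"Book {i}" for i in range(1, 405)]  # the 404 items

# With 45 of 404 cards shown, each card reaches roughly 45/404 (about
# 11%) of participants, which is why 200-300 completed responses are
# needed before every card has been sorted often enough to trust.
rng = random.Random(42)
print(cards_for_participant(all_cards, rng=rng)[:5])
```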

