
Optimal Blog

Articles and Podcasts on Customer Service, AI and Automation, Product, and more


From Exposition to Resolution: Looking at User Experience as a Narrative Arc

“If storymapping could unearth patterns and bring together a cohesive story that engages audiences in the world of entertainment and film, why couldn’t we use a similar approach to engage our audiences?” – Donna Lichaw and Lis Hubert

User Experience work makes the most sense to me in the context of storytelling. So when I saw Donna Lichaw and Lis Hubert’s presentation on storymapping at edUi recently, it resonated. A user’s path through a website can be likened to the traditional storytelling structure of exposition, crisis or conflict — and even a climax or two.

The narrative arc and the user experience

So just how can the same structure that suits fairytales help us to design a compelling experience for our customers? Well, storyboarding is an obvious example of how UX design and storytelling mesh. A traditional storyboard for a movie or TV episode lays out sequential images to help visualize what the final production will show. Similarly, we map out users' needs and journeys via wireframes, sketches, and journey maps, all the while picturing how people will actually interact with the product.

But the connection between storytelling and the user experience design process goes even deeper than that. Every time a user interacts with our website or product, we get to tell them a story. And a traditional literary storytelling structure maps fairly well to just how users interact with the digital stories we’re telling. Hence Donna and Lis’ conception of storymapping as ‘a diagram that maps out a story using a traditional narrative structure called a narrative arc.’ They concede that while ‘using stories in UX design...is nothing new’, a ‘narrative-arc diagram could also help us to rapidly assess content strengths, weaknesses, and opportunities.’

Storytelling was a common theme at edUi

The edUi conference in Richmond, Virginia brought together an assembly of people who produce websites or web content for large institutions. I met people from libraries, universities, museums, various levels of government, and many other places. The theme of storytelling was present throughout, both explicitly and implicitly. Keynote speaker Matt Novak from Paleofuture talked about how futurists of the past tried to predict the future, and what we can learn from the stories they told. Matthew Edgar discussed what stories our failed content tells — what story does a 404 page tell? Or a page telling users they have zero search results? Both were great presentations that got me thinking about storytelling in a different way.

Ultimately, it all clicked for me when I attended Donna and Lis’ presentation ‘Storymapping: A MacGyver Approach to Content Strategy’ (and yes, it was as compelling as the title suggests). They presented a case study of how they applied a traditional narrative structure to a website redesign process. The basic story structure we all learned in school usually includes a pretty standard list of elements. Donna and Lis had tweaked the definitions a bit, and applied them to the process of how users interact with web content.

Points on the Narrative Arc (from their presentation)


Exposition — provides crucial background information and often ends with an ‘inciting incident’ that kicks off the rest of the story

Donna and Lis pointed out that in the context of doing content strategy work, the inciting incident could be the problem that kicks off a development process. I think it can also be the need that brings users to a website to begin with.

Rising Action — Building toward the climax, users explore a website using different approaches

Here I think the analogy is a little looser. While a story can sometimes be well-served by a long and winding rising action, it’s best to keep this part of the process a bit more straightforward in web work. If there’s too much opportunity for wandering, users may get lost or never come back.

Crisis / Climax — The turning point in a story, when the conflict comes to a peak

The crisis is what leads users to your site in the first place — a problem to solve, an answer to find, a purchase to make. And to me the climax sounds like the aha! moment that we all aspire to provide, when the user answers their question, makes a purchase, or otherwise feels satisfied from using the site. If a user never gets to this point, their story just peters out unresolved. They’re forced to either begin the entire process again on your site (now feeling frustrated, no doubt), or turn to a competitor.

Falling Action — The story or user interaction starts to wind down and loose ends are tied up

A confirmation of purchase is sent, or maybe the user signs up for a newsletter.

Denouement / Resolution — The end of the story, the main conflict is resolved

The user goes away with a hopefully positive experience, having been able to meet their information or product needs. If we’re lucky, they spread the word to others! Check out Part 2 of Donna and Lis' three-part article on storymapping. I definitely recommend exploring their ideas in more depth, and having a go at mapping your own UX projects to the above structure.

A word about crises

The idea of a ‘crisis’ is at the heart of the narrative arc. As we know from watching films and reading novels, the main character always has a problem to overcome. So crisis and conflict show up a few times through this process. While the word ‘crisis’ carries some negative connotations (and that clearly applies to visiting a terribly designed site!), I think it can be viewed more generally when we apply the term to user experience. Did your user have a crisis that brought them to your site? What are they trying to resolve by visiting it? Their central purpose can be the crisis that gives rise to all the other parts of their story.

Why storymapping to a narrative arc is good for your design

Mapping a user interaction along the narrative arc makes it easy to spot potential points of frustration, and also serves to keep the inciting incident or fundamental user need in the forefront of our thinking. Those points of frustration and interaction are natural fits for testing and further development.

For example, if your site has a low conversion rate, that translates to users never hitting the climactic point of their story. It might be helpful to look at their interactions from the earlier phases of their story before they get to the climax. Maybe your site doesn’t clearly establish its reason for existing (exposition), or it might be too hard for users to search and explore your content (rising action). Guiding the user through each phase of the structure described above makes it more difficult to skip an important part of how our content is found and used.
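To make that diagnosis concrete, here is a minimal sketch (in Python, with entirely hypothetical page names and visitor counts, not data from any real site) that maps funnel steps onto the narrative-arc phases above and flags the transition where the most users abandon their story.

```python
# Hypothetical funnel data: (narrative phase, page, visitors who reached it).
# The phase labels follow the arc described above; all numbers are made up.
funnel = [
    ("Exposition",     "Landing page",       1000),
    ("Rising action",  "Search and browse",   620),
    ("Climax",         "Checkout",            180),
    ("Falling action", "Order confirmation",  150),
]

def biggest_drop(steps):
    """Return the phase transition where the most users abandon their story."""
    worst, worst_rate = None, 0.0
    for (phase_a, _, n_a), (phase_b, _, n_b) in zip(steps, steps[1:]):
        drop_rate = 1 - n_b / n_a
        if drop_rate > worst_rate:
            worst, worst_rate = (phase_a, phase_b), drop_rate
    return worst, worst_rate

transition, rate = biggest_drop(funnel)
print(f"Largest drop-off: {transition[0]} -> {transition[1]} ({rate:.0%} of users lost)")
```

The phase labels are only a framing device; the point is to ask which part of the story, not just which page, is losing people.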

We can ask questions like:

  • How does each user task fit into a narrative structure?
  • Are we dumping them into the climax without any context?
  • Does the site lack a resolution or falling action?
  • How would it feel to be a user in those situations?

These questions bring up great objectives for qualitative testing — sitting down with a user and asking them to show us their story.

What to do before mapping to narrative arc

Many sessions at edUi also touched on analytics or user testing. In crafting a new story, we can’t ignore what’s already in place — especially if some of it is appreciated by users. So before we can start storymapping the user journey, we need to analyze our site analytics, and run quantitative and qualitative user tests. This user research will give us insights into what story we’re already telling (whether it’s on purpose or not).

What’s working about the narrative, and what isn’t? Even if a project is starting from scratch on a new site, your potential visitors will bring stories of their own. It might be useful to check stats to see if users leave early on in the process, during the exposition phase. A high bounce rate might mean a page doesn't supply that expositional content in a way that's clear and engaging enough to encourage further interaction. Looking at analytics and user testing data can be like a movie's advance test screening — you can establish how the audience/users actually want to experience the site's content.

How mapping to the narrative arc is playing out in my UX practice

Since I returned from edUi, I've been thinking about the narrative structure constantly. I find it helps me frame user interactions in a new way, and I've already spotted gaps in storytelling that can be easily filled in. My attention instantly went to the many forms on our site. What’s the Rising Action like at that point? Streamlining our forms and using friendly language can help keep the user’s story focused and moving forward toward clicking that submit button as a climax.

I’m also trying to remember that every user is the protagonist of their own story, and that what works for one narrative might not work for another. I’d like to experiment with ways to provide different kinds of exposition to different users. I think it’s possible to balance telling multiple stories on one site, but maybe it’s not the best idea to mix exposition for multiple stories on the same page. And I also wonder if we could provide cues to a user that direct them to exposition for their own inciting incident...a topic for another article perhaps.

What stories are you telling your users? Do they follow a clear arc, or are there rough transitions? These are great questions to ask yourself as you design experiences and analyze existing ones. The edUi conference was a great opportunity to investigate these ideas, and I can’t wait to return next year.


Moderated Card Sorts vs Online Card Sorts — why you need both

Have you ever suggested doing an online card sort and been told no 'because the conversation in the room is the most valuable part of a card sort'? I have.

Repeatedly.

I decided it was time someone actually tested that theory. So when the opportunity came up at work, I jumped on the chance to run this experiment. My research task was to determine the information architecture (IA) for a business line’s area of the workplace intranet. I ran an A/B test with five face-to-face moderated card sorts, each with 2-3 users, and I ran twenty-five online card sorts using OptimalSort. I chose OptimalSort because I’d never used it before, and since I enjoyed using Treejack so much I thought I’d try it out. There were forty-five cards in total. I conducted both tests using only the resources available, mostly of the human variety.

In this piece, I examine the benefits and challenges of both techniques.

Benefits of moderated card sorts — the gold is in the conversation


The opportunity to speak with users in person

I love meeting users. It reminds me of why I do what I do and motivates me to continuously improve.

The qualitative gold that came from listening to people think aloud as they worked through the activity

All five groups of 2-3 people worked well together and nobody had any reservations about sharing their thoughts. Each session was productive. I listened carefully and asked questions to help me understand why decisions were being made.

Working with paper

There’s something satisfying about moving pieces of paper around on a table and being able to cross things out and add new cards. The overall picture is so much clearer when all the cards are spread out in front of you, and you can narrow things down from there. Users are also more inclined to criticise the work at this early stage when it’s on paper and looks unresolved. And it’s inexpensive.

Challenges of moderated card sorts — oh, the time, the time it took!


I can sum this one up in two words: cat herding

Recruiting and organising users for the face-to-face card sort sessions took almost three days to complete! It was not easy trying to organise fifteen people into groups of three, let alone book session times that everyone could agree upon! Even after all that, a few of the sessions still had no-shows. I can forgive people their busy lives, but it’s still frustrating.

Can’t see the forest

No matter how carefully and patiently I explained to the users that we were just grouping like things together, many felt the need to construct a tree structure. I chose to run this study with a flat hierarchy, for the purposes of understanding what belongs together and how users would name each high-level group of information. It’s as though the moment users hear the word ‘website’, they have trouble detaching from what they know. Ultimately I solved this problem by sketching instructions on the whiteboard for each session. This gave users something visual to refer to and kept them focussed on what we all needed to achieve during the session.

My office was fresh out of barcode scanners

I would have loved to have tried the moderated card sort the OptimalSort way with the barcode scanner, but unfortunately I just didn’t have access to one. As a result of this, I had to manually input the cards retrospectively from the moderated sorts into OptimalSort to take advantage of the amazing results graphs. That took a few hours. I know you can pick them up pretty cheap, so I’ll be prepared for next time.

Benefits of online card sorting — the fire-and-forget factor


Positive comments left by users

This study received really positive comments from users that showed they liked the activity and were well and truly on board with the coming changes. Presenting positive feedback to executive staff is pretty powerful.

  • 'This was an interesting exercise, thanks! I'm glad I got to do this individually via the online approach, rather than having to discuss it in a group: I much prefer solo activities to group ones, as it usually takes less time.'
  • 'Logical grouping is extremely important in finding information. I'm pleased you are looking at this.'

The fire-and-forget factor

While it is entertaining to refresh the browser every five seconds to see if it has changed, OptimalSort really does take care of itself. The provided instructions are very useful and I did not receive a single phone call or email asking for help. This gave me time to start putting together the report and work on other projects, which saved both time and money.

The presentation of the results

You really can’t go past the beautiful yet useful way OptimalSort presents the results. These are charts that carry serious thud value when presented to management or executives because they back up your findings with actual numbers. The charts also make it incredibly easy to interpret the results and start iterating the next phase of the design. My personal favourite is the PCA (Participant-Centric Analysis) tab on the results dashboard. It provides advice on what you could do next when building your IA.

Basically, if OptimalSort had to pick a winning user submission, the first sort shown in the PCA tab would be it. It makes taking the next step in the design process that much easier.


Challenges of online card sorting — keeping the people going...


The high abandonment rate seen in this study

This study closed after one week with twenty-five completed responses and thirty abandoned responses. This is quite high; however, I honestly don’t believe the tool itself was the culprit. Of the thirty abandoned responses received, twenty-one participants ended the activity having sorted less than 5% of the cards, and twelve of those ended the task without sorting any cards at all. This tells me that they may have been overwhelmed by the size of the task and felt unable to complete it, especially since they were at work, after all, and had competing priorities. Drawing on this experience, next time I will keep the survey short and sweet to avoid overwhelming the user.
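For the curious, here is a minimal sketch (Python) of how that breakdown can be computed. The completion fractions below are made-up stand-ins arranged to match the counts above; the real figures would come from the card-sorting tool's participant export.

```python
# Hypothetical completion fractions for the 30 abandoned responses, arranged
# to match the counts described above (21 under 5%, 12 of them at zero).
abandoned = [0.0] * 12 + [0.02] * 9 + [0.3, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9, 0.9, 0.95]
completed = 25

abandonment_rate = len(abandoned) / (len(abandoned) + completed)
barely_started = sum(1 for f in abandoned if f < 0.05)  # sorted under 5% of the cards
never_started = sum(1 for f in abandoned if f == 0.0)   # sorted no cards at all

print(f"Abandonment rate: {abandonment_rate:.0%}")
print(f"Sorted under 5% of cards before quitting: {barely_started}")
print(f"Sorted no cards at all: {never_started}")
```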

I was unable to ask questions or seek further clarification around decisions made

I have a rule around online testing activities. All recorded responses are anonymous — even from me. I do this because I want users to feel comfortable and be willing to participate in future testing activities. I also feel that it preserves the integrity of the results and doesn’t allow for assumptions to come into play. Because of this, I don’t know who responded with what, and I can’t ask questions if I’m not clear on something. Had I included some targeted post-survey questions, this issue could have been avoided.

Our brand colour and the submit button were too similar

I always try to use softer colours to avoid scaring people on the opening screen, but you have to be careful with this one. The background colour is [Ed: was!] also the colour of the submit button on the card sorting screen, where it appears against a black background. A colour that looks nice on your opening screen will not do you any favours when that same colour appears on the submit button and doesn't contrast well against the black background. Beyond the obvious accessibility issue, you also risk committing the crime of playing ‘Where’s Wally?’ with the user when they can’t find the button! This challenge does, however, have a happy ending. I mentioned this issue to Optimal Workshop and they fixed it! How awesome is that?!

So, are the two techniques best friends or mere acquaintances?

They complemented each other! Despite the differences in delivery methods, both sets of card sort results told a similar story and each enriched the overall picture. There were no show-stopping differences or contradictions between the two. The themes of comments left in the online version also matched those overheard during the conversations in the moderated sorts.

  • 'I was unsure what a couple of these actually meant; I would rename them to make their meaning explicit.' (comment left by a user in the online card sort)
  • 'There’s too much jargon! Make it real and use language that people understand.' (comment made by a user during a moderated card sort)

The biggest finding overall was that users were grouping content by keywords and task-related subjects. Not entirely groundbreaking information on its own; however, it does break the current model, which groups content by organisational structure and product owner. This study indicated that the users don’t see the organisational structure; they just want to solve a problem or find information without having to think about where it lives or who owns it. This research is valuable because we can now back up our design decisions with evidence. We can use this information to construct an IA that will actually work. It has also provided insights into user workflows and their understanding of the organisation as a whole. So, there you have it: mystery solved! But which card sorting activity wins?

My recommendation: Get the best of both worlds

Conduct a moderated card sort the OptimalSort way! This study has shown that on their own, moderated card sorts and online card sorts are both valuable in their own way. When combined, they join forces to create a powerful hybrid and it’s really easy to do. You still type your card labels into OptimalSort, but the difference is you print them out and each card has a barcode on it. The next step is to run your moderated card sort as you normally would. Then using a common barcode scanner, you would scan the cards back into OptimalSort and reap all the benefits of the result graphs and analysis tools. This approach gives you the qualitative face time with your users and the quantitative results to back up your thinking.

I really enjoyed running this experiment and I’m a huge fan of A/B testing. I would love to hear your thoughts, and look forward to sharing my next discovery with you.


Are users always right? Well. It's complicated

About six months ago, I came across an interesting question on Stack Exchange headlined 'Should you concede to user demands that are clearly inferior?' It stuck in my mind because the question in itself is complex, and contains a few complicated assumptions.

In the world of user experience research and design, users' needs and wants are paramount. Dollars and hours are spent poring through data, interviewing, and collating information into a cohesive explanation of what works and what doesn't for users. Designs are based on how users intuitively interact with products and websites. Organisations respond to suggestions that come through on support channels and on Twitter, and if a significant number of users want a particular change, chances are those organisations will act. But the question itself throws this most sacred of stances up in the air, because it contains the phrase 'user demands that are clearly inferior'. Now, that is a loaded statement.

How the good reconcile the existence of the bad

I imagine it's sometimes hard for designers to get rid of the feeling that they know best. As a writer, I know what I like and don't like. I 'know' good writing from bad, and I have strong opinions about books and articles that aren't worth the pages or bandwidth it takes to publish them. But this stance often puts me in conflict with the huge amount of empirical evidence that certain writing I disdain is actually 'good': and that evidence is readers. For Fifty Shades of Lame, it's millions of them. Aggghh!

In the same way, I've never met a designer who didn't have strong opinions about what they adore and deplore in their own art forms. And I wonder how tough it sometimes is to implement changes that, to a designer's mind, make no sense. Do any of you UX designers out there ever secretly think, when you discover what users are asking for, 'these people have no taste, they don't know what they want, how ridiculous!'? Is there a secret current of despair and frustration at user ignorance running deep and unspoken through the river of design?

The main views from the Stack Exchange discussion

(Image: xkcd, 'Workflow')

On Stack Exchange, Matt described how he and his team implemented a single tree view (75 items) with a scroll wheel, and because it was an internal change, they were able to get quick feedback from existing users. The feedback wasn't positive, and many people wanted the change to be reversed. He explains: ‘To my mind, the way we redeveloped it is unambiguously better. But the user base was equally emphatic in rejecting it. So today, to the complaints of my fellow team members, I removed our new implementation and set it to work in the manner the users were used to.'

He then goes on to ask 'What was the right course of action here? Is there a point at which the user's fear of change becomes an important UX consideration in its own right?' The responses are varied and fascinating, and can be roughly broken into three camps:

  1. If your users don't want something, you'd be stupid to try and implement it.
  2. Users are often change averse, so if you really think your change will be better, then you need to ease them into it.
  3. If you're convinced the change is positive, you still need to test it on your users, and be open to admitting you were wrong.

So where do we stand?

One of the problems with the term 'User Experience' is the word 'user'. It's a depersonalised and generic way of describing who it is you're serving, because there is a person at the heart of the enterprise who is trying to achieve something. They may not be trying to achieve what you expect them to. They certainly may not be trying to achieve what you want them to.

Context is everything.

Who is the person who is asking for a change, or asking for something to stay the same? We would argue that people aren't 'change-averse' but 'confusion/discomfort/inefficiency-averse': people want easier ways of doing things. So if by changing a feature you mess up a person's workflow, then potentially you didn't do your research.

If you look closely at the behavior of users — how people actually interact with a particular aspect of your design, rather than just hearing their opinions — then you'll be able to base your design on empirical evidence. So, we (roughly) come down on the side of the people who use the product. If they want to get something done, and they want to do that in a particular way, then they have right of way.

It's your job not to serve your tastes, but to give people the experience you promise them. And to the author of Fifty Shades of Grey, I say, 'Good on you EL James. You gave them what they wanted.'

What do you think?


User research and agile squadification at Trade Me

Hi, I’m Martin. I work as a UX researcher at Trade Me, having left Optimal Experience (Optimal Workshop's sister company) last year. For those of you who don’t know, Trade Me is New Zealand’s largest online auction site, and it also lists real estate to buy and rent, cars to buy, job listings, travel accommodation, and quite a few other things besides. Over three quarters of the population are members, and about three quarters of the Internet traffic for New Zealand sites goes to the sites we run.

Leaving a medium-sized consultancy and joining Trade Me has been a big change in many ways, but in others not so much: somewhat unexpectedly, I've found myself operating in a small team of in-house consultants. The approach the team is taking is proving to be pretty effective, so I thought I’d share some of the details of the way we work with the readers of Optimal Workshop’s blog. Let me explain what I mean…

What agile at Trade Me looks like

Over the last year or so, Trade Me has moved all of its development teams over to Agile, following a model pioneered by Spotify. All of the software engineering parts of the business have been ‘squadified’. These people produce the websites and apps, or provide and support the infrastructure that makes everything possible.

Across squads, there are common job roles in ‘Chapters’ (like designers or testers), and because people are not easy to force into boxes (and why should they be?), there are interest groups called ‘Guilds’.

The squads are self-organizing, running their own processes and procedures to get to where they need to be. In practice, this means they use as many or as few of the Kanban, Scrum, and Rapid tools as they find useful. Over time, we’ve seen that squads tend to follow similar practices as they learn from each other.

How our UX team fits in

Our UX team of three sits outside the squads, but we work with them and with the product owners across the business. How does this work? It might seem counter-intuitive to have UX outside of the tightly-integrated, highly-focused squads, sometimes working with product owners on stuff that might have little to do with what’s currently being developed in the squads. This comes down to the way Trade Me divides up the UX responsibilities within the organization. Within each squad there is a designer. He or she is responsible for how that feature or app looks, and, more importantly, how it acts — interaction design as well as visual design. Then what do we do, if we are the UX team?

We represent the voice of Trade Me’s users

By conducting research with Trade Me’s users we can validate the squads’ day-to-day decisions, and help frame decisions on future plans. We do this by wearing two hats. Wearing the pointy hats of structured, detailed researchers, we look into long-term trends: the detailed behaviours and goals of our different audiences. We’ve conducted lots of one-on-one interviews with hundreds of people, including top sellers, motor parts buyers, and job seekers, as well as running surveys, focus groups and user testing sessions of future-looking prototypes. For example, we recently spent time with a number of buyers and sellers, seeking to understand their motivations and getting under their skin to find out how they perceive Trade Me.

This kind of research enables Trade Me to anticipate and respond to changes in user perception and satisfaction. Swapping hats to an agile beanie (and stretching the metaphor to breaking point), we react to the medium-term, short-term and very short-term needs of the squads, testing their ideas, near-finished work and finished work with users, as well as sometimes simply answering questions and providing opinion based upon our research. Sometimes this means that we can be testing something in the afternoon having only heard we were needed that morning. This might sound impossible to accommodate, but the pace of change at Trade Me is such that stuff is getting deployed pretty much every day, much of which affects our users directly. It’s our job to ensure that we support our colleagues to do the very best we can for our users.

How our ‘drop everything’ approach works in practice

(Screenshot: the new Trade Me iPhone application)

We recently conducted five or six rounds (no one can quite remember, we did it so quickly) of testing of our new iPhone application (pictured above) — sometimes testing more than one version at a time. The development team would receive our feedback face-to-face, make changes and we’d be testing the next version of the app the same or the next day. It’s only by doing this that we can ensure that Trade Me members will see positive changes happening daily rather than monthly.

How we prioritize what needs to get done

To help us try to decide what we should be doing at any one time we have some simple rules to prioritise:

  • Core product over other business elements
  • Finish something over start something new
  • Committed work over non-committed work
  • Strategic priorities over non-strategic priorities
  • Responsive support over less time-critical work
  • Where our input is crucial over where our input is a bonus

Applying these rules to any situation makes the decision whether to jump in and help pretty easy. At any one time, each of us in the UX team will have one or more long-term projects, some medium-term projects, and either some short-term projects or the capacity for some short-term projects (usually achieved by putting aside a long-term project for a moment).

We manage our time and projects on Trello, where we can see at a glance what’s happening this week and next, and what we’ve caught wind of that might be coming up, or definitely is coming up. On the whole, both we and the squads favour fast, bulleted-list email ‘reports’ for any short-term requests for user testing. We get a report out within four hours of testing (usually well within that). After all, the squads are working in short sprints, and our involvement is often at the sharp end, where delays are not welcome. Most people aren’t going to read past the management summary anyway, so why not just write that, unless you have to?

How we share our knowledge with the organization

Even though we mainly keep our reporting brief, we want the knowledge we’ve gained from working with each squad or on each product to be available to everyone. So we maintain a wiki that contains summaries of what we did for each piece of work, why we did it and what we found. Detailed reports, if there are any, are attached. We also send all reports out to staff who’ve subscribed to the UX interest email group.

Finally, we send out a monthly email, which looks across a bunch of research we’ve conducted, both short and long-term, and draws conclusions from which our colleagues can learn. All of these latter activities contribute to one of our key objectives: making Trade Me an even more user-centred organization than it already is.

I’ve been with Trade Me for about six months and we’re constantly refining our UX practices, but so far it seems to be working very well. Right, I’d better go – I’ve just been told I’m user testing something pretty big tomorrow and I need to write a test script!


How to Spot and Destroy Evil Attractors in Your Tree (Part 1)

Usability guru Jared Spool has written extensively about the 'scent of information'. This term describes how users are always 'on the hunt' through a site, click by click, to find the content they’re looking for. Tree testing helps you deliver a strong scent by improving organisation (how you group your headings and subheadings) and labelling (what you call each of them).

Anyone who’s seen a spy film knows there are always false scents and red herrings to lead the hero astray. And anyone who’s run a few tree tests has probably seen the same thing — headings and labels that lure participants to the wrong answer. We call these 'evil attractors'. In Part 1 of this article, we’ll look at what evil attractors are, how to spot them at the answer end of your tree, and how to fix them. In Part 2, we’ll look at how to spot them in the higher levels of your tree.

The false scent — what it looks like in practice

One of my favourite examples of an evil attractor comes from a tree test we ran for consumer.org.nz, a New Zealand consumer-review website (similar to Consumer Reports in the USA). Their site listed a wide range of consumer products in a tree several levels deep, and they wanted to try out a few ideas to make things easier to find as the site grew bigger. We ran the tests and got some useful answers, but we also noticed there was one particular subheading (Home > Appliances > Personal) that got clicks from participants looking for very different things — mobile phones, vacuum cleaners, home-theatre systems, and so on:

(Screenshot: tree test results showing Home > Appliances > Personal attracting clicks across unrelated tasks)

The website intended the Personal appliance category to be for products like electric shavers and curling irons. But apparently, Personal meant many things to our participants: they also went there for 'personal' items like mobile phones and cordless drills that actually lived somewhere else. This is the false scent — the heading that attracts clicks when it shouldn’t, leading participants astray. Hence this definition: an evil attractor is a heading that draws unwanted traffic across several unrelated tasks.

Evil attractors lead your users astray

Attracting clicks isn’t a bad thing in itself. After all, that’s what a good heading does — it attracts clicks for the content it contains (and discourages clicks for everything else). Evil attractors, on the other hand, attract clicks for things they shouldn’t. These attractors lure users down the wrong path, and when users find themselves in the wrong place they'll either back up and try elsewhere (if they’re patient) or give up (if they’re not). Because these attractor topics are magnets for the user’s attention, they make it less likely that your user will get to the place you intended. The other evil part of these attractors is the way they hide in the shadows. Most of the time, they don’t get the lion’s share of traffic for a given task. Instead, they’ll poach 5–10% of the responses, luring away a fraction of users who might otherwise have found the right answer.

Find evil attractors easily in your data

The easiest attractors to spot are those at the answer end of your tree (where participants ended up for each task). If we can look across tasks for similar wrong answers, then we can see which of these might be evil attractors. In your Treejack results, the Destinations tab lets you do just that. Here’s more of the consumer.org.nz example:

(Screenshot: the Destinations table from the consumer.org.nz tree test, with the Personal row highlighted)

Normally, when you look at this view, you’re looking down a column for big hits and misses for a specific task. To look for evil attractors, however, you’re looking for patterns across rows. In other words, you’re looking horizontally, not vertically. If we do that here, we immediately notice the row for Personal (highlighted yellow). See all those hits along the row? Those hits indicate an attractor — steady traffic across many tasks that seem to have little in common. But remember, traffic alone is not enough. We’re looking for unwanted traffic across unrelated tasks. Do we see that here? Well, it looks like the tasks (about cameras, drills, laptops, vacuums, and so on) are not that closely related. We wouldn’t expect users to go to the same topic for each of these. And the answer they chose, Personal, certainly doesn’t seem to be the destination we intended. While we could rationalise why they chose this answer, it is definitely unwanted from an IA perspective. So yes, in this case, we seem to have caught an evil attractor red-handed. Here’s a heading that’s getting steady traffic where it shouldn’t.
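If you would rather scan for these rows programmatically than by eye, here is a minimal sketch in Python. The headings, task names, and traffic shares are hypothetical stand-ins for a real Destinations export, and the 5% threshold simply echoes the rule-of-thumb range mentioned earlier.

```python
# Hypothetical Destinations data: for each heading, the share of participants
# who ended each task there. Real figures would come from the Destinations tab.
destinations = {
    "Home > Appliances > Personal":  {"camera": 0.08, "drill": 0.06, "laptop": 0.07, "vacuum": 0.09},
    "Home > Appliances > Whiteware": {"camera": 0.00, "drill": 0.00, "laptop": 0.00, "vacuum": 0.62},
    "Home > Technology > Computers": {"camera": 0.05, "drill": 0.00, "laptop": 0.71, "vacuum": 0.00},
}

def possible_evil_attractors(rows, min_share=0.05, min_tasks=3):
    """Flag headings drawing at least min_share of traffic on min_tasks or more tasks.

    Whether that traffic is unwanted still needs a human call: exclude any task
    for which the heading is the intended correct answer before counting.
    """
    flagged = []
    for heading, shares in rows.items():
        attracting = [task for task, share in shares.items() if share >= min_share]
        if len(attracting) >= min_tasks:
            flagged.append((heading, attracting))
    return flagged

for heading, tasks in possible_evil_attractors(destinations):
    print(f"{heading} attracts traffic on: {', '.join(tasks)}")
```

The flagged rows are only candidates; the judgement about whether the traffic is truly unwanted, and across genuinely unrelated tasks, is still yours to make.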

Evil attractors are usually the result of ambiguity

It’s usually quite simple to figure out why an item in your tree is an evil attractor. In almost all cases, it’s because the item is vague or ambiguous — a word or phrase that could mean different things to different people. Look at our example above. In the context of a consumer-review site, Personal is too general to be a good heading. It could mean products you wear, or carry, or use in the bathroom, or a number of things. So, when those participants come along clutching a task, and they see Personal, a few of them think 'That looks like it might be what I’m looking for', and they go that way. Individually, those choices may be defensible, but as an information architect, are you really going to group mobile phones with vacuum cleaners? The 'personal' link between them is tenuous at best.

Destroy evil attractors by being specific

Just as it’s easy to see why most attractors attract, it’s usually easy to fix them. Evil attractors trade in vagueness and ambiguity, so the obvious remedy is to make those headings more concrete and specific. In the consumer-site example, we looked at the actual content under the Personal heading. It turned out to be items like shavers, curling irons, and hair dryers. A quick discussion yielded Personal care as a promising replacement — one that should deter people looking for mobile phones and jewellery and the like. In the second round of tree testing, among the other changes we made to the tree, we replaced Personal with Personal care. A few days later, the results confirmed our thinking. Our former evil attractor was no longer luring participants away from the correct answers:

(Screenshot: second-round Destinations results, with Personal care no longer attracting unrelated traffic)

Testing once is good, testing twice is magic

This brings up a final point about tree testing (and about any kind of user testing, really): you need to iterate your testing — once is not enough. The first round of testing shows you where your tree is doing well (yay!) and where it needs more work, so you can make some thoughtful revisions. Be careful though. Even if the problems you found seem to have obvious solutions, you still need to make sure your revisions actually work for users, and don’t cause further problems. The good news is, it’s dead easy to run a second test, because it’s just a small revision of the first. You already have the tasks and all the other bits worked out, so it’s just a matter of making a copy in Treejack, pasting in your revised tree, and hooking up the correct answers. In an hour or two, you’re ready to pilot it again (to err is human, remember) and send it off to a fresh batch of participants.

Two possible outcomes await.

  • Your fixes are spot-on, the participants find the correct answers more frequently and easily, and your overall score climbs. You could have skipped this second test, but confirming that your changes worked is both good practice and a good feeling. It’s also something concrete to show your boss.
  • Some of your fixes didn’t work, or (given the tangled nature of IA work) they worked for the problems you saw in Round 1, but now they’ve caused more problems of their own. Bad news, for sure. But better that you uncover them now in the design phase (when it takes a few days to revise and re-test) instead of further down the track when the IA has been signed off and changes become painful.

Stay tuned for more on evil attractors

In Part 1, we’ve covered what evil attractors are and how to spot them at the answer end of your tree: that is, evil attractors that participants chose as their destination when performing tasks. Hopefully, a future version of Treejack will be able to highlight these attractors to make your analysis that much easier.

In Part 2, we’ll look at how to spot evil attractors in the intermediate levels of your tree, where they lure participants into a section of the site that you didn’t intend. These are harder to spot, but we’ll see if we can ferret them out. Let us know if you've caught any evil attractors red-handed in your projects.


Selling your design recommendations to clients and colleagues

If you’ve ever presented design findings or recommendations to clients or colleagues, then perhaps you’ve heard them say:

  • “We don’t have the budget or resources for those improvements.”
  • “The new executive project has higher priority.”
  • “Let’s postpone that to Phase 2.”

As an information architect, I’ve presented recommendations many times. And I’ve crashed and burned more than once by doing a poor job of selling some promising ideas. Here are some things I’ve learned from getting it wrong.

Buyers prefer sellers they like and trust

You need to establish trust with peers, developers, executives and so on before you present your findings and recommendations. It sounds obvious, yet presentations often fail due to unfamiliarity, sloppiness or designer arrogance. A year ago I ran an IA test on a large company website. The project schedule was typically “aggressive” and the client’s VPs were endlessly busy. So I launched the test without their feedback. Saved time, right? Wrong. The client ignored all my IA recommendations, and their VPs ultimately rewrote my site map from scratch. I could have argued that they didn’t understand user-centered design. The truth is that I failed to establish credibility. I needed them to buy into the testing process, suggest test questions beforehand, or take the test as a control group. Anything to engage them would have helped – turning stakeholders into collaborators is a great way to establish trust.

Techniques for presenting UX recommendations

Many presentation tactics can be borrowed from salespeople, but a single blog post can’t do justice to the entire sales profession. So I’d just like to offer a few ideas for thought. No Jedi mind tricks though. Sincerity matters.

Emphasize product benefits, not product features

Beer commercials on TV don’t sell beer. They sell backyard parties and voluptuous strangers. Likewise, UX recommendations should emphasize product benefits rather than feature sets. This may be a common marketing strategy; however, the benefits should resonate with stakeholders and not just test participants. Stakeholders often don’t care about Joe End User. They care about ROI, a more flexible platform, a faster way to publish content – whatever metrics determine their job performance.

Several years ago, I researched call center data at a large corporation. To analyze the data, I eventually built a web dashboard that illustrated different types of customer calls by product. When I showed it to my co-workers, I presented the features and even the benefits of tracking usability issues this way. However, I didn’t research the specific benefits to my fellow designers. Consequently, it was much, much harder to sell the idea. I should have investigated how a dashboard would fit into their daily routines. I had neglected the question that they silently asked: “What’s in it for me?”

Have a go at contrast selling

When selling your recommendations, consider submitting your dream plan first. If your stakeholders balk, introduce the practical solution next. The contrast in price will make the modest recommendation more palatable.

While working on an e-commerce UI, I once ran a usability test on a checkout flow. The test clearly suggested improvements to the payment page. To try slipping it into an upcoming sprint, I asked my boss if we could make a few crucial fixes. They wouldn’t take much time. He said...no. In essence, my boss was comparing extra work to doing nothing. My mistake was compromising the proposal before even presenting it. I should have requested an entire package first: a full redesign of the shopping cart experience on all web properties. Then the comparison would have been a huge effort vs. a small effort.

Retailers take this approach every day. Car dealerships anchor buyers to lofty sticker prices, then offer cash back. Retailers like Amazon display strikethrough prices for similar effect. This works whenever buyers prefer justifying a purchase based on savings, not price.

Use the alternative choice close

Alternative Choice is a closing technique in which a buyer selects from two options. Cleverly, each answer implies a sale. Here are examples adapted for UX recommendations:

  • “Which website could we implement these changes on first, X or Y?”
  • “Which developer has more time available in the next sprint, Tom or Harry?”

This is better than simply asking, “Can we start on Website X?” or “Do we have any developers available?” Avoid any proposition that can be rejected with a direct “No.”

Convince with the embarrassment close

Buying decisions are emotional. When presenting recommendations to stakeholders, try appealing to their pride (remember, you’re not actually trying to embarrass someone). Again, sincerity is important. Some UX examples include:

  • “To be an industry leader, we need a best-of-breed design like Acme Co.”
  • “I know that you want your company to be the best. That’s why we’re recommending a full set of improvements instead of a quick fix.”

Techniques for answering objections once you’ve presented

Once you’ve done your best to present your design recommendations, you may still encounter resistance (surprise!). To make it simple, I’ve classified objections using the three points in the triangle model of project management: Time, Price and Quality. Any project can only have two. And when presenting design research, you’re advocating Quality, i.e. design usability or enhancements. Pushback on Quality generally means that people disagree with your designs (a topic for another day).

Therefore, objections will likely be based on Time or Price instead. In a perfect world, all design recommendations yield ROI backed by quantitative data. But many don’t. When selling the intangibles of “user experience” or “usability” improvements, here are some responses to consider when you hear “We don’t have time” or “We can’t afford it”.

“We don’t have time” means your project team values Time over Quality

If possible, ask people to consider future repercussions. If your proposal isn’t implemented now, it may require even more time and money later. Product lines and features expand, and new websites and mobile apps get built. What will your design improvements cost across the board in 6 months? Opportunity costs also matter. If your design recommendations are postponed, then perhaps you’ll miss the holiday shopping season, or the launch of your latest software release. What is the cost of not approving your recommendations?

“We can’t afford it” means your project team values Price over Quality

Many project sponsors nix user testing to reduce the design price tag. But there’s always a long-term cost. A buggy product generates customer complaints. The flawed design must then be tested, redesigned, and recoded. So, which is cheaper: paying for a single usability test now, or the aggregate cost of user dissatisfaction and future rework? Explain the difference between price and cost to your team.

Parting Thoughts

I realize that this only scratches the surface of sales, negotiation, persuasion and influence. Entire books have been written on topics like body language alone. Uncommon books in a UX library might be “Influence: The Psychology of Persuasion” by Robert Cialdini and “Secrets of Closing the Sale” by Zig Ziglar. Feel free to share your own ideas or references as well. Any time we present user research, we’re selling. Stakeholder mental models are just as relevant as user mental models.

