February 14, 2016

Around the world in 80 burgers—when First-click testing met McDonald’s

"It requires a certain kind of mind to see beauty in a hamburger bun" — Ray Kroc

Maccas. Mickey D’s. The golden arches. Whatever you call it, you know I’m talking about none other than fast-food giant McDonald’s. A survey of 7000 people across six countries 20 years ago by Sponsorship Research International found that more people recognized the golden arches symbol (88%) than the Christian cross (54%). With more than 35,000 restaurants in 118 countries and territories around the world, McDonald’s has come a long way since multi-mixer salesman Ray Kroc happened upon a small fast-food restaurant in 1954.

For an organization of this size and reach, consistency and strong branding are certainly key ingredients in its marketing mix. McDonald's restaurants all over the world are easily recognized, and while the menu differs slightly between countries, customers know what kind of experience to expect. With this in mind, I wondered whether the same is true of McDonald's web presence. How successful is a large organization like McDonald's at delivering a consistent online user experience tailored to diverse audiences worldwide without losing its core identity? I decided to investigate, and gave McDonald's a good grilling by testing the home pages of ten of its country-specific websites in one Chalkmark study.

Preparation time 🥒

First-click testing reveals the first impressions your users have of your designs. This information is useful for determining whether users are heading down the right path when they first arrive at your site. When considering the best way to measure and compare ten McDonald's websites from around the world, I chose first-click testing because I wanted to test the visual designs of each website, and I wanted to do it all in one research study. My first job in the setup process was to decide which McDonald's websites would make the cut.

My approach was to divide the planet up by continent, with the requirement that the selected sites be available in my native language (English) so that I could interpret the results. I chose: Australia, Canada, Fiji, India, Malaysia, New Zealand, Singapore, South Africa, the UK, and the US. The next task was to figure out how to test this. Ten tasks is ideal for a Chalkmark study, so I made it one task per website; however, determining what those tasks would be was tricky. Serving up the same task for all ten ran the risk of participants tiring of the repetition, but a level of consistency was necessary in order to compare the sites. I decided that all tasks would be different, but tied together with one common theme: burgers.

After all, you don’t win friends with salad.

Launching and sourcing participants 👱🏻👩👩🏻👧👧🏾

When sourcing participants for my research, I often hand the recruitment responsibilities over to Optimal Workshop because it’s super quick and easy; however, this time I decided to do something a bit different. Because McDonald’s is such a large and well-known organization visited by hundreds of millions of people every year, I decided to recruit entirely via Twitter by simply tweeting the link out. Am I three fries short of a happy meal for thinking this would work? Apparently not. In just under a week I had the 30+ completed responses needed to peel back the wrapper on McDonald’s.

Imagine what could have happened if it had been McDonald's tweeting that out to the burger-loving masses! Ideally, when recruiting for a first-click testing study, the more participants you can get, the more confident you can be in your results, but 30–50 completed responses will still provide viable data. Conducting user research doesn't have to be expensive; you can achieve quality results that cut the mustard for free. It's a great way to connect with your customers, and you could easily reward participants with, say, a burger voucher by redirecting them somewhere after they complete the activity — ooh, there's an idea!

Reading the results menu 🍽️

Interpreting the results from a Chalkmark study is quick and easy.

Analysis tabs in the Chalkmark dashboard

Everything you need is presented under a series of tabs under 'Analysis' in the results section of the dashboard:

  • Participants: this tab allows you to review details about every participant who started your Chalkmark study, and also contains handy filtering options for including, excluding, and segmenting.
  • Questionnaire: if you included any pre- or post-study questionnaires, you'll find the results here.
  • Task Results: this tab provides a detailed statistical overview of each task in your study, based on the correct areas you defined during setup (there's a rough sketch of this scoring logic below). This structure speeds up your analysis because everything you need to know about each task is contained in one diagram. Chalkmark also allows you to edit and define the correct areas retrospectively, so if you forget or make a mistake you can always fix it.

Example of the correct areas chart for Task 6
  • Clickmaps: under this tab you'll find three different types of visual clickmap for each task, showing you exactly where your participants clicked: heatmaps, grid, and selection. Heatmaps show the hotspots where participants clicked and can be switched to a greyscale view for greater readability. Grid maps show a larger block of color over the sections that were clicked, with the option to show the individual clicks. The selection map simply shows the individual clicks as black dots.

The heatmap for Task 1 in this study, shown in greyscale for improved readability
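If you're curious about the mechanics behind that Task Results scoring, it boils down to hit-testing each first click against the rectangles you drew as correct areas. Here's a rough sketch of that logic in Python — to be clear, this is not Chalkmark's actual code, and the coordinates are made up:

```python
# Hypothetical first-click scoring: test each click against the
# rectangular "correct areas" defined during study setup.

def in_area(click, area):
    """True if a click (x, y) lands inside a rectangular area."""
    x, y = click
    return (area["x"] <= x <= area["x"] + area["w"]
            and area["y"] <= y <= area["y"] + area["h"])

def success_rate(clicks, correct_areas):
    """Share of first clicks that hit any correct area."""
    hits = sum(any(in_area(c, a) for a in correct_areas) for c in clicks)
    return hits / len(clicks)

# Made-up task: the target link sits at (40, 120) with a 90x30 hit box.
areas = [{"x": 40, "y": 120, "w": 90, "h": 30}]
first_clicks = [(65, 130), (70, 135), (400, 300), (50, 125)]
print(f"{success_rate(first_clicks, areas):.0%}")  # 75%
```

Because the areas are stored separately from the clicks, redefining correct areas retrospectively (as described above) just means re-running the same calculation against new rectangles.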

What the deep fryer gave me 🍟🎁

McDonald’s tested ridiculously well right across the board in the Chalkmark study. Country by country in alphabetical order, here’s what I discovered:

  • Australia: 91% of participants successfully identified where to go to view the different types of chicken burgers
  • Canada: all participants in this study correctly identified the first click needed to locate the nutritional information of a cheeseburger
  • Fiji: 63% of participants correctly located information on where McDonald's sources its beef
  • India (West and South India site): Were this the real thing, 88% of participants in this study would have been able to order food for home delivery from the very first click, including the 16% who understood that the menu item ‘Convenience’ connected them to this service
  • Malaysia: 94% of participants were able to find out how many beef patties are on a Big Mac
  • New Zealand: 91% of participants in this study were able to locate information on the Almighty Angus™ burger from the first click
  • Singapore: 66% of participants were able to correctly identify the first click needed to locate the reduced-calorie dinner menu
  • South Africa: 94% of participants had no trouble locating the first click that would enable them to learn how burgers are prepared
  • UK: 63% of participants in this study correctly identified the first click for locating the Saver Menu
  • US: 75% of participants were able to find out if burger buns contain the same chemicals used to make yoga mats based on where their first clicks landed

The heatmap for the US task

Three reasons why McDonald’s nailed it 🍔 🚀

This study clearly shows that McDonald's is kicking serious goals in the online stakes, but before we call it quits and go home, let's look at why that may be the case. Approaching this the way any UXer worth their salt on their fries would, I stuck all the screens up on a wall, broke out the Sharpies and the Tesla Amazing magnetic notes (the best invention since Post-it notes), and embarked on the hunt for patterns and similarities — and wow, did I find them!

The worldwide wall of McDonald's

Navigation pattern use

Across the ten websites, I observed just two distinct navigation patterns: navigation menus at the top and to the left. The sites with a top navigation menu could also be broken down into two further groups: those with three labels (Australia, New Zealand, and Singapore) and those with more than three labels (Fiji, India, Malaysia, and South Africa). Australia and New Zealand shared the exact same labelling of ‘eat’, ‘learn’, and ‘play’ (despite being distinct countries), whereas the others had their own unique labels but with some subject matter crossover; for example, ‘People’ versus ‘Our People’.

McDonald's New Zealand website with its three-label top navigation bar.

Canada, the UK, and the US all had the same look and feel with their left side navigation bar, but each with different labels. All three still had navigation elements at the top of the page, but the main content that the other seven countries had in their top navigation bars was located in that left sidebar.

Left to right: Canada, the UK, and the US all have left side navigation bars but with their own unique labelling.

These patterns ensure that each site is tailored to its unique audience while still maintaining some consistency so that it’s clear they belong to the same entity.

Logo lovin’ it

If there's one aspect that screams McDonald's, it's the iconic golden arches in the logo. Across the ten sites, the logo does vary slightly in size, color, and composition, but it's always in the same place and the golden arches are always there. Logo consistency is a no-brainer, and in this case McDonald's clearly recognizes the strengths of its logo and understands which pieces it can add or remove without losing its identity.

McDonald's logos from left to right: Australia, Canada, Fiji, India (West and South India site), Malaysia, New Zealand, Singapore, South Africa, the UK, and the US as they appeared on the websites at the time of testing. How many different shades of red can you spot?

Subtle consistencies in the page layouts

Navigation and logo placement weren't the only connections to be drawn from my wall of McDonald's. There were also some very interesting but subtle similarities in the page layouts. The middle of the page is always used for images and advertising content, including videos and animated GIFs. The US version featured a particularly memorable advertisement for its all-day breakfast menu, complete with animated maple syrup slowly drizzling its way over a stack of hotcakes.

The McDonald's US website and its animated maple syrup.

The bottom of the page is consistently used on most sites to house more advertising content in the form of tiles. The sites without the tiles left this space blank.

Familiarity breeds … usability?

Looking at these results, it's quite clear that the consistency and recognition between McDonald's restaurants is also present between the different country websites. This made me wonder: what role does familiarity play in determining usability? Investigating, I found a few interesting articles on the subject. This article by Colleen Roller on UXmatters discusses the connection between cognitive fluency and familiarity, and the impact this has on decision-making. Colleen writes: "Because familiarity enables easy mental processing, it feels fluent. So people often equate the feeling of fluency with familiarity. That is, people often infer familiarity when a stimulus feels easy to process." If we're familiar with an item, we don't have to think too hard about it, and this reduction in performance load can make it feel easier to use.

I also found a fascinating read on Smashing Magazine by Charles Hannon that explores why Apple was able to claim 'You already know how to use it' when launching the iPad. It's well worth a look!

Oh, and about those yoga mats ... the answer is yes.

Related articles


Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we're working on a product with only 90 internal users, our changes will have flow-on effects for tens of thousands of our members at any given time (out of a total of around 3.6 million members).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high-volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job, a client's CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools, reports, and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and how to interact with the different tools and reports, all of which may have been created independently (i.e., they work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas into systems quickly. But expert users almost always end up regurgitating the system they're familiar with, as repeated use has trained them to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn't — most staff stick around for years), you'll be spending a lot of money.

… and the one I've added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code, or even the interaction for most of the reports, as this would be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggle six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern- and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms of arbitrarily named links, type URLs by hand to reach hidden reports, and generally expend more effort on finding the answer than on comprehending it.

Groundwork

The first thing we did was sit with CS, watch them work, and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports quickly became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things green (use heaps), orange (use sometimes), and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links, but that everyone used a core set. Focusing initially on that core set, we set about understanding the tasks under those links.

The complexity of the job soon became apparent: with a complex system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are complex and detailed enough that there's more than one way to achieve the same end, and often it's not possible to get a definitive answer — only to 'build a picture'. There's no one-to-one mapping of task to link. Links were also often arbitrarily named ('SQL Lookup' being an example), and the highly trained user base depends on muscle memory to find them. This meant that when asked something like "What and where is the policing enquiry function?", many couldn't tell us what or where it was, but when they needed the report it contained, they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We renamed the links to better describe their associated actions, and of course, they weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.

We built and tested two IAs

Pietree results from the tree testing

After card sorting, we created two new IAs, and then customized each of them for the three CS teams, giving us the IAs to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree tests were okay — around 61% — but 'could try harder'. We saw very little overall difference between the success of the two structures, but definitely some differences in individual task success. And we also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some 'wrong' answers would give part of the picture required — in some cases so much so that I reclassified them as 'correct', as they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguity is hard to replicate in a tree test, which wants definitive yes-or-no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see screenshot below), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had presumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.  

What's clear from the analysis is that although it's possible to provide definitive answers for a typical site's IA, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as 'correct', one of the two trees was a clear winner — it went from 61% to 69%. The other tree improved only slightly, from 61% to 63%.
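To make that reclassification concrete, here's a minimal Python sketch — the destinations and counts are invented, not our actual data — of promoting 'more right than wrong' answers to correct and recomputing a tree's success rate:

```python
# Hypothetical tree test results: destination chosen -> participant count.
tree_a = {"Member financials": 19, "SQL Lookup": 4, "Emails": 8}
correct = {"Member financials"}      # strictly correct destinations
partial = {"SQL Lookup"}             # gives part of the picture required

def success(results, accepted):
    """Share of participants who ended up at an accepted destination."""
    total = sum(results.values())
    return sum(n for dest, n in results.items() if dest in accepted) / total

print(f"strict:  {success(tree_a, correct):.0%}")            # 61%
print(f"lenient: {success(tree_a, correct | partial):.0%}")  # 74%
```

The judgment call, of course, is which destinations deserve promotion — that's where the Trade Me Admin expert came in.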

There were still elements of our winning structure performing sub-optimally, though. Generally, the problems were to do with labelling: in some cases we had attempted to disambiguate those 'SQL Lookup'-type labels, but in the process confused the team. We were left with the dilemma of whether to go with the new labels — making the system initially harder to use for existing staff but easier to learn for new staff — or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make the system better.

This highlighted the importance of carefully structuring questions in a tree test, particularly in light of the 'start anywhere, go anywhere' nature of a CRM. The diffuse but powerful nature of a CRM means that tree test answer options need careful consideration, in order to decide how close to a '100% correct answer' you want to get.

Development work has begun, so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages of Trade Me Admin, and continuing to conduct user research, including first-click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.


A short guide to personas

The word “persona” has many meanings. Sometimes the term refers to a part that an actor plays, other times it can mean a famous person, or even a character in a fictional play or book. But in the field of UX, persona has its own special meaning.

Before you get started with creating personas of your own, learn what they are and how to create one. We'll even let you in on a great little tip: how to use Chalkmark to refine and validate your personas.

What is a persona?

In the UX field, a persona is created using research on and observations of your users, which are analyzed and then depicted in the form of a person's profile. This individual is completely fictional, but is based on the research you've conducted into your own users. It's a form of segmentation, which Angus Jenkinson noted in his article "Beyond Segmentation" is a "better intellectual and practical tool for dealing with the interaction between the concept of the 'individual' and the concept of 'group'".

Typical user personas include very specific information in order to paint an in-depth and memorable picture for the people using them (e.g., designers, marketers, etc.).

The user personas you create don’t just represent a single individual either; they’ll actually represent a whole group. This allows you to condense your users into just a few segments, while giving you a much smaller set of groups to target.

There are many benefits of using personas. Here are just a few:

  • You can understand your clients better by seeing their pain points, what they want, and what they need
  • You can narrow your focus to a small number of groups that matter, rather than trying to design for everybody
  • They're useful for other teams too, from product management to design and marketing
  • They can help you clarify your business or brand
  • They can help you create a language for your brand
  • You can market your products in a better, more targeted way

How do I create a persona?

There’s no right or wrong way to create a persona; the way you make them can depend on many things, such as your own internal resources, and the type of persona you want.

The average persona that you’ve probably seen before in textbooks, online or in templates isn’t always the best kind to use (picture the common and overused types like ‘Busy Barry’). In fact, the way user personas are constructed is a highly debated topic in the UX industry.

Creating good user personas

Good user personas are meaningful descriptions — not just a list of demographics and a fake name that allows researchers to simply make assumptions.

Indi Young, an independent consultant and co-founder of Adaptive Path, is an advocate of creating personas that aren't just a list of demographics. In an article she penned on medium.com, Indi states: "To actually bring a description to life, to actually develop empathy, you need the deeper, underlying reasoning behind the preferences and statements-of-fact. You need the reasoning, reactions, and guiding principles."

One issue that can stem from traditional types of personas is they can be based on stereotypes, or even reinforce them. Things like gender, age, ethnicity, culture, and location can all play a part in doing this.

In a study by Phil Turner and Susan Turner titled “Is stereotyping inevitable when designing with personas?” the authors noted: “Stereotyped user representations appear to constrain both design and use in many aspects of everyday life, and those who advocate universal design recognise that stereotyping is an obstacle to achieving design for all.”

So it makes sense to scrap the stereotypes and, in many instances, irrelevant demographic data. Instead, include information that accurately describes the persona’s struggles, goals, thoughts and feelings — all bits of meaningful data.

Creating user personas involves a lot of research and analysis. Here are a few tips to get you started:

1) Do your research

When you're creating personas for UX, it's absolutely crucial you start with research; after all, you can't just pull this information out of thin air by making assumptions! Use a mixture of both qualitative and quantitative research in order to cast your net wide and get results that are really valuable. A great research method that falls into both realms is the user interview.

When you conduct your interviews, drill down into the types of behaviors, attitudes and goals your users have. It’s also important to mention that you can’t just examine what your users are saying to you — you need to tap into what they’re thinking and how they behave too.

2) Analyze and organize your data into segments

Once you’ve conducted your research, it’s time to analyze it. Look for trends in your results — can you see any similarities among your participants? Can you begin to group some of your participants together based on shared goals, attitudes and behaviors?

After you have sorted your participants into groups, you can create your segments. These segments will become your draft personas. Try to limit the number of personas you create. Having too many can defeat the purpose of creating them in the first place.

Don’t forget the little things! Give your personas a memorable title or name and maybe even assign an image or photo — it all helps to create a “real” person that your team can focus on and remember.
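If it helps to see the segmentation step in code, here's a toy Python sketch — the participants and codings below are entirely made up — that groups interviewees by shared goals and attitudes, with the largest groups becoming your draft persona segments:

```python
from collections import defaultdict

# Hypothetical interview data, coded into a shared goal and attitude.
participants = [
    {"name": "P1", "goal": "save time",  "attitude": "confident"},
    {"name": "P2", "goal": "save time",  "attitude": "confident"},
    {"name": "P3", "goal": "learn more", "attitude": "cautious"},
    {"name": "P4", "goal": "save time",  "attitude": "confident"},
]

# Group participants who share the same coding.
segments = defaultdict(list)
for p in participants:
    segments[(p["goal"], p["attitude"])].append(p["name"])

# Largest groups first: these are your draft persona candidates.
for traits, members in sorted(segments.items(), key=lambda s: -len(s[1])):
    print(traits, members)
```

Real analysis is messier — people rarely share codings exactly — but the principle is the same: segment on behaviors, attitudes, and goals rather than demographics.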

3) Review and test

After you’ve finalized your personas, it’s time to review them. Take another look at the responses you received from your initial user interviews and see if they match the personas you created. It’s also important you spend some time reviewing your finalized personas to see if any of them are too similar or overlap with one another. If they do, you might want to jump back a step and segment your data again.

This is also a great time to test your personas. Conduct another set of user interviews and research to validate your personas.

User persona templates and examples

Creating your personas using data from your user interviews can be a fun task — but make sure you don't go too crazy. Your personas need to be relevant, not overly complex, and a true representation of your users.

A great way to ensure your personas don’t get too out of hand is to use a template. There are many of these available online in a number of different formats and of varying quality.

This example from UX Lady contains a number of helpful bits of information you should include, such as user experience goals, tech expertise, and the types of devices used. The accompanying article also provides a fair bit of guidance on how to fill in your templates. While this template is good, skip the demographics portion and read Indi Young's article and books for better-quality persona creation.

Using Chalkmark to refine personas

Now it’s time to let you in on a little tip. Did you know Chalkmark can be used to refine and validate your personas?

One of the trickiest parts of creating personas is actually figuring out which ones are a true representation of your users — so this usually means lots of testing and refining to ensure you’re on the right track. Fortunately, Chalkmark makes the refinement and validation part pretty easy.

First, you need to have your personas finalized, or at least drafted. Take your results from the persona software or template you filled in, and create a survey for each segment so that you can see whether your participants' perceptions of themselves match each of your personas.

Second, create your test. This is a pretty simple demo we made when we were testing our own personas a few years ago at Optimal Workshop. Keep in mind this was a while ago and not a true representation of our current personas — they’ve definitely changed over time! During this step, it’s also quite helpful to include some post-test questions to drill down into your participants’ profiles.

After that, send these tests out to your identified segments (e.g., if you had a retail clothing store, some of your segments might be women of a certain age, and men of a certain age. Each segment would receive its own test). Our test involved three segments: “the aware”, “the informed”, and “the experienced” — again, this has changed over time and you’ll find your personas will change too.

Finally, analyze the results. If you created separate tests for each segment, you will now have filtered data for each one. This is the real meaty information you use to validate each persona. For example, our three persona tests all contained the questions "What's your experience with user research?" and "How much of your job description relates directly to user experience work?"

Some of the questionnaire results for Persona #2

Above, you'll see the results for Persona #2. This tells us that 34% of respondents identified that their job involves a lot of UX work (75–100%, in fact). In addition, 31% of this segment considered themselves "Confident" with remote user research, while a further 9% and 6% of this segment said they were "Experienced" and "Expert".

Persona #2's results for Task 1

These results all aligned with the persona we associated with that segment: “the informed”.

When you’re running your own tests, you’ll analyze the data in a very similar way. If the results from each of your segments’ Chalkmark tests don’t match up with the personas you created, it’s likely you need to adjust your personas. However, if each segment’s results happen to match up with your personas (like our example above), consider them validated!

For a bit more info on our very own Chalkmark persona test, check out this article.


"So, what do we get for our money?" Quantifying the ROI of UX

"Dear Optimal Workshop
How do I quantify the ROI [return on investment] of investing in user experience?"
— Brian

Dear Brian,

I'm going to answer your question with a resounding 'It depends'. I believe we all differ in what we're willing to invest, and what we expect to receive in return. So to start with, if you haven't already, it's worth grabbing your stationery tools of choice and brainstorming your way to a definition of ROI that works for you, or for the people you work for.

I personally define investment in UX as time given, money spent, and people utilized. And I define return on UX as time saved, money made, and people engaged. Oh, would you look at that — they’re the same! All three (time, money, and humans) exist on both sides of the ROI fence and are intrinsically linked. You can’t engage people if you don’t first devote time and money to utilizing your people in the best possible way! Does that make sense?

That’s just my definition — you might have a completely different way of counting those beans, and the organizations you work for may think differently again.

I'll share my thoughts on the things that are worth quantifying (that you could start measuring today if you were so inclined) and a few tips for doing so. And I'll point you towards useful resources to help with the nitty-gritty, dollars-and-cents calculations.

5 things worth quantifying for digital design projects

Here are five things I think are worthy of your attention when it comes to measuring the ROI of user experience, but there are plenty of others. And different projects will most likely call for different things.

(A quick note: There's a lot more to UX than just digital experiences, but because I don't know your specifics Brian, the ideas I share below apply mainly to digital products.)

1. What’s happening in the call centre?

A surefire way to get a feel for the lay of the land is to look at customer support — and if measuring support metrics isn't on your UX table yet, it's time to invite it to dinner. These general metrics are an important part of an ongoing, iterative design process, but getting specific about the best data to gather for individual projects will give you the most usable results.

Improving an application process on your website? Get hard numbers from the previous month on how many customers are asking for help with it, go away and do your magic, get the same numbers a month after launch, and you've got yourself compelling ROI data.
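The arithmetic is as simple as it sounds. A hypothetical example in Python, with invented ticket counts:

```python
# Support contacts about the application process, month before vs. month after.
before, after = 412, 181   # made-up numbers for illustration
change = (after - before) / before
print(f"{change:+.0%} support contacts after the redesign")  # -56%
```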

Are your support teams bombarded with calls and emails? Has the volume of requests increased or decreased since you released that new tool, product, or feature? Are there patterns within those requests — multiple people with the same issues? These are just a few questions you can get answers to.

You'll find a few great resources on this topic online, including this piece by Marko Nemberg that gives you an idea of the effects a big change in your product can have on support activity.

2. Navigation vs. Search

This is a good one: check your analytics to see if your users are searching or navigating. I’ve heard plenty of users say to me upfront that they'll always just type in the search bar and that they’d never ever navigate. Funny thing is, ten minutes later I see the same users naturally navigating their way to those gorgeous red patent leather pumps. Why?

Because as Zoltán Gócza explains in UX Myth #16, people do tend to scan for trigger words to help them navigate, and resort to problem solving behaviour (like searching) when they can’t find what they need. Cue frustration, and the potential for a pretty poor user experience that might just send customers running for the hills — or to your competitors. This research is worth exploring in more depth, so check out this article by Jared Spool, and this one by Jakob Nielsen (you know you can't go wrong with those two).

3. Are people actually completing tasks?

Task completion really is a fundamental UX metric, otherwise why are we sitting here?! We definitely need to find out if people who visit our website are able to do what they came for.

For ideas on measuring this, I've found the Government Service Design Manual by GOV.UK to be an excellent resource regardless of where you are or where you work, and in relation to task completion they say:

"When users are unable to complete a digital transaction, they can be pushed to use other channels. This leads to low levels of digital take-up and customer satisfaction, and a higher cost per transaction."

That 'higher cost per transaction' is your kicker when it comes to ROI.

So, how does GOV.UK suggest we quantify task completion? They offer a simple(ish) recommendation: measure the completion rate of the end-to-end process by going into your analytics and dividing the number of completed processes by the number of started processes.

While you're at it, check the time it takes for people to complete tasks as well. It could help you to uncover a whole host of other issues that may have gone unnoticed. To quantify this, start looking into what Kim Oslob on UXMatters calls 'Effectiveness and Efficiency ratios'. Effectiveness ratios can be determined by looking at success, error, abandonment, and timeout rates. And Efficiency ratios can be determined by looking at average clicks per task, average time taken per task, and unique page views per task.
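As a back-of-the-envelope illustration — the numbers below are invented, not from any real analytics account — here's how those calculations might look in Python:

```python
# Made-up analytics figures for one end-to-end process.
started, completed = 1840, 1232          # processes started vs. finished
errors, abandoned = 310, 298             # tasks ending in error / abandonment
total_clicks = 9216                      # clicks across completed tasks

# Effectiveness: GOV.UK-style completion rate, plus error/abandonment rates.
completion_rate = completed / started
error_rate = errors / started
abandonment_rate = abandoned / started

# Efficiency: average clicks per completed task.
avg_clicks_per_task = total_clicks / completed

print(f"completion: {completion_rate:.0%}")                            # 67%
print(f"errors: {error_rate:.0%}, abandoned: {abandonment_rate:.0%}")  # 17%, 16%
print(f"avg clicks/task: {avg_clicks_per_task:.1f}")                   # 7.5
```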

You do need to be careful not to make assumptions based on this kind of data — it can't tell you what people were intending to do. If a task is taking people too long, it may be because it's too complicated ... or because a few people made themselves coffee between clicks. So supplement these metrics with other research that does tell you about intentions.

4. Where are they clicking first?

A good user experience is one that gets out of bed on the right side. First clicks matter for a good user experience.

A 2009 study showed that in task-based user tests, people who got their first click right were around twice as likely to complete the task successfully as those who got their first click wrong. This year, researchers at Optimal Workshop followed this up by analyzing data from millions of completed Treejack tasks, and found that people who got their first click right were around three times as likely to complete the task successfully.

That's data worth paying attention to.

So, how to measure? You can use software that records first clicks on your pages, but it's difficult to gauge a visitor's intention without asking them outright, so I'd say task-based user tests are your best bet.

For in-person research sessions, make gathering first-click data a priority, and come up with a consistent way to measure it (a column on a spreadsheet, for example). For remote research, check out Chalkmark (a tool devoted exclusively to gathering quantitative, first-click data on screenshots and wireframes of your designs) and UserTesting.com (for videos of people completing tasks on your live website).
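If you're logging first clicks and task outcomes in your own sessions (that spreadsheet column, say), a few lines of Python will give you the same kind of ratio the studies above report. The session data here is made up:

```python
# Each session: was the first click on the right path, and did the task succeed?
sessions = [
    {"first_click_right": True,  "task_success": True},
    {"first_click_right": True,  "task_success": True},
    {"first_click_right": True,  "task_success": False},
    {"first_click_right": False, "task_success": True},
    {"first_click_right": False, "task_success": False},
    {"first_click_right": False, "task_success": False},
]

def success_given(first_click_right):
    """Task success rate among sessions with the given first-click outcome."""
    group = [s for s in sessions if s["first_click_right"] == first_click_right]
    return sum(s["task_success"] for s in group) / len(group)

right, wrong = success_given(True), success_given(False)
print(f"right first click: {right:.0%}, wrong: {wrong:.0%}, "
      f"ratio: {right / wrong:.1f}x")  # 67%, 33%, 2.0x
```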

5. Resources to help you with the number crunching

Here's a great piece on uxmastery.com about calculating the ROI of UX.

Here's Jakob Nielsen in 1999 with a simple 'Assumptions for Productivity Calculation', and here's an overview of what's in the 4th edition of NN/G's Return on Investment for Usability report (worth the money for sure).

Here's a calculator from Write Limited on measuring the cost of unclear communication within organizations (which could quite easily be applied to UX).

And here's a unique take on what numbers to crunch from Harvard Business Review.

I hope you find this a helpful starting point, Brian, and please do have a think about what I said about defining ROI. I'm curious to know how everyone else defines and measures ROI — let me know!
