February 14, 2016
4 min

Around the world in 80 burgers—when First-click testing met McDonald’s

Optimal Workshop
It requires a certain kind of mind to see beauty in a hamburger bun—Ray Kroc

Maccas. Mickey D’s. The golden arches. Whatever you call it, you know I’m talking about none other than fast-food giant McDonald’s. A survey of 7,000 people across six countries, conducted 20 years ago by Sponsorship Research International, found that more people recognized the golden arches symbol (88%) than the Christian cross (54%). With more than 35,000 restaurants in 118 countries and territories around the world, McDonald’s has come a long way since multi-mixer salesman Ray Kroc happened upon a small fast-food restaurant in 1954.

For an organization of this size and reach, consistency and strong branding are certainly key ingredients in its marketing mix. McDonald’s restaurants all over the world are easily recognised, and while the menu does differ slightly between countries, users know what kind of experience to expect. With this in mind, I wondered: is the same true for McDonald’s web presence? How successful is a large organization like McDonald’s at delivering a consistent online user experience tailored to suit diverse audiences worldwide without losing its core meaning? I decided to investigate and gave McDonald’s a good grilling by testing the home pages of ten of its country-specific websites in one Chalkmark study.

Preparation time 🥒

First-click testing reveals the first impressions your users have of your designs. This information is useful in determining whether users are heading down the right path when they first arrive at your site. When considering the best way to measure and compare ten of McDonald’s websites from around the world, I chose first-click testing because I wanted to test the visual designs of each website, and I wanted to do it all in one research study. My first job in the setup process was to decide which McDonald’s websites would make the cut.

My approach was to divide the planet up by continent, with the added requirement that the selected sites be available in my native language (English) so I could interpret the results. I chose: Australia, Canada, Fiji, India, Malaysia, New Zealand, Singapore, South Africa, the UK, and the US. The next task was to figure out how to test this. Ten tasks is ideal for a Chalkmark study, so I made it one task per website; however, determining what those tasks would be was tricky. Serving up the same task for all ten ran the risk of participants tiring of the repetition, but a level of consistency was necessary in order to compare the sites. I decided that all tasks would be different, but tied together with one common theme: burgers.

After all, you don’t win friends with salad.

Launching and sourcing participants 👱🏻👩👩🏻👧👧🏾

When sourcing participants for my research, I often hand the recruitment responsibilities over to Optimal Workshop because it’s super quick and easy; however, this time I decided to do something a bit different. Because McDonald’s is such a large and well-known organization visited by hundreds of millions of people every year, I decided to recruit entirely via Twitter by simply tweeting the link out. Am I three fries short of a happy meal for thinking this would work? Apparently not. In just under a week I had the 30+ completed responses needed to peel back the wrapper on McDonald’s.

Imagine what could have happened if it had been McDonald’s tweeting that out to the burger-loving masses? Ideally, when recruiting for a first-click testing study, the more participants you can get, the more confident you can be in your results, but aiming for 30–50 completed responses will still provide viable results. Conducting user research doesn’t have to be expensive; you can achieve quality results that cut the mustard for free. It’s a great way to connect with your customers, and you could easily reward participants with, say, a burger voucher by redirecting them somewhere after they complete the activity—ooh, there’s an idea!
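If you’re wondering what 30–50 responses actually buys you in terms of certainty, a quick confidence interval check can help. Here’s a minimal sketch in Python using the Wilson score interval, which behaves well at small sample sizes (the participant numbers below are hypothetical, not from this study):

```python
# A minimal sketch (hypothetical numbers): how confident can you be in a
# first-click success rate from a small sample? The Wilson score interval
# behaves sensibly at n = 30-50, where a naive +/- margin can mislead.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Example: 28 of 32 participants clicked in the correct area.
low, high = wilson_interval(28, 32)
print(f"Success rate: {28/32:.0%}, 95% CI: {low:.0%} to {high:.0%}")
```

Even at a healthy-looking success rate, the interval at this sample size stays fairly wide, which is exactly why more participants never hurts.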

Reading the results menu 🍽️

Interpreting the results from a Chalkmark study is quick and easy.

Analysis tabs in the Chalkmark dashboard

Everything you need is presented in a series of tabs under ‘Analysis’ in the results section of the dashboard:

  • Participants: this tab lets you review details about every participant who started your Chalkmark study, and also contains handy filtering options for including, excluding, and segmenting participants.
  • Questionnaire: if you included any pre- or post-study questionnaires, you will find the results here.
  • Task Results: this tab provides a detailed statistical overview of each task in your study, based on the correct areas you defined during setup. This structures your results and speeds up your analysis because everything you need to know about each task is contained in one diagram. Chalkmark also lets you edit and define the correct areas retrospectively, so if you forget or make a mistake you can always fix it.

Example of a task results chart showing the defined correct areas (Task 6)
  • Clickmaps: under this tab you will find three types of visual clickmaps for each task, showing exactly where your participants clicked: heatmap, grid, and selection. Heatmaps show the hotspots where participants clicked and can be switched to a greyscale view for greater readability. Grid maps show a larger block of colour over the sections that were clicked, with the option to display individual clicks. The selection map simply shows the individual clicks, represented by black dots.

The heatmap for Task 1 in this study shown in greyscale for improved readability
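If you’re curious what’s happening under the hood of a heatmap, the core idea is simple: bin the raw click coordinates into grid cells and count. A rough sketch (the coordinates below are invented; Chalkmark does all of this for you):

```python
# A rough sketch of how a click heatmap can be built: bin raw (x, y) click
# coordinates into a grid and count clicks per cell. The coordinates are
# made up; this just illustrates the underlying idea.
from collections import Counter

clicks = [(120, 80), (125, 84), (130, 79), (560, 300), (558, 296)]  # hypothetical
CELL = 50  # grid cell size in pixels

grid = Counter((x // CELL, y // CELL) for x, y in clicks)
for (cx, cy), count in grid.most_common():
    print(f"cell ({cx}, {cy}): {count} click(s)")
```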

What the deep fryer gave me 🍟🎁

McDonald’s tested ridiculously well right across the board in the Chalkmark study. Country by country in alphabetical order, here’s what I discovered:

  • Australia: 91% of participants successfully identified where to go to view the different types of chicken burgers
  • Canada: all participants in this study correctly identified the first click needed to locate the nutritional information of a cheeseburger
  • Fiji: 63% of participants were able to correctly locate information on where McDonald’s sources their beef
  • India (West and South India site): Were this the real thing, 88% of participants in this study would have been able to order food for home delivery from the very first click, including the 16% who understood that the menu item ‘Convenience’ connected them to this service
  • Malaysia: 94% of participants were able to find out how many beef patties are on a Big Mac
  • New Zealand: 91% of participants in this study were able to locate information on the Almighty Angus™ burger from the first click
  • Singapore: 66% of participants were able to correctly identify the first click needed to locate the reduced-calorie dinner menu
  • South Africa: 94% of participants had no trouble locating the first click that would enable them to learn how burgers are prepared
  • UK: 63% of participants in this study correctly identified the first click for locating the Saver Menu
  • US: 75% of participants were able to find out if burger buns contain the same chemicals used to make yoga mats based on where their first clicks landed

The heatmap for the US task
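To compare the ten tasks at a glance, you could re-key the success rates above and rank them, flagging anything that falls under a threshold of your choosing (the 70% bar below is arbitrary, just a way of marking tasks worth a closer look):

```python
# The per-country success rates from the results above, re-keyed so we can
# rank them and flag low performers. The 70% threshold is arbitrary.
success = {
    "Australia": 0.91, "Canada": 1.00, "Fiji": 0.63, "India": 0.88,
    "Malaysia": 0.94, "New Zealand": 0.91, "Singapore": 0.66,
    "South Africa": 0.94, "UK": 0.63, "US": 0.75,
}
THRESHOLD = 0.70  # an arbitrary bar for "worth a closer look"

for country, rate in sorted(success.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <- review" if rate < THRESHOLD else ""
    print(f"{country:<12} {rate:.0%}{flag}")
```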

Three reasons why McDonald’s nailed it 🍔 🚀

This study clearly shows that McDonald’s are kicking serious goals in the online stakes, but before we call it quits and go home, let’s look at why that may be the case. Approaching this the way any UXer worth their salt on their fries would, I stuck all the screens together on a wall, broke out the Sharpies and the Tesla Amazing magnetic notes (the best invention since Post-it notes), and embarked on a hunt for patterns and similarities—and wow, did I find them!

The worldwide wall of McDonald’s

Navigation pattern use

Across the ten websites, I observed just two distinct navigation patterns: navigation menus at the top and to the left. The sites with a top navigation menu could also be broken down into two further groups: those with three labels (Australia, New Zealand, and Singapore) and those with more than three labels (Fiji, India, Malaysia, and South Africa). Australia and New Zealand shared the exact same labelling of ‘eat’, ‘learn’, and ‘play’ (despite being distinct countries), whereas the others had their own unique labels but with some subject matter crossover; for example, ‘People’ versus ‘Our People’.

McDonald’s New Zealand website with its three-label top navigation bar.

Canada, the UK, and the US all had the same look and feel with their left side navigation bar, but each with different labels. All three still had navigation elements at the top of the page, but the main content that the other seven countries had in their top navigation bars was located in that left sidebar.

Left to right: Canada, the UK, and the US all have left side navigation bars but with their own unique labelling.

These patterns ensure that each site is tailored to its unique audience while still maintaining some consistency so that it’s clear they belong to the same entity.

Logo lovin’ it

If there’s one aspect that screams McDonald’s, it’s the iconic golden arches in the logo. Across the ten sites, the logo does vary slightly in size, color, and composition, but it’s always in the same place and the golden arches are always there. Logo consistency is a no-brainer, and in this case McDonald’s clearly recognizes the strengths of its logo and understands which pieces it can add or remove without losing its identity.

McDonald’s logos from left to right: Australia, Canada, Fiji, India (West and South India site), Malaysia, New Zealand, Singapore, South Africa, the UK, and the US as they appeared on the websites at the time of testing. How many different shades of red can you spot?

Subtle consistencies in the page layouts

Navigation and logo placement weren’t the only connections to be drawn from my wall of McDonald’s. There were also some very interesting but subtle similarities in the page layouts. The middle of the page is always used for images and advertising content, including videos and animated GIFs. The US version featured a particularly memorable advertisement for its all-day breakfast menu, complete with animated maple syrup slowly drizzling its way over a stack of hotcakes.

The McDonald’s US website and its animated maple syrup.

The bottom of the page is consistently used on most sites to house more advertising content in the form of tiles. The sites without the tiles left this space blank.

Familiarity breeds … usability?

Looking at these results, it’s quite clear that the same level of consistency and recognition between McDonald’s restaurants is also present between the different country websites. This made me wonder: what role does familiarity play in determining usability? Investigating, I found a few interesting articles on the subject. This article by Colleen Roller on UXmatters discusses the connection between cognitive fluency and familiarity, and the impact this has on decision-making. Colleen writes:

“Because familiarity enables easy mental processing, it feels fluent. So people often equate the feeling of fluency with familiarity. That is, people often infer familiarity when a stimulus feels easy to process.”

If we’re familiar with an item, we don’t have to think too hard about it, and this reduction in performance load can make it feel easier to use. I also found a fascinating read on Smashing Magazine by Charles Hannon that explores why Apple were able to claim ‘You already know how to use it’ when launching the iPad. It’s well worth a look!

Oh, and about those yoga mats … the answer is yes.


Related articles

3 ways you can combine OptimalSort and Chalkmark in your design process

As UX professionals we know the value of card sorting when building an IA or making sense of our content and we know that first clicks and first impressions of our designs matter. Tools like OptimalSort and Chalkmark are two of our wonderful design partners in crime, but did you also know that they work really well with each other? They have a lot in common and they also complement each other through their different strengths and abilities. Here are 3 ways that you can make the most of this wonderful team up in your design process.

1. Test the viability of your concepts and find out which one your users prefer most

Imagine you’re at a point in your design process where you’ve done some research and you’ve fed all those juicy insights into your design process and have come up with a bunch of initial visual design concepts that you’d love to test.

You might approach this by following a simple three-step process:

  • Test the viability of your concepts in Chalkmark before investing in interaction design work
  • Iterate your design based on your findings in Step 1
  • Finish by running a preference test with a closed, image-based card sort in OptimalSort to find out which of your concepts is most preferred by your users

There are two ways you could run this approach: remotely or in person. The remote option is great for when you’re short on time and budget, or when your users are all over the world or otherwise challenging to reach quickly and cheaply. If you’re running it remotely, you would start by popping images of your concepts, at whatever state of fidelity they’ve reached, into Chalkmark and coming up with some scenario-based tasks for your participants to complete against those flat designs. Chalkmark is super nifty in the way it gets people to simply click on an image to indicate where they would start when completing a task. That image can be a rough sketch or a screenshot of a high-fidelity prototype or live product — it could be anything! Chalkmark studies are quick and painless for participants and great for designers, because the results will show whether your design is setting your users up for success from the word go. Just choose the most common tasks a user would need to complete on your website or app and send it out.

Next, you would review your Chalkmark results and make any changes or iterations to your designs based on your findings. Choose a maximum of three designs to move forward with for the last part of this study. The point is to narrow your options down and figure out, through research, which design concept you should focus on. Create images of your three chosen designs and build a closed card sort in OptimalSort with image-based cards by selecting the ‘Add card images’ checkbox in the tool (see below).


Turn your cards into image based cards in OptimalSort by selecting the ‘Add card images’ checkbox on the right hand side of the screen.


The reason why you want a closed card sort is because that’s how your participants will indicate their preference for or against each concept to you. When creating the study in OptimalSort, name your categories something along the lines of ‘Most preferred’, ‘Least preferred’ and ‘Neutral’. Totally up to you what you call them — if you’re able to, I’d encourage you to have some fun with it and make your study as engaging as possible for your participants!

Naming your card categories for preference testing with an image based closed card sort study in OptimalSort

Limit the number of cards that can be sorted into each category to one, and uncheck the box labelled ‘Randomize category order’ so that you know exactly how the categories appear to participants. It’s best if the negative option doesn’t appear first, because you’re mostly trying to figure out what people do prefer, and switching the randomization off is the only way to control this. You could put the neutral option at the end or in the middle to balance things out — totally up to you.

It’s also really important to include a post-study questionnaire to dig into why participants made the choices they did. It’s one thing to know what people do and don’t prefer, but capturing the reasoning behind their thinking is just as valuable. It could be something as simple as “Why did you choose that particular option as your most preferred?” and, given how important this context is, I would set that question to ‘required’. You may still end up with not-so-helpful responses like ‘Because I like the colors’, but it’s still better than nothing — especially if your users are on the other side of the world or you’re being squeezed by some other constraint! Remember that studies like these contribute to the large amount of research that goes on throughout a project and are not the only piece of research you’ll be running. You’re not pinning all your design’s hopes and dreams on this one study! You’re just trying to quickly find out what people prefer at this point in time, and as your process continues, your design will evolve and grow.

You might also ask the same context gathering question for the least preferred option and consider also including an optional question that allows them to share any other thoughts they might have on the activity they just completed — you never know what you might uncover!
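When the results come in, tallying the preferences is straightforward. Here’s a hypothetical sketch of counting ‘Most preferred’ and ‘Least preferred’ votes from a closed card sort: the rows, concept names, and category labels below are invented, so adapt them to your actual OptimalSort export.

```python
# A hypothetical tally of a preference test run as a closed card sort:
# each row is (participant, concept, category). All data here is invented.
from collections import Counter

rows = [
    ("p1", "Concept A", "Most preferred"), ("p1", "Concept B", "Least preferred"),
    ("p2", "Concept A", "Most preferred"), ("p2", "Concept C", "Neutral"),
    ("p3", "Concept B", "Most preferred"), ("p3", "Concept A", "Neutral"),
]

most = Counter(concept for _, concept, cat in rows if cat == "Most preferred")
least = Counter(concept for _, concept, cat in rows if cat == "Least preferred")

for concept in sorted({c for _, c, _ in rows}):
    print(f"{concept}: most={most[concept]}, least={least[concept]}")
```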

If you were running this in person, you could use it as the basis for a moderated codesign session. You would start your session by running the Chalkmark study to gauge first impressions and find out where those first clicks are landing, while also having a conversation about what your participants are thinking and feeling as they complete those tasks with your concepts. Next, you could work with your participants to iterate and refine your concepts together. You could do it digitally or just draw them out on paper — it doesn’t have to be perfect! Lastly, you could complete your codesign session by running that closed card sort preference test as a moderated study using barcodes printed from OptimalSort (found under the ‘Cards’ tab during the build process), giving you the best of both worlds — conversations with your participants plus analysis made easy! The moderated approach will also allow you to dig deeper into the reasoning behind their preferences.

2. Test your IA through two different lenses: non visual and visual

Your information architecture (IA) is the skeleton structure of your website or app, and it can be really valuable to evaluate it from two different angles: non-visual and visual. The non-visual elements of an IA are language, content, categories, and labelling. These provide a clear and clean starting point: there are no visual distractions, and getting that content right is rightly a high priority. The visual elements come along later, build upon that picture, and help provide context and bring your design to life. It’s a good idea to test your IA through both lenses throughout your design process to ensure that nothing gets lost or muddied as your design evolves and grows.

Let’s say you’ve already run an open card sort to find out how your users expect your content to be organised and you’ve created your draft IA. You may have also tested and iterated that IA in reverse through a tree test in Treejack and are now starting to sketch up some concepts for the beginnings of the interaction design stages of your work.

At this point in the process, you might run a closed card sort with OptimalSort on your growing IA to ensure that those top level category labels are aligning to user expectations while also running a Chalkmark study on your early visual designs to see how the results from both approaches compare.

When building your closed card sort study, you would set your predetermined categories to match your IA’s top level labels and would then have your participants sort the content that lies beneath into those groups. For your Chalkmark study, think about the most common tasks your users will need to complete using your website or app when it eventually gets released out into the world and base your testing tasks around those. Keep it simple and don’t stress if you think this may change in the future — just go with what you know today.

Once you’ve completed your studies, have a look at your results and ask yourself questions like: Are both your non-visual and visual IA lenses telling the same story? Is the extra context of visual elements supporting your IA or is it distracting and/or unhelpful? Are people sorting your content into the same places that they’re going looking for it during first-click testing? Are they on the same page as you when it’s just words on an actual page but are getting lost in the visual design by not correctly identifying their first click? Has your Chalkmark study unearthed any issues with your IA? Have a look at the Results matrix and the Popular placements matrix in OptimalSort and see how they stack up against your clickmaps in Chalkmark.

Clickmaps in Chalkmark and closed card sorting results in OptimalSort — are these two saying the same thing?
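One simple way to compare the two lenses is to line up, for each content item, the category it was most often sorted into against the label that most often attracted its first clicks. A hypothetical sketch (all item and label names below are invented for illustration):

```python
# A hypothetical comparison of the two lenses: for each content item, the
# category participants most often sorted it into (closed card sort) versus
# the navigation label they most often clicked first (Chalkmark).
sort_winner = {"Refund policy": "Help", "Gift cards": "Shop", "Track order": "Help"}
click_winner = {"Refund policy": "Help", "Gift cards": "Shop", "Track order": "Account"}

for item in sort_winner:
    clicked = click_winner.get(item, "?")
    status = "match" if sort_winner[item] == clicked else "MISMATCH"
    print(f"{item:<14} sort -> {sort_winner[item]:<8} click -> {clicked:<8} {status}")
```

A mismatch like ‘Track order’ above is exactly the kind of thing worth digging into: people sort it one way with words alone, but the visual design pulls their clicks somewhere else.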

3. Find out if your labels and their matching icons make sense to users

A great way to find out if your top-level labels and their matching icons are communicating coherently and consistently is to test them using both OptimalSort and Chalkmark. Icons aren’t much help if they don’t make sense to your users — especially in cases where label names drop off and your website or app homepage relies solely on the image to communicate what content lives below each one (e.g., sticky menus, mobile sites and more).

This approach could be useful when you’re at a point in your design process where you have already defined your IA and are now moving into bringing it to life through interaction design. To do this, you might start by running a closed card sort in OptimalSort as a final check to see if the top level labels that you intend to make icons for are making sense to users. When building the study in OptimalSort, do exactly what we talked about earlier in our non-visual vs visual lens study and set your predetermined categories in the tool to match your level 1 labels. Ask your participants to sort the content that lies beneath into those groups — it’s the next part that’s different for this approach.

Once you’ve reviewed your findings and are confident your labels are resonating with people, you can then develop their accompanying icons for concept testing. You might pop these icons into some wireframes or a prototype of your current design to provide context for your participants or you might just test the icons on their own as they would appear on your future design (e.g., in a row, as a block or something else!) but without any of the other page elements. It’s totally up to you and depends entirely upon what stage you’re at in your project and the thing you’re actually designing — there might be cases where you want to zero in on just the icons and maybe the website header e.g., a sticky menu that sits above a long scrolling, dynamic social feed. In an example taken from a study we recently ran on Airbnb and TripAdvisor’s mobile apps, you might use the below screen on the left but without the icon labels or you might use the screen on the right that shows the smaller sticky menu version of it that appears on scroll.


Screenshots taken from TripAdvisor’s mobile app in 2019 showing the different ways icons present.


The main thing here is to test the icons without their accompanying text labels to see if they align with user expectations. Choose the visual presentation approach that you think is best but lose the labels!

When crafting your Chalkmark tasks, it’s also a good idea to avoid using the label language in the task itself. Even though the labels aren’t appearing in the study, using that language still has the potential to lead your participants. Treat it the same way you would a Treejack task — explain what participants have to do without giving the game away, e.g., instead of the word ‘flights’, try ‘airfares’ or ‘plane tickets’.

Choose one scenario based task question for each level 1 label that has an icon and consider including post study questions to gather further context from your participants — e.g., did they have any comments about the activity they completed? Was anything confusing or unclear and if so, what and why?

Once you’ve completed your Chalkmark study and have analysed the results, have a look at how well your icons tested. Did your participants get it right? If not, where did they go instead? Are any of your icons really similar to each other and is it possible this similarity may have led people down the wrong path?
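A handy way to spot icons that are being mixed up with each other is a small confusion matrix of the icon each task targeted versus the icon participants actually clicked. A hypothetical sketch (icon names and observations are invented):

```python
# A hypothetical confusion matrix for icon-only first-click tasks: for each
# task's target icon, count which icons participants actually clicked.
from collections import defaultdict

# (target_icon, clicked_icon) pairs, one per participant per task (invented)
observations = [
    ("search", "search"), ("search", "filter"), ("filter", "filter"),
    ("filter", "search"), ("profile", "profile"), ("profile", "profile"),
]

matrix: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for target, clicked in observations:
    matrix[target][clicked] += 1

for target, clicks in matrix.items():
    total = sum(clicks.values())
    hit = clicks.get(target, 0)
    print(f"target '{target}': {hit}/{total} correct, spread: {dict(clicks)}")
```

If two icons keep swapping clicks with each other, that similarity is a strong hint the pair needs a visual rethink.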

Alternatively, if you’ve already done extensive work on your IA and are feeling pretty confident in it, you might instead test your icons by running an image card sort in OptimalSort. You could use an open card sort and limit the cards per category to just one — effectively asking participants to name each card rather than a group of cards. An open card sort will allow you to learn more about the language they use while also uncovering what they associate with each one without leading them. You’d need to tweak the default instructions slightly to make this work but it’s super easy to do! You might try something like:

Part 1:

Step 1

  • Take a quick look at the images to the left.
  • We'd like you to tell us what you associate with each image.
  • There is no right or wrong answer.

Step 2

  • Drag an image from the left into this area to give it a name.

Part 2:

Step 3

  • Click the title to give the image a name that you feel best describes what you associate that image with.

Step 4

  • Repeat step 3 for all the images by dropping them in unused spaces.
  • When you're done, click "Finished" at the top right. Have fun!

Test out your new instructions in preview mode on a colleague from outside of your design team just to be sure it makes sense!
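Once the responses are in, you’ll want to collapse near-duplicate names like ‘Flights’ and ‘flights ’ before counting. A hypothetical sketch of a simple normalize-and-count pass (the icon names and responses below are invented):

```python
# A hypothetical summary of the names participants gave each icon in the
# one-card-per-group open sort described above: lowercase, strip, and count,
# so near-duplicates collapse together. All data here is invented.
from collections import Counter

names_given = {  # icon -> raw names typed by participants
    "plane_icon": ["Flights", "flights ", "plane tickets", "Airfares", "flights"],
    "bed_icon": ["Hotels", "hotels", "Places to stay", "hotels "],
}

for icon, names in names_given.items():
    counts = Counter(n.strip().lower() for n in names)
    top, freq = counts.most_common(1)[0]
    print(f"{icon}: top name '{top}' ({freq}/{len(names)}), all: {dict(counts)}")
```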

So there are three ideas for ways you might use OptimalSort and Chalkmark together in your design process. Optimal Workshop’s suite of tools is flexible, scalable, and works really well together — the possibilities are huge!

Further reading

Which comes first: card sorting or tree testing?
“Dear Optimal Workshop, I want to test the structure of a university website (well, certain sections anyway). My gut instinct is that it’s pretty ‘broken’. Lots of sections feel like they’re in the wrong place. I want to test my hypotheses before proposing a new structure. I’m definitely going to do some card sorting, and was planning a mixture of online and offline. My question is about when to bring in tree testing. Should I do this first to test the existing IA? Or is card sorting sufficient? I do intend to tree test my new proposed IA in order to validate it, but is it worth doing it upfront too?” — Matt

Dear Matt,

Ah, the classic chicken or the egg scenario: Which should come first — tree testing or card sorting?

It’s a question that many researchers ask themselves, but I’m here to help clear the air! You should always use both methods when changing up your information architecture (IA) in order to capture the most information.

Tree testing and card sorting, when used together, can give you fantastic insight into the way your users interact with your site. First of all, I’ll run through some of the benefits of each testing method.

What is card sorting and why should I use it?

Card sorting is a great method to gauge the way in which your users organize the content on your site. It helps you figure out which things go together and which things don’t. There are two main types of card sorting: open and closed.

Closed card sorting involves providing participants with pre-defined categories into which they sort their cards. For example, you might be reorganizing the categories for your online clothing store for women. Your cards would have the names of all your products (e.g., “socks”, “skirts” and “singlets”), and you would also provide the categories (e.g., “outerwear”, “tops” and “bottoms”).

Open card sorting involves providing participants with cards and leaving them to organize the content in a way that makes sense to them. It’s the opposite of closed card sorting, in that participants dictate the categories themselves and also label them. This means you’d provide them with the cards only — no categories.

Card sorting, whether open or closed, is very user focused. It involves a lot of thought, input, and evaluation from each participant, helping you to form the structure of your new IA.
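If you want a feel for how open card sort results are often summarized, one common building block is a co-occurrence count: how many participants placed each pair of cards in the same group. A minimal sketch with invented sorts (card names echo the clothing-store example above):

```python
# A minimal sketch of a co-occurrence count from an open card sort: for each
# participant, every pair of cards that shares a group gets a tally.
# The participant sorts below are invented for illustration.
from itertools import combinations
from collections import Counter

participant_sorts = [
    [{"socks", "tights"}, {"skirts", "dresses"}],   # participant 1's groups
    [{"socks", "tights", "skirts"}, {"dresses"}],   # participant 2's groups
]

pair_counts: Counter = Counter()
for groups in participant_sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} participant(s)")
```

Pairs that almost every participant groups together are strong candidates to live under the same category in your new IA.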

What is tree testing and why should I use it?

Tree testing is a fantastic way to determine how your users are navigating your site and how they’re finding information. Your site is organised into a tree structure, sorted into topics and subtopics, and participants are provided with some tasks that they need to perform. The results will show you how your participants performed those tasks, if they were successful or unsuccessful, and which route they took to complete the tasks. This data is extremely useful for creating a new and improved IA.

Tree testing is an activity that requires participants to seek information, which is quite the contrast to card sorting — an activity that requires participants to sort and organize information. Each activity requires users to behave in different ways, so each method will give its own valuable results.

Should you run a card or tree test first?

In this scenario, I’d recommend running a tree test first in order to find out how your existing IA currently performs. You said your gut instinct is telling you that your existing IA is pretty “broken”, but it’s good to have the data that proves this and shows you where your users get lost.

An initial tree test will give you a benchmark to work with — after all, how will you know your shiny, new IA is performing better if you don’t have any stats to compare it with? Your results from your first tree test will also show you which parts of your current IA are the biggest pain points and from there you can work on fixing them. Make sure you keep these tasks on hand — you’ll need them later!

Once your initial tree test is done, you can start your card sort, based on the results from your tree test. Here, I recommend conducting an open card sort so you can understand how your users organize the content in a way that makes sense to them. This will also show you the language your participants use to name categories, which will help you when you’re creating your new IA.

Finally, once your card sort is done you can conduct another tree test on your new, proposed IA. By using the same (or very similar) tasks from your initial tree test, you will be able to see that any changes in the results can be directly attributed to your new and improved IA.

Once your test has concluded, you can compare this data with the performance of your original information architecture in the first tree test — hopefully the new IA does much better!
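If you want to go a step beyond eyeballing the before-and-after numbers, a two-proportion z-test is a quick sanity check on whether a task’s success rate genuinely improved. A minimal sketch with invented counts:

```python
# A hypothetical before/after check: did task success improve significantly
# between the baseline tree test and the one on the new IA? The counts
# below are invented for illustration.
import math

def two_proportion_z(s1: int, n1: int, s2: int, n2: int) -> float:
    """z statistic for H0: the two success rates are equal."""
    p_pool = (s1 + s2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return ((s2 / n2) - (s1 / n1)) / se

# Baseline: 18/40 succeeded on a task; new IA: 31/40 succeeded.
z = two_proportion_z(18, 40, 31, 40)
print(f"z = {z:.2f}  (|z| > 1.96 suggests a real improvement at ~95% confidence)")
```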
