There’s no doubt that usability is a key element of all great user experiences, but how do we apply and test usability principles on a website? This article looks at usability principles in web design, how to test them, practical tips for success, and our remote testing tool, Treejack.
A definition of usability for websites 🧐📖
Web usability is defined as the extent to which a user can complete a specific task or achieve a goal on a website. It refers to the quality of the user experience and can be broken down into five key usability principles:
Ease of use: How easy is the website to use? How easily are users able to complete their goals and tasks? How much effort is required from the user?
Learnability: How easily are users able to complete their goals and tasks the first time they use the website?
Efficiency: How quickly can users perform tasks while using your website?
User satisfaction: How satisfied are users with the experience the website provides? Is the experience a pleasant one?
Impact of errors: Are users making errors when using the website and, if so, how serious are the consequences of those errors? Is the design forgiving enough to make errors easy to correct?
Why is web usability important? 👀
Aside from the obvious desire to improve the experience for the people who use our websites, web usability is crucial to your website’s survival. If your website is difficult to use, people will simply go somewhere else. In the cases where users do not have the option to go somewhere else, for example government services, poor web usability can lead to serious issues. How do we know if our website is well-designed? We test it with users.
Testing usability: What are the common methods? 🖊️📖✏️📚
There are many ways to evaluate web usability and here are the common methods:
Moderated usability testing: Moderated usability testing refers to testing that is conducted in-person with a participant. You might do this in a specialised usability testing lab or perhaps in the user’s contextual environment such as their home or place of business. This method allows you to test just about anything from a low fidelity paper prototype all the way up to an interactive high fidelity prototype that closely resembles the end product.
Moderated remote usability testing: Moderated remote usability testing is very similar to the previous method but with one key difference: the facilitator and the participant(s) are not in the same location. The session is still a moderated two-way conversation, just held over Skype or a webinar platform instead of in person. This method is particularly useful if you are short on time or unable to travel to where your users are located, e.g. overseas.
Unmoderated remote usability testing: As the name suggests, unmoderated remote usability testing is conducted without a facilitator present. This is usually done online and provides the flexibility for your participants to complete the activity at a time that suits them. There are several remote testing tools available (including our suite of tools), and once a study is launched these tools take care of the rest, collating the results for you and surfacing key findings using powerful visual aids.
Guerilla testing: Guerilla testing is a powerful, quick and low-cost way of obtaining user feedback on the usability of your website. Usually conducted in public spaces with large amounts of foot traffic, guerilla testing gets its name from its ‘in the wild’ nature. It is a scaled-back usability testing method that usually only involves a few minutes for each test, but it allows you to reach large numbers of people and has very few costs associated with it.
Heuristic evaluation: A heuristic evaluation is conducted by usability experts to assess a website against recognized usability standards and rules of thumb (heuristics). This method evaluates usability without involving the user and works best when done in conjunction with other usability testing methods, e.g. moderated usability testing, to ensure the voice of the user is heard during the design process.
Tree testing: Also known as a reverse card sort, tree testing is used to evaluate the findability of information on a website. This method allows you to work backwards through your information architecture and test that thinking against real world scenarios with users.
First click testing: Research has found that 87% of users who start out on the right path from the very first click will be able to successfully complete their task, while less than half (46%) of those who start down the wrong path will succeed. First click testing is used to evaluate how well a website is supporting users and also provides insights into design elements that are being noticed and those that are being ignored.
Hallway testing: Hallway testing is a usability testing method used to gain insights from anyone nearby who is unfamiliar with your project. These might be your friends, family or the people who work in another department down the hall from you. Similar to guerilla testing but less ‘wild’. This method works best at picking up issues early in the design process before moving on to testing a more refined product with your intended audience.
Online usability testing tool: Treejack 🌲🌳🌿
Treejack is a remote usability testing tool that uses tree testing to help you discover exactly where your users are getting lost in the structure of your website. Treejack uses a simplified, text-based version of your website structure, removing distractions such as navigation and visual design and allowing you to test the design at its most basic level.
Like any other tree test, it uses task-based scenarios and includes the opportunity to ask participants pre- and post-study questions that can be used to gain further insights. Treejack is a useful tool for testing those five key usability principles mentioned earlier, with powerful inbuilt features that do most of the heavy lifting for you. Treejack records and presents the following for each task (a small illustrative sketch follows the list):
complete details of the pathways followed by each participant
the time taken to complete each task
first click data
the directness of each result
visibility on when and where participants skipped a task
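If it helps to picture the data, a single per-task result can be thought of as a small record like the sketch below. This is purely illustrative: the field names are hypothetical and are not Treejack’s actual export format.

```python
# Hypothetical shape of one participant's result for a single task.
# Field names are illustrative only, not Treejack's export schema.
task_result = {
    "participant": "P07",
    "path": ["Home", "Services", "Assembly instructions"],  # complete pathway followed
    "first_click": "Services",        # the first node clicked after the starting point
    "time_taken_seconds": 41,         # time taken to complete the task
    "direct": True,                   # directness: no backtracking along the way
    "skipped": False,                 # when/where a skip happened would also be recorded
}
```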
Participant paths data in our tree testing tool 🛣️
The level of detail recorded on the pathways followed by your participants makes it easy for you to determine the ease of use, learnability, efficiency and impact of errors of your website. The time taken to complete each task and the directness of each result also provide insights in relation to those four principles, and user satisfaction can be measured through the responses to your pre- and post-study survey questions.
The first click data brings in the added benefits of first click testing, and knowing when and where your participants gave up and moved on can help you identify any issues. Another thing Treejack does well is the way it brings all the data for each task together into one comprehensive overview that tells you everything you need to know at a glance.
Treejack’s task overview: all the key information in one place.
In addition to this, Treejack also generates comprehensive pathway maps called pietrees.
Each junction in the pathway is a pie chart showing a statistical breakdown of participant activity at that point in the site structure, including how many were on the right track, how many were following the incorrect path and how many turned around and went back. These beautiful diagrams tell the story of your usability testing and are useful for communicating the results to your stakeholders.
Test early and often: Usability testing isn’t something that only happens at the end of the project. Start your testing as soon as possible and iterate your design based on findings. There are so many different ways to test an idea with users and you have the flexibility to scale it back to suit your needs.
Try testing with paper prototypes: Just like there are many usability testing methods, there are also several ways to present your designs to your participant during testing. Fully functioning high fidelity prototypes are amazing but they’re not always feasible (especially if you followed the previous tip to test early and often). Paper prototypes work well for usability testing because your participant can draw on them and add their own ideas, and they’re also more likely to feel comfortable providing feedback on work that is less resolved! You could also use paper prototypes to form the basis for collaborative design sessions with your users by showing them your idea and asking them to redesign or design the next page/screen.
Run a benchmarking round of testing: Test the current state of the design to understand how your users feel about it. This is especially useful if you are planning to redesign an existing product or service and will save you time in the problem identification stages.
Bring stakeholders and clients into the testing process: Hearing how a product or service is performing direct from a user can be quite a powerful experience for a stakeholder or client. If you are running your usability testing in a lab with an observation room, invite them to attend as observers and also include them in your post session debriefs. They’ll gain feedback straight from the source and you’ll gain an extra pair of eyes and ears in the observation room. If you’re not using a lab or doing a different type of testing, try to find ways to include them as observers in some way. Also, don’t forget to remind them that as observers they will need to stay silent for the entire session beyond introducing themselves so as not to influence the participant - unless you’ve allocated time for questions.
Make the most of available resources: Given all the usability testing options out there, there’s really no excuse for not testing a design with users. Whether it’s time, money, human resources or all of the above making it difficult for you, there’s always something you can do. Think creatively about ways to engage users in the process and consider combining elements of different methods or scaling down to something like hallway testing or guerilla testing. It is far better to have a less than perfect testing method than to not test at all.
Never analyse your findings alone: Always analyse your usability testing results as a team or with at least one other person. Making sense of the results can be quite a big task and it is easy to miss or forget key insights. Bring the team together and affinity diagram your observations and notes after each usability testing session to ensure everything is captured. You could also use Reframer to record your observations live during each session because it does most of the analysis work for you by surfacing common themes and patterns as they emerge. Your whole team can use it too saving you time.
Engage your stakeholders by presenting your findings in creative ways: No one reads thirty-page reports anymore. Help your stakeholders and clients feel engaged and included in the process by delivering the usability testing results in an easily digestible format that has a lasting impact. You might create a one-page A4 summary, an A0 wall poster to tell everyone in the office the story of your usability testing, or a short video with snippets taken from your usability testing sessions (with participant permission, of course) to communicate your findings. Remember you’re also providing an experience for your clients and stakeholders, so make sure your results are as usable as what you just tested.
Knowing and understanding why and how your users use your product can be invaluable for getting to the nitty-gritty of usability: where they get stuck and where they fly through. Delving deep into motivation with probing questions, or skimming over the surface looking for issues, can be equally informative.
Usability testing can be done in several ways, and each has its benefits. Put super simply, usability testing is literally testing how usable your product is for your users. If your product isn’t usable, users won’t stick around or, very often, complete their task, let alone come back for more.
What is usability testing? 🔦
Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.
These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.
We go into the three main methods of usability testing:
Moderated and unmoderated
Remote or in person
Explorative, assessment or comparative
1. Moderated or unmoderated usability testing 👉👩🏻💻
Moderated usability testing is done in-person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, this can be expensive to organize and run.
Top tip: Use moderated testing to investigate the reasoning behind user behavior.
Unmoderated usability testing is done without direct supervision; participants are likely in their own homes and/or using their own devices to browse the website being tested, often at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions can be difficult.
Top tip: Use unmoderated testing to test a very specific question or to observe and measure behavior patterns.
2. Remote or in-person usability testing 🕵
Remote usability testing is done over the internet or by phone, giving participants the time and space to work in their own environment and at their own pace. However, this doesn’t give the researcher much in the way of contextual data, because you’re unable to ask questions about intention or probe deeper when a participant makes a particular decision. Remote testing doesn’t go as deep into a participant’s reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.
Top tip: Use remote testing when a large group of participants is needed and the questions asked can be direct and unambiguous.
In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.
Top tip: In-person testing gives researchers more time and insight into motivation for decisions.
3. Explorative, assessment or comparative testing 🔍
These three usability testing methods generate different types of information:
Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.
Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.
Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.
Top tip: Choose between these based on what research is being done and how much qualitative or quantitative data you want.
Which method is right for you? 🧐
Whether the testing is done in-person, remote, moderated or unmoderated will depend on your purpose, what you want out of the testing, and to some extent your budget.
Depending on what you are testing, each of the usability testing methods we’ve explored here can offer an answer. If you are at the development stage of a product, it can be useful to conduct a usability test on the entire product, checking the intuitive usability of your website to ensure users can make the best decisions quickly. Adding, changing or upgrading a product can also be the moment to check a specific question around usability. Planning and understanding your objectives are key to selecting the right usability testing option for your project.
Let's take a look at a couple of examples of usability testing.
1. In-person, moderated usability testing - ecommerce website
Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a bricks-and-mortar store might. You’ve noticed a drop in sales in certain areas. How do you find out what is going wrong or where users are getting lost? With an in-person, moderated usability test in a lab (or other controlled environment), you can set tasks for users and watch (and record) what they do.
The researcher can literally be standing or sitting next to the participant throughout, recording contextual information such as how they interacted with the mouse, laptop or even the seat. Watching for cues as to the comfort of the participant and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants, but couldn’t find the ‘yoga’ section which was listed under gym rather than a clothing section.
This means you can look at how your stock is organised, or even investigate running a card sort. It provides robust, well-rounded feedback on users’ behaviours, expectations and experiences, and data that can be turned directly into actionable directives when redeveloping the website.
2. Remote, unmoderated assessment testing - app product development
You are looking at launching an app for parents to access information and updates from the school. It’s still in the development stage, and at this point you want to know how easy the app is to use. After setting some very specific tasks for participants to complete, the app can be sent to them and they can be left to complete the tasks (or not), providing feedback and comments around its usability.
The next step may be to use first click testing to see how and where the interface is clicked, and where participants may be spending time or becoming lost. While the feedback and data gathered from this testing can be light, it will speak directly to the questions asked and will provide data to back up (or possibly challenge) the assumptions that were made.
3. Moderated, in-person, explorative testing - new product development
You’re right at the start of the development process. The idea is new and fresh and the basics are being considered. What better way to get an understanding of what your users truly want than an explorative study?
Open-ended questions with participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market but none as exciting as what your team has dreamt up. By putting it (and possibly the competitors) to participants they can give direct feedback on what they like, love and loathe.
They can also help brainstorm ideas or better ways to make the app work, or improve the interface. All of this happens before any money is sunk into development.
Wrap up 🌯
Key objectives will dictate which usability testing method will deliver the answers to your questions.
Whether it’s in-person, remote, moderated or comparative, with a bit of planning you can gather data about your users’ very real experience of your product and identify issues, successes and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.
Usability testing is one of the best ways to measure how easy and intuitive something is to use by testing it with real people. You can read about the basics of usability testing here.
Earlier this year, a small team within Optimal Workshop completely redesigned the company blog. More than anything, we wanted to create something that was user-friendly for our readers and would give them a reason to return. I was part of that team, and we ran numerous sessions interviewing regular readers as well as people unfamiliar with our blog. We also ran card sorts, tree tests and other studies to find out all we could about how people search for UX content. Unsurprisingly, one of the most valuable activities we did was usability testing – sitting down with representative users and watching them as they worked through a series of tasks we provided. We asked general questions like “Where would you go to find information about card sorting”, and we also observed them as they searched through our website for learning content.
By stripping away any barriers between ourselves and our users and observing them as they navigated through our website and learning resources, as well as those of other companies, we were able to build a blog with these people’s behaviors and motivations in mind.
Usability testing is an invaluable research method, and every user researcher should be able to run sessions effectively. Here are 5 tips for doing so, in no particular order.
1. Clarify your goals with stakeholders
Never go into a usability test blind. Before you ever sit down with a participant, make sure you know exactly what you want to get out of the session by writing down your research goals. This will help to keep you focused, essentially giving you a guiding light that you can refer back to as you go about the various logistical tasks of your research. But you also need to take this a step further. It’s important to make sure that the people who will utilize the results of your research – your stakeholders – have an opportunity to give you their input on the goals as early as possible.
If you’re running usability tests with the aim of creating marketing personas, for example, meet with your organization’s marketing team and figure out the types of information they need to create these personas. In some cases, it’s also helpful to clarify how you plan to gather this data, which can involve explaining some of the techniques you’re going to use.
Lastly, find out how your stakeholders plan to use your findings. If there are a lot of objectives, organize your usability test so you ask the most important questions first. That way, if you end up going off track or you run out of time you’ll have already gathered the most important data for your stakeholders.
2. Be flexible with your questions
A list of pre-prepared questions will help significantly when it comes time to sit down and run your usability testing sessions. But while a list is essential, sometimes it can also pay to ‘follow your nose’ and steer the conversation in a (potentially) more fruitful direction.
How many times have you been having a conversation with a friend over a drink or dinner, only for you both to completely lose track of time and find yourselves discussing something completely unrelated? While it’s not good practice to let your usability testing sessions get off track to this extent, you can surface some very interesting insights by paying close attention to a user’s behavior and answers during a testing session and following interesting leads.
Ideally, and with enough practice, you’ll be able to answer your core (prepared) questions and ask a number of other questions that spring to mind during the session. This is a skill that takes time to master, however.
3. Write a script for your sessions
While a usability test script may sound like a fancy name for your research questions, it’s actually a document that’s much more comprehensive. If you prepare it correctly (we’ll explain how below), you’ll have a document that you can use to capture in-depth insights from your participants.
Here are some of the key things to keep in mind when putting together your script:
Write a friendly introduction – It may sound obvious, but taking the time to come up with a friendly, warm introduction will get your sessions off to a much better start. The bonus of writing it down is that you’re far less likely to forget it!
Ask to record the session – It’s important to record your session (whether through video or audio), as you’ll want to go back later and analyze any details you may have missed. This means asking for explicit permission to record participants. In addition to making them feel more comfortable, it’s just good practice to do so.
Allocate time for the basics – Don’t dive into the complex questions first; use the first few minutes to gather basic data. This could be things like where they work and their familiarity with your organization and/or product.
Encourage them to explain their thought process – “I’d like you to explain what you’re doing as you make your way through the task”. This simple request will give you an opportunity to ask follow-up questions that you otherwise may not have thought to ask.
Let participants know that they’re not being tested – Whenever a participant steps into the room for a test, they’re naturally going to feel like they’re being tested. Explain that you’re testing the product, not them. It’s also helpful to let them know that there are no right or wrong answers. This is an important step if you want to keep them relaxed.
It’s often easiest to have a document with your script printed out and ready to go for each usability test.
4. Take advantage of software
You’d never see a builder without a toolbox full of a useful assortment of tools. Likewise, software can make the life of a user researcher that much easier. The paper-based ways of recording information are still perfectly valid, but introducing custom tools can make both the logistics of user research and the actual sessions themselves much easier to manage.
Take a tool like Calendly, for example. This is a powerful piece of scheduling software that almost completely takes over the endless back and forth of scheduling usability tests. Calendly acts as a middle man between you and your participants, allowing you to set the times you’re free to host usability tests, and then allowing participants to choose a session that suits them from these times.
Our very own Reframer makes the task of running usability tests and analyzing insights that much easier. During your sessions, you can use Reframer to take comprehensive notes and apply tags like “positive” or “struggled” to different observations. Then, after you’ve concluded your tests, Reframer’s analysis function will help you understand wider themes that are present across your participants.
There’s another benefit to using a tool like Reframer. Keeping all of your notes in one place means you can easily pull up data from past research sessions whenever you need to.
5. Involve others
Usability tests (and user interviews, for that matter) are a great opportunity to open up research to your wider organization. Whether it’s stakeholders, other members of your immediate team or even members of entirely different departments, giving them the chance to sit down with users will show them how their products are really being used. If nothing else, these sessions will help those within your organization build empathy with the people they’re building products for.
There are quite a few ways to bring others in, such as:
To help you set up the research – This can be a helpful exercise for both you (the researcher) and the people you’re bringing in. Collaborate on the overarching research objectives, ask them what types of results they’d like to see and what sort of tasks they think could be used to gather these results.
As notetakers – Having a dedicated notetaker will make your life as a researcher significantly easier. This means you’ll have someone to record any interesting observations while you focus on running the session. Just let them know what types of notes you’d like to see.
To help you analyze the data – Once you’ve wrapped up your usability testing sessions, bring others in to help analyze the findings. There’s a good chance that an outside perspective will catch something you may miss. Also, if you’re bringing stakeholders into the analysis stage, they'll get a clearer picture of what it means and where the data came from.
There are myriad other tips and best practices to keep in mind when usability testing, many of which we cover in our introductory page. Important considerations include taking good quality notes, carefully managing participants during the session (not giving them too much guidance) and remaining neutral throughout when answering their questions. If you feel like we’ve missed any really important points, feel free to leave a comment!
Read more
Usability testing 101 – About to run your first usability test? Check out our handy 101 guide. You’ll also learn how you can use our qualitative research tool Reframer most effectively.
What is usability testing? – Learn about the concepts of usability testing and how you can develop a plan for your own testing sessions.
Are your visitors really getting the most out of your website? Tree testing (sometimes referred to as reverse card sorting) takes away the guesswork by telling you how easily, or not, people can find information on your website. Discover why Treejack is the tool of choice for website architects.
What’s tree testing and why does it matter? 🌲 👀
Whether you’re building a website from scratch or improving an existing website, tree testing helps you design your website architecture with confidence. How? Tools like Treejack analyze how findable your content is for people visiting your website.
It helps answer burning questions like:
Do my labels make sense?
Is my content grouped logically?
Can people find what they want easily and quickly? If not, why not?
Treejack provides invaluable intel for any Information Architect. Why? Knowing where and why people get lost trying to find your content, gives you a much better chance of fixing the actual problem. And the more easily people can find what they’re looking for, the better their experience which is ultimately better for everyone.
How does tree testing work? 🌲🌳🌿
Tree testing can be broken down into two main parts:
The Tree - Your tree is essentially your site map – a text-only version of your website structure.
The Task - Your task is the activity you ask participants to complete by clicking through your tree and choosing the information they think is right. Tools like Treejack analyse the data generated from doing the task to build a picture of how people actually navigated your content in order to try and achieve your task. It tells you if they got it right or wrong, the path they took and the time it took them.
Whether you’re new to tree testing or already a convert, effective tree testing using Treejack has some key steps.
Step 1. The ‘Why’: Purpose and goals of tree testing
Ask yourself what part of your information architecture needs improvement: is it your whole website or just parts of it? Also think about your audience; they’re the ones you’re trying to improve the website for, so the more you know about their needs, the better.
Tip: Make the most of what tree testing offers to improve your website by building it into your overall design project plan.
Step 2. The ‘How’: Build your tree
You can build your tree using two main approaches:
Create your tree in a spreadsheet and import it into Treejack, or
Build your tree in Treejack itself, using the labels and structure of your website.
Tip: Your category labels are known as ‘parent nodes’. Your information labels are known as ‘child nodes’ (see the small sketch below).
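For illustration, a very small tree might look something like the sketch below, where parent nodes contain their child nodes. The labels are made up, and Treejack’s own import template defines the exact spreadsheet layout it expects, so treat this purely as a picture of the idea.

```python
# Illustrative only: a tiny text-only site structure (tree).
# The outer and intermediate keys are parent nodes; the empty dicts are child (leaf) nodes.
tree = {
    "Home": {
        "Products": {"Power tools": {}, "Storage": {}},
        "Services": {"Assembly": {}, "Returns": {}},
    }
}
```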
Step 3. The ‘What’: Write your tasks
The quality of your tasks will be reflected in the usefulness of your data so it’s worth making sure you create tasks that really test what you want to improve.
Tip: Use plain language that feels natural and try to write your tasks in a way that reflects the way people who visit your website might actually think when they are trying to find information on your site.
Step 4. The ‘Who’: Recruit participants
The quality of your data will largely depend on the quality of your participants. You want people who are as close to your target audience as possible and with the right attitude - willing and committed to being involved.
Tip: Consider offering some kind of incentive to participants – it shows you value their involvement.
Step 5. The ‘insights’: Interpret your results
Now for the fun part – making sense of the results. Treejack presents the data from your tree testing as a series of tables and visualizations. You can download them in a spreadsheet in their raw format or customized to your needs.
Tip: Use the results to gain quick, practical insights you can act on right away or as a starter to dive deeper into the data.
When should I use tree testing? ⌛
Tree testing is useful whenever you want to find out if your website content is labelled and organised in a way that’s easy to understand. What’s more it can be applied for any website, big (10+ levels with 10000s of labels) or small (3 levels and 22 labels) and any size in between. Our advice for using Treejack is simply this: test big, test small, test often.
Summary: User researcher Ashlea McKay runs through some of her top tips for carrying out advanced analysis in tree testing tool Treejack.
Tree testing your information architecture (IA) with Treejack is a fantastic way to find out how easy it is for people to find information on your website and pinpoint exactly where they’re getting lost. A quick glance at the results visualization features within the tool will give you an excellent starting point; however, your Treejack data holds a much deeper story that you may not be aware of or may be having trouble pinning down. It’s great to be able to identify a sticking point that’s holding your IA back, but you also want to see where that fits into the rest of the story: not just where people are getting lost in the woods, but why.
Thankfully, this is something that is super quick and easy to find — you just have to know where to look. To help you gain a fuller picture of your tree testing data, I’ve pulled together this handy guide of my top tips for running advanced analysis in Treejack.
Setting yourself up for success in the Participants tab
Treejack results are exciting and it can be all too easy to breeze past the Participants tab to get to those juicy insights as quickly as possible, but stopping for a moment to take a look is worth it. You need to ensure that everyone who has been included in your study results belongs there. Take some time to flick through each participant one by one and see if there’s anyone you’d need to exclude.
Keep an eye out for any of the following potential red flags:
People who skipped all or most of their tasks directly: their individual tasks will be labeled as ‘Direct Skipped’ and this means they selected the skip task button without attempting to complete the task at all.
People who completed all their tasks too quickly: those who were much faster than the median completion time listed in the Overview tab may have rushed their way through the activity and may not have given it much thought.
People who took a very long time to complete the study: it’s possible they left it open on their computer while they completed other tasks not related to your study and may not have completed the whole tree test in one sitting and therefore may not have been as focused on it.
Treejack also automatically excludes incomplete responses and marks them as ‘abandoned’, but you have full control over who is and isn’t included and you might like to reintroduce some of these results if you feel they’re useful. For example, you might like to bring back someone who completed 9 out of a total of 10 tasks before abandoning it as this might mean that they were interrupted or may have accidentally closed their browser tab or window before reaching the end.
You can add, remove or filter participant data from your overall results pool at any time during your analysis, but deciding who does and doesn’t belong at the very beginning will save you a lot of time and effort, something I certainly learned the hard way.
Once you’re happy with the responses that will be included in your results, you’re good to go. If you made any changes, all you have to do is reload your results which you can do from the Participants tab and all your data on the other tabs will be updated to reflect your new participant pool.
Getting the most out of your pietrees
Pietrees are the heart and soul of Treejack. They bring all the data Treejack collected on your participants’ journeys for a single task during your study together into one interactive and holistic view. After gaining an overall feel for your results by reviewing the task by task statistics under the Task Results tab, pietrees are your next stop in advanced analysis in Treejack.
How big does a pietree grow?
Start by reviewing the overall size of the pietree. Is it big and scattered with small circles representing each node (also called a ‘pie’ or a ‘branch’)? Or is it small with large circular nodes? Or is it somewhere in between? The overall size of the pietree can provide insight into how long and complex your participants’ pathways to their nominated correct answer were.
Smaller pietrees with bigger circular nodes like the one shown in the example below taken from a study I ran in 2018 testing IKEA’s US website, happen when participants follow more direct pathways to their destination — meaning they didn’t stray from the path that you set as correct when you built the study.
Example of a smaller and more direct pietree taken from a study I ran on IKEA’s US website in 2018.
This is a good thing! You want your participants to be able to reach their goal quickly and directly without clicking off into other areas but when they can’t and you end up with a much larger and more scattered pietree, the trail of breadcrumbs they leave behind them will show you exactly where you’re going wrong — also a good thing! Larger and more scattered pietrees happen when indirect and winding pathways were followed and sometimes you’ll come across a pietree like the one shown below where just about every second and third level node has been clicked on.
Example of a large scattered pietree from the previously mentioned study I ran on IKEA’s US website in 2018.
This can indicate that people felt quite lost in general while trying to complete their task because bigger pietrees tend to show large amounts of people clicking into the wrong nodes and immediately turning back. This is shown with red (incorrect path) and blue (back from here) color coding on the nodes of the tree and you can view exactly how many people did this along with the rest of that node’s activity by hovering over each one (see below image).
Information that appears in Treejack on hover over a node called ‘Products’ in the previously mentioned IKEA US website study from 2018 showing the full breakdown of its activity including how many people clicked on it and how many turned back from this point.
In this case people were looking for an electric screwdriver and while ‘Products’ was the right location for that content, there was something about the labels underneath it that made 28.1% of its total visitors think they were in the wrong place and turn back. It could be that the labels need a bit of work or more likely that the placement of that content might not be right — ‘Secondary Storage’ and ‘Kitchens’ (hidden by the hover window in the image above) aren’t exactly the most intuitive locations for a power tool.
Labels that might unintentionally misdirect your users
When analyzing your pietree keep an eye out for any labels that might be potentially leading your users astray. Were there large numbers of people starting on or visiting the same incorrect node of your IA? In the example shown below, participants were attempting to replace lost furniture assembly instructions and the pietree for this task shows that the 2 very similar labels of ‘Assembly instructions’ (correct location) and ‘Assembly’ (incorrect location) were likely tripping people up as almost half the participants in the study were in the right place (‘Services’), but took a wrong turn and ultimately chose the wrong destination node.
Example of a potential label trap found on a pietree from the previously mentioned study I ran on IKEA’s US website in 2018.
There’s no node like home
Have a look at your pietree to see the number of times ‘Home’ was clicked. If that number is more than twice that of your participants, this can be a big indicator that people were lost in your IA tree overall. I remember a project where I was running an intranet benchmarking tree test that had around 80 participants and ‘Home’ had been clicked on a whopping 648 times and the pietrees were very large and scattered. When people are feeling really lost in an IA, they’ll often click on ‘Home’ as a way to clear the slate and start their journey over again. The Paths tab — which we’re going to talk about next — will allow you to dig deeper into findings like this in your own studies.
Breaking down individual participant journeys in the Paths tab
While the pietrees bring all your participants’ task journeys together into one visualization, the Paths tab separates them out so you can see exactly what each individual got up to during each task in your study.
How many people took the scenic route?
As we discussed earlier, you want your IA to support your users and enable them to follow the most direct pathway to the content that will help them achieve their goal. The paths table will help show you if your IA is there yet or if it needs more work. Path types are color coded by directness and also use arrows to communicate which direction participants were traveling in at each point of their journey so you can see where in the IA that they were moving forward and where they were turning back. You can also filter by path type by checking/unchecking the boxes next to the colours and their text-based label names at the top of the table.
Here’s what those types mean (a minimal sketch of the classification logic follows the list):
Direct success: Participants went directly to their nominated response without backtracking and chose the correct option - awesome!
Indirect success: Participants clicked into a few different areas of the IA tree and turned around and went back while trying to complete their task, but still reached the correct location in the end
Direct failure: Participants went directly to their nominated response without backtracking but unfortunately did not find the correct location
Indirect failure: Participants clicked into a few different areas of the IA tree and some backtracking occurred, but they still weren’t able to find the correct location
Direct skip: Participants instantly skipped the task without clicking on any of your IA tree nodes
Indirect skip: Participants attempted to complete the task but ultimately gave up after clicking into at least one of your IA tree’s nodes.
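To make those path types concrete, here’s a minimal sketch of the classification logic in Python. It assumes a path is simply the list of nodes a participant clicked in order and that we know each node’s parent; the names are my own and nothing here is built into Treejack.

```python
# Illustrative sketch: classifying a tree test path into the six types listed above.
PARENT = {  # toy tree: child -> parent
    "Products": "Home",
    "Services": "Home",
    "Assembly": "Services",
    "Assembly instructions": "Services",
}

def classify_path(path, nominated, correct_nodes, parent=PARENT):
    """path: nodes clicked in order; nominated: final answer chosen (None if skipped)."""
    if nominated is None:
        # Skipped: direct if they never clicked into the tree beyond the starting point.
        return "Direct skip" if len(path) <= 1 else "Indirect skip"
    # Direct = every click moved one level deeper; any move back up counts as backtracking.
    direct = all(parent.get(child) == current for current, child in zip(path, path[1:]))
    outcome = "success" if nominated in correct_nodes else "failure"
    return f"{'Direct' if direct else 'Indirect'} {outcome}"

print(classify_path(["Home", "Services", "Assembly instructions"],
                    "Assembly instructions", {"Assembly instructions"}))  # Direct success
```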
Example of a long path in Treejack taken from a study I ran on the footer of Sephora’s US website in late 2017.
It’s also important to note that while some tasks may appear to be successful on the surface — e.g., your participants correctly identified the location of that content — if they took a convoluted path to get to that correct answer, something isn’t quite right with your tree and it still needs work. Success isn’t always the end of the story and failed tasks aren’t the only ones you should be checking for lengthy paths. Look at the lengths of all your paths to gain a full picture of how your participants experienced your IA.
Take a closer look at the failed attempts
If you’re seeing large numbers of people failing tasks — either directly or indirectly — it’s worth taking a closer look at the paths table to find out exactly what they did and where they went. Did multiple people select the same wrong node? When people clicked into the wrong node, did they immediately turn back or did they keep going further down? And if they kept going, which label or labels made them think they were on the right track?
In the Sephora study example on that task I mentioned earlier where no one was successful in finding the correct answer, 22% of participants (7 people) started their journey on the wrong first node of ‘Help & FAQs’ and not one of those participants turned back beyond that particular Level 1 starting point (ie clicked on ‘Home’ to try another path). Some did backtrack during their journey but only as far back as the ‘Help & FAQs’ node that they started on indicating that it was likely the label that made them think they were on the right track. We’ll also take a closer look at the importance of accurate first clicks later on in this guide.
Task 1 paths table taken from a study I ran on the footer of Sephora’s US website in late 2017 — full table not shown.
How many people skipped the task and where?
Treejack allows participants to skip tasks either before attempting a task or during one. The paths table will show you which node the skip occurred at, how many other nodes were clicked before they threw in the towel and how close (or not) they were to successfully completing their task. People skipping tasks in the real world affects conversion rates and more, but if you can find out where it’s happening in the IA during a tree test, you can improve it and better support your users and in turn meet your business goals.
Coming back to that Sephora study, when participants were looking to book an in-store beauty consultation, Participant 14 (see below image) was in the right area of the IA a total of 5 times during their journey (‘About Sephora’ and ‘Ways to Shop’). Each time they were just 2-3 clicks away from finding the right location for that content, but ultimately ended up skipping the task. It’s possible that the labels on the next layer down didn’t give this participant what they needed to feel confident they were still on the right track.
Example of an indirect skip in Treejack taken from a study I ran on the footer of Sephora’s US website in late 2017.
Finding out if participants started out on the right foot in the First clicks tab
Borrowing a little functionality from Chalkmark, the first clicks tab in Treejack will help you to understand if your participants started their journey on the right foot because that first click matters! Research has shown that people are 2-3 times as likely to successfully complete their task if they start out on the right first click.
This is a really cool feature to have in Treejack because Chalkmark is image based, but when you’re tree testing you don’t always have a visual thing to test. And besides, a huge part of getting the bones of an IA right is being deliberately free of visual distraction! Having this functionality in Treejack means you can start finding out if people are on the right track from much earlier stages in your project, saving you a lot of time and messy guesswork.
Under the First clicks tab you will find a table with 2 columns. The first column shows which nodes of your tree were clicked first and the percentage of your participants that did that, and the second column shows the percentage of participants that visited that node during the task overall. The first column will tell you how many participants got their first click right (the correct first click nodes are shown in bold text) and the second will tell you how many found their way there at some point during their journey overall, including those who went there first.
Have a look at how many participants got their first click right and how many didn’t. For those who didn’t, where did they go instead?
Also look at how the percentage of correct first clicks compares to the percentage of participants who made it there eventually but didn’t go there first — is the number in the second column the same or is it bigger? How much bigger? Are people missing the first click but still making it there in the end? Not the greatest experience, but better than nothing! Besides, that task’s paths table and pietree will help you pinpoint the exact location of the issues anyway so you can fix them.
When considering first-click data in your own Treejack study, just like you would with the pietrees, use the data under the Task results tab as a starting point to identify which tasks you’d like to take a closer look at. For example, in that Sephora study I mentioned, Task 5 showed some room for improvement. Participants were tasked with finding out if Sephora ships to PO boxes and only 44% of participants were able to do this as shown in the image below.
Task 5’s overview taken from a study I ran on the footer of Sephora’s US website in late 2017.
Looking at the first click table for this task (below), we can see that only 53% of participants overall started on the right first click which was ‘Help & FAQs’ (as shown in bold text).
Task 5’s first click table taken from a study I ran on the footer of Sephora’s US website in late 2017.
Almost half the participants who completed this task started off on the wrong foot and a quarter overall clicked on ‘About Sephora’ first. We also know that 69% of participants visited that correct first node during the task which shows that some people were able to get back on track, but almost a third of participants still didn’t go anywhere near the correct location for that content. In this particular case, it’s possible that the correct first click of ‘Help & FAQs’ didn’t quite connect with participants as the place where postage options can be found.
Discovering the end of the road in the Destinations tab
As we near the end of this advanced Treejack analysis guide, our last stop is the Destinations tab. Under here you’ll find a detailed matrix showing where your participants ended their journeys for each task across your entire study. It’s a great way to quickly see how accurate those final destinations were and if they weren’t, where people went instead. It’s also useful for tasks that have multiple correct answers because it can tell you which one was most popular with participants and potentially highlight opportunities to streamline your IA by removing unnecessary duplication.
Along the vertical axis of the grid you’ll find your entire IA tree expanded out and along the horizontal axis, you’ll see your tasks shown by number. For a refresher on which task is which, just hover over the task number on the very handy sticky horizontal axis. Where these 2 meet in the grid, the number of participants who selected that node of the tree for that task will be displayed. If there isn’t a number in the box — regardless of shading — no one selected that node as their nominated correct answer for that task.
The boxes corresponding to the correct nodes for each task are shaded in green. Numberless green boxes can tell you in one quick glance if people aren’t ending up where they should be and if you scroll up and down the table, you’ll be able to see where they went instead.
Red boxes with numbers indicate that more than 20% of people incorrectly chose that node as well as how many did that. Orange boxes with numbers do the same but for nodes where between 10% and 20% of people selected it. And finally, boxes with numbers and no shading, indicate that less than 10% selected that node.
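Restated as a tiny sketch, the shading rule above boils down to a simple threshold check. The function below is purely illustrative, just encoding the percentages described above; it isn’t part of Treejack.

```python
# Illustrative only: the Destinations grid shading thresholds described above.
def destination_shading(percent_choosing_node):
    """Map the share of participants who chose an incorrect node to its highlight colour."""
    if percent_choosing_node > 20:
        return "red"
    if percent_choosing_node >= 10:
        return "orange"
    return "no shading"
```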
In the below example taken from that Sephora study we’ve been talking about in this guide, we can see that ‘Services’ was one of the correct answers for Task 4 and no one selected it.
The Destinations table is as long as the IA when it’s fully expanded and when we scroll all the way down through it (see below), we can see that there were a total of 3 correct answers for Task 4. For this task, 8 participants were successful and their responses were split across the 2 locations for the more specific ‘Beauty Services’ with the one under ‘Book a Reservation’ being the most popular and potentially best placed because it was chosen by 7 out of the 8 participants.
When viewed in isolation, each tab in Treejack offers a different and valuable perspective on your tree test data and when combined, they come together to build a much richer picture of your study results overall. The more you use Treejack, the better you’ll get at picking up on patterns and journey pathways in your data and you’ll be mastering that IA in no time at all!