May 4, 2015
4 min

Collating your user testing notes

It’s been a long day. Scratch that - it’s been a long week! Admit it. You loved every second of it.

Twelve-hour days, the mad scramble to get the prototype ready in time, the stakeholders poking their heads in occasionally, dealing with no-show participants and the excitement around the opportunity to speak to real-life human beings about product or service XYZ. Your mind is exhausted but you are buzzing with ideas and processing what you just saw. You find yourself sitting in your war room with several pages of handwritten notes, and with your fellow observers you start popping open individually wrapped lollies left over from the day’s sessions. Someone starts a conversation about their favourite flavour and then the real fun begins. Sound familiar? Welcome to the post user testing debrief meeting.

How do you turn those scribbled notes and everything rushing through your mind into a meaningful picture of the user experience you just witnessed? And once you have that picture, what do you do next? Pull up a bean bag, grab another handful of those lollies we feed our participants and get comfy, because I’m going to share my idiot-proof, step-by-step guide for turning your user testing notes into something useful.

Let’s talk

Get the ball rolling by holding a post-session debrief meeting while it’s all still fresh in your collective minds. This can be done as one meeting at the end of the day’s testing, or you could have multiple quick debriefs in between testing sessions. Choose whichever option works best for you, but keep in mind this needs to be done at least once, and before everyone goes home and forgets everything. Get all observers and facilitators together in any meeting space that has a wall-like surface you can stick post-its to - you can even use a window! And make sure you use real post-its - the fake ones fall off!

Mark your findings (Tagging)

Before you put sharpie to post-it, it’s essential to agree as a group on how you will tag your observations. Tagging the observations now will make the analysis work much easier and help you to spot patterns and themes. Colour coding the post-its is by far the simplest and most effective option, and how you assign the colours is entirely up to you. You could have a different colour for each participant or testing session, you could use different colours to denote participant attributes that are relevant to your study (e.g. senior staff and junior staff), or you could use different colours to denote specific testing scenarios. There are many ways you could carve this up, and there’s no right or wrong way. Just choose the option that suits you and your team best, because you’re the ones who have to look at it and understand it. If you only have one colour of post-it (e.g. yellow), you could colour code with different pen colours or include some kind of symbol to help you track them.

Processing the paper (Collating)

That pile of paper is not going to process itself! Your next job as a group is to transpose your observations onto post-it notes. For now, just stick them to the wall in any old way that suits you. If you’re the organising type, you could group them by screen or testing scenario. The positioning will all change further down the process, so at this stage it’s important to just keep it simple. For issues that occur repeatedly across sessions, write each occurrence on its own post-it - doubles will be useful to see further down the track.

In addition to holding debrief meetings, you also need to round up everything that was used to capture the testing sessions. And I mean EVERYTHING.

Handwritten notes, typed notes, video footage and any audio recordings need to be reviewed just in case something was missed. Any handwritten notes should be typed up to assist you with the completion of the report. Don’t feel that you have to wait until the testing is completed before you start typing up your notes, because you will find they pile up very quickly, and if your handwriting is anything like mine… well, let’s just say my short term memory is often required to pick up the slack, and even that has its limits. Type them up in between sessions where possible and save each session as its own document. I’ll often use the testing questions or scenario-based tasks to structure my typed notes, and I find that makes it really easy to refer back to.

Now that you’ve processed all the observations, it’s time to start sorting them to surface behavioural patterns and make sense of it all.

Spotting patterns and themes through affinity diagramming

Affinity diagramming is a fantastic tool for making sense of user testing observations. In fact, it’s just about my favourite way to make sense of any large mass of information. It’s an engaging and visual process that grows and evolves like a living creature taking on a life of its own. It also builds on the work you’ve just done, which is a real plus!

By now, testing is over and all of your observations should be stuck to a wall somewhere. Get everyone together again as a group, step back and take it all in. Just let it sit with you for a moment before you dive in. Just let it breathe. Have you done that? Ok, now, as individuals working at the same time, start by grouping things that you think belong together. It’s important to focus on the content of the labels and ignore the colour coded tagging at this stage - if session one was blue post-its, don’t group all the blue ones together just because they’re all blue! If you get stuck, try grouping by topic, or create two groups (e.g. issues and wins) and then chunk the information up from there.

You will find that the groups will change several times over the course of the process, and that’s ok, because that’s what it needs to do. While you do this, everyone else will be doing the same thing - grouping things that make sense to them. Trust me, it’s nowhere near as chaotic as it sounds! You may start working as individuals, but it won’t be long before curiosity kicks in and the room is buzzing with naturally occurring conversation. Make sure you take a step back regularly and observe what everyone else is doing, and don’t be afraid to ask questions and move other people’s post-its around - no one owns it! No matter how silly something may seem, just put it there, because it can be moved again.

Have a look at where your tagged observations have ended up. Are there clusters of colour? Or is it more spread out? What that means will depend largely on how you decided to tag your findings. For example, if you assigned each testing session its own colour and you have groups with lots of different colours in them, you’ll know that the same issue was experienced by multiple people.

Next, start looking at each group and see if you can break it down into smaller groups, and at the same time consider the overall picture for bigger groups - can the wall be split into, say, three high level groups? Remember, you can still change your groups at any time.
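
If you’ve also typed up your notes, a few lines of code can answer the clusters-of-colour question for you. Here’s a minimal sketch in Python, assuming you transcribed each observation as an affinity group plus the colour of its tag - the group names and colours below are made-up examples, not part of the process itself:

```python
from collections import Counter

# Hypothetical transcription: one (affinity_group, tag_colour) pair per post-it.
# Assume each testing session was assigned its own post-it colour.
observations = [
    ("navigation", "blue"),   # session 1
    ("navigation", "green"),  # session 2
    ("navigation", "pink"),   # session 3
    ("checkout", "blue"),
    ("checkout", "blue"),
]

# Count tag colours within each affinity group to spot clusters.
by_group: dict[str, Counter] = {}
for group, colour in observations:
    by_group.setdefault(group, Counter())[colour] += 1

for group, colours in by_group.items():
    # Many distinct colours in one group = the same theme showed up across sessions.
    print(f"{group}: {dict(colours)} ({len(colours)} session(s) represented)")
```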

Thinning the herd (Merging)

Once you and your team are happy with the groups, it’s time to start condensing the size of this beast. Look for doubled-up findings and stack those post-its on top of each other to cut the groups down - just make sure you can still see how many there were. The point of merging is to condense without losing anything, so don’t remove something just because it only happened once. That one issue could be incredibly serious. Continue to evaluate and discuss as a group until you are happy. By now, clear and distinct groups of observations should have emerged, and at a glance you should be able to identify the key findings from your study.
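
The same stack-them-but-keep-the-count idea carries over to your typed notes. A minimal sketch, again with made-up findings, using Python’s collections.Counter to merge duplicates without losing the tally:

```python
from collections import Counter

# Hypothetical findings transcribed from the wall; duplicates are deliberate.
findings = [
    "Couldn't find the help button",
    "Couldn't find the help button",
    "Thought the yellow button was disabled",
    "Couldn't find the help button",
]

# Merge duplicates while preserving how many times each one occurred,
# like stacking identical post-its with the count written on top.
for finding, count in Counter(findings).most_common():
    print(f"x{count}  {finding}")
```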

A catastrophe or a cosmetic flaw? (Scoring)

Scoring relates to how serious the issues are and how bad the consequences of not fixing them would be. There are arguments for and against the use of scoring, and it’s important to recognise that it is just one way to communicate your findings.

I personally rarely use scoring systems. It’s not really something I think about when I’m analysing the observations, and I rarely rank one problem or finding over another. Why? Because all data is good data and it all adds to the overall picture. I’ve always been a huge advocate for presenting the whole story, and I will never diminish the significance of one finding by boosting another. That said, I do understand the perspective of those who place metrics around their findings. Other designers have told me they feel that it allows them to quantify the seriousness of each issue and help their client/designer/boss make decisions about what to do next.

We’ve all got our own way of doing things, so I’ll leave it up to you to choose whether or not you score the issues. If you decide to score your findings, there are a number of scoring systems you can use, and if I had to choose one, I quite like Jakob Nielsen’s methodology for the simple way it takes multiple factors into consideration. Ultimately you should choose the one that suits your working style best.
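
For reference, here’s roughly what that scale looks like. This is just a minimal sketch, with the severity wording paraphrased from Nielsen’s published severity ratings; the example finding is hypothetical:

```python
# Nielsen's 0-4 severity scale (wording paraphrased from his published
# severity ratings). He suggests weighing frequency, impact, and persistence
# when choosing a score.
SEVERITY = {
    0: "Not a usability problem",
    1: "Cosmetic problem only - fix if time allows",
    2: "Minor usability problem - low priority",
    3: "Major usability problem - high priority",
    4: "Usability catastrophe - imperative to fix before release",
}

def describe(score: int) -> str:
    return f"{score}: {SEVERITY[score]}"

# Hypothetical finding: participants could not complete checkout at all.
print(describe(4))
```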

Let’s say you did decide to score the issues. Start by writing down each key finding on its own post-it and move to a clean wall or window - leave your affinity diagram where it is. Divide the new wall in half: one side for wins (findings that indicate things that tested well) and the other for issues. You don’t need to score the wins, but you do need to acknowledge what went well, because knowing what you’re doing well is just as important as knowing where you need to improve. As a group (wow, you must be getting sick of each other! Make sure you go out for air from time to time!), score the issues based on your chosen methodology.

Once you have completed this entire process, you will have everything you need to write a kick ass report.

What could possibly go wrong? (and how to deal with it)

No process is perfect and there are a few potential dramas to be aware of:

People jumping into solution mode too early

In the middle of the debrief meeting, someone has an epiphany. Shouts of “We should move the help button!” or “We should make the yellow button smaller!” ring out, and the meeting goes off the rails. I’m not going to point fingers and blame any particular role, because we’ve all done it, but it’s important to recognise that’s not why we’re sitting here. The debrief meeting is about digesting and sharing what you and the other observers just saw. Observing and facilitating user testing is a privilege. It’s a precious thing that deserves respect, and if you jump into solution mode too soon, you may miss something. Keep the conversation on track by appointing a team member to facilitate the debrief meeting.

Storage problems

Handwritten notes taken by multiple observers over several days of testing add up to an enormous pile of paper. Not only is it a ridiculous waste of paper, but those notes have to be securely stored for three months following the release of the report. It’s not pretty. Typing them up can solve that issue, but it comes with its own set of storage-related hurdles. Just like the handwritten notes, typed notes need to be stored securely. They don’t belong on SharePoint or in the share drive or any other shared storage environment that can be accessed by people outside your observer group. User testing notes are confidential and are not light reading for anyone and everyone, no matter how much they complain. Store any typed notes in a limited access storage solution that only the observers have access to, and if anyone who shouldn’t be reading them asks, tell them that they are confidential and the integrity of the research must be preserved and respected.

Time issues

Before the storage dramas begin, you have to actually pick through the mountain of paper, not to mention the video footage and audio recordings, and chase up that sneaky observer who disappeared when the clock struck five. All of this takes a lot of time. Another time-related issue comes in the form of too much time passing between testing sessions and debrief meetings. The best way to deal with both of these issues is to be super organised and hold multiple smaller debriefs in between sessions where possible. As a group, work out your time commitments before testing begins and have a clear plan in place for when you will meet. This will prevent everything piling up and overwhelming you at the end.

Disagreements over scoring

At the end of that long day or week, we’re all tired, and discussions around scoring the issues can get a little heated. One person’s showstopper may be another person’s mild issue. Many ranking systems use words as well as numbers to measure severity, and it’s easy to get caught up in the meaning of the words and get sidetracked from the task at hand. Be proactive: as a group, set ground rules upfront for all discussions. Determine how long you’ll spend discussing an issue and what you will do in the event that agreement cannot be reached. People want to feel heard, and they want to feel like their contributions are valued. Given that we are talking about an iterative process, sometimes it’s best just to write everything down to keep people happy, then merge and cull the list in the next iteration. By then, people have likely had time to reevaluate their own thinking.

And finally...

We all have our own ways of making sense of our user testing observations and there really is no right or wrong way to go about it. The one thing I would like to reiterate is the importance of collaboration and teamwork. You cannot do this alone, so please don’t try. If you’re a UX team of one, you probably already have a trusted person that you bounce ideas off. They would be a fantastic person to do this with. How do you approach this process? What sort of challenges have you faced? Let me know in the comments below.

Author: Optimal Workshop

Related articles


Does the first click really matter? Treejack says yes

In 2009, Bob Bailey and Cari Wolfson published a paper entitled “First Click Usability Testing: A new methodology for predicting users’ success on tasks”. They’d analyzed 12 scenario-based user tests and concluded that the first click people make is a strong leading indicator of their ultimate success on a given task. Their results were so compelling that we got all excited and created Chalkmark, a tool especially for first click usability testing. It occurred to me recently that we’ve never revisited the original premise for ourselves in any meaningful way.

And then one day I realized that, as if by magic, we’re sitting on quite possibly the world’s biggest database of tree test results. I wondered: can we use these results to back up Bob and Cari’s findings (and thus the relevance of Chalkmark)? Hell yes we can. So we’ve analyzed tree testing data from millions of responses in Treejack, and we’re thrilled (relieved) that it confirmed the findings from the 2009 paper — convincingly.

What the original study found

Bob and Cari analyzed data from twelve usability studies on websites and products ‘with varying amounts and types of content, a range of subject matter complexity, and distinct user interfaces’. They found that people were about twice as likely to complete a task successfully if they got their first click right than if they got it wrong:

  • If the first click was correct, the chances of getting the entire scenario correct was 87%
  • If the first click was incorrect, the chances of eventually getting the scenario correct was only 46%

What our analysis of tree testing data has found

We analyzed millions of tree testing responses in our database. We've found that people who get the first click correct are almost three times as likely to complete a task successfully:

  • If the first click was correct, the chances of getting the entire scenario correct was 70%
  • If the first click was incorrect, the chances of eventually getting the scenario correct was 24%

To give you another perspective on the same data, here's the inverse:

  • If the first click was correct, the chances of getting the entire scenario incorrect was 30%
  • If the first click was incorrect, the chances of getting the whole scenario incorrect was 76%

How Treejack measures first clicks and task success

Bob and Cari proved the usefulness of the methodology by linking two key metrics in scenario-based usability studies: first clicks and task success. Chalkmark doesn't measure task success — it's up to the researcher to determine as they're setting up the study what constitutes 'success', and then to interpret the results accordingly. Treejack does measure task success — and first clicks.

In a tree test, participants are asked to complete a task by clicking through a text-only version of a website hierarchy, and then clicking ‘I’d find it here’ when they’ve chosen an answer. Each task in a tree test has a pre-determined correct answer — as was the case in Bob and Cari’s usability studies — and every click is recorded, so we can see participant paths in detail.

Thus, every single time a person completes an individual Treejack task, we record both their first click and whether they were successful or not. When we came to test the ‘correct first click leads to task success’ hypothesis, we could therefore mine data from millions of tasks.
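
As a rough illustration of that analysis (with toy data rather than the real Treejack database), here’s a minimal sketch that computes the two conditional success rates from a list of (first_click_correct, task_success) records:

```python
# Toy records: (first_click_correct, task_success). The real analysis ran
# over millions of recorded Treejack responses; these six are invented.
records = [
    (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False),
]

def success_rate(rows: list[tuple[bool, bool]]) -> float:
    return sum(success for _, success in rows) / len(rows)

correct_first = [r for r in records if r[0]]
incorrect_first = [r for r in records if not r[0]]

# In our dataset these came out at roughly 70% and 24% respectively.
print(f"P(success | first click correct):   {success_rate(correct_first):.0%}")
print(f"P(success | first click incorrect): {success_rate(incorrect_first):.0%}")
```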

To illustrate this, have a look at the results for one task. In the overall Task result, you see a score for success and directness, and a breakdown of whether each Success, Fail, or Skip was direct (they went straight to an answer) or indirect (they went back up the tree before they selected an answer):

[Screenshot: overall task results]

In the pietree for the same task, you can look in more detail at how many people went the wrong way from a label (each label representing one page of your website):

[Screenshot: pietree view]

In the First Click tab, you get a percentage breakdown of which label people clicked first to complete the task:

[Screenshot: First Click tab]

And in the Paths tab, you can view individual participant paths in detail (including first clicks), and can filter the table by direct and indirect success, fails, and skips (this table is only displaying direct success and direct fail paths):

[Screenshot: Paths tab]

How to get busy with first click testing

This analysis reinforces something we already knew: first clicks matter, and it is worth your time to get that first impression right. You have plenty of options for measuring the link between first clicks and task success in your scenario-based usability tests. From simply noting where your participants go during observations to gathering quantitative first click data via online tools, you’ll win either way. And if you want to add the latter to your research, Chalkmark can give you first click data on wireframes and landing pages, and Treejack on your information architecture.

To finish, it’s worth seeking out other researchers’ insights on getting the most from first click testing.


Quantifying the value of User Research in 2024 

Think your company is truly user-centric? Think again. Our groundbreaking report on UX Research (UXR) in 2024 shatters common assumptions about our industry.

We've uncovered a startling gap between what companies say about user-centricity and what they actually do. Prepare to have your perceptions challenged as we reveal the true state of UXR integration and its untapped potential in today's business landscape.

The startling statistics 😅

Here's a striking finding: only 16% of organizations have fully embedded UXR into their processes and culture. This disconnect between intention and implementation underscores the challenges in demonstrating and maximizing the true value of user research.

What's inside the white paper 👀

In this comprehensive white paper, we explore:

  • How companies use and value UX research
  • Why it's hard to show how UX research helps businesses
  • Why having UX champions in the company matters
  • New ways to measure and show the worth of UX research
  • How to share UX findings with different people in the company
  • New trends changing how people see and use UX research

Stats sneak peek 🤖

- Only 16% of organizations have fully embedded UX Research (UXR) into their processes and culture. This highlights a significant gap between the perceived importance of user-centricity and its actual implementation in businesses.

- 56% of organizations aren't measuring the impact of UXR at all. This lack of measurement makes it difficult for UX researchers to demonstrate the value of their work to stakeholders.

- 68% of respondents believe that AI will have the greatest impact on the analysis and synthesis phase of UX research projects. This suggests that while AI is expected to play a significant role in UXR, it's seen more as a tool to augment human skills rather than replace researchers entirely.

The UX research crossroads 🛣️

As our field evolves with AI, automation, and democratized research, we face a critical juncture: how do we articulate and amplify the value of UXR in this rapidly changing landscape? We’d love to know what you think! DM us on socials and let us know what you’re doing to bridge the gap.

Are you ready to unlock the full potential of UXR in your organization? 🔐

Download our white paper for invaluable insights and actionable strategies that will help you showcase and maximize the value of user research. In an era of digital transformation, understanding and leveraging UXR's true worth has never been more crucial.

Download the white paper

What's next?🔮

Keep an eye out for our upcoming blog series, where we'll delve deeper into key findings and strategies from the report. Together, we'll navigate the evolving UX landscape and elevate the value of user insights in driving business success and exceptional user experiences.


Why user research is essential for product development

Many organizations are aware that staying relevant is essential for their success. This can mean different things to different organizations, but it often means coming up with plenty of new, innovative ideas and products to keep pace with the demands and needs of the marketplace. It also means keeping up with the expectations and needs of your users, which often means shorter and shorter product development life cycles. While maintaining this pace can be daunting, it can also be a strength, tightening up your processes and cutting out unnecessary steps.

A vital part of developing new (or tweaking existing) products is considering the end user first. There really is no point in creating anything new if it isn’t meeting a need or filling a gap in the market. How can you make sure you are hitting the right mark? Ask your users. We look into some of the key user research methods available to help you in your product development process.

If you want to know more about how to fit research into your product development process, take a read here.

What is user research? 👨🏻💻

User experience (UX) research, or user research as it’s commonly referred to, is an important part of the product development process. Primarily, UX research involves using different research methods to gather qualitative and quantitative data and insights about how your users interact with your product. It is an essential part of developing, building, and launching a product that truly meets the needs, desires, and requirements of your users. 

At its simplest, user research is talking to your users, understanding what they want and why, and using this to deliver what they need.

How does user research fit into the product development process? 🧩🧩

User research is an essential part of the product development process. By asking questions of your users about how your product works and what place it fills in the market, you can create a product that delivers what the market needs to those who need it. 

Without user research, you could be firing arrows in the dark or, at best, working from a very internal organizational view that assumes what you believe users need is what they want. With user research, you can collect qualitative and quantitative data that clearly tells you what users would like to see and how they would use it.

Investing in user research right at the start of the product development process can save the team and the organization heavy investment in time and money. With detailed data responses, your brand-new product can leapfrog many development hurdles, delivering a final product that users love and want to keep using. Firing arrows to hit a bullseye.

What user research methods should we use? 🥺

Qualitative Research Methods

Qualitative research is about exploration. It focuses on discovering things we cannot measure with numbers and typically involves getting to know users directly through interviews or observation.

Usability Testing – Observational

One of the best ways to learn about your users and how they interact with your new product is to observe them in their own environment. Watch how they accomplish tasks, the order they do things, what frustrates them, and what makes the task easier and/or more enjoyable for your subject. The data can be collated to inform the usability of your product, improving intuitive design and what resonates with your users.

Competitive Analysis

Reviewing products already on the market can be a great start to the product development process. Why are your competitors’ products successful? And how well do they behave for users? Learn from their successes, and even better, build on where they may not be performing as well and find where your product fills the gap in the market.

Quantitative Research Methods

Quantitative research is about measurement. It focuses on gathering data and then turning this data into usable statistics.

Surveys

Surveys are a popular user research method for gathering information from a wide range of people. In most cases, a survey will feature a set of questions designed to assess someone’s thoughts on a particular aspect of your new product. They’re useful for getting feedback or understanding attitudes, and you can use the learnings from your survey of a subset of users to draw conclusions about a larger population of users.
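
If you want to gauge how safely a survey result generalizes, one standard back-of-the-envelope check is the normal-approximation confidence interval for a proportion. A minimal sketch, assuming a simple random sample - the respondent numbers below are invented:

```python
import math

# 95% confidence interval for a survey proportion using the normal
# approximation; assumes a simple random sample from the population.
def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# e.g. 120 of 200 respondents preferred the new layout (made-up numbers)
low, high = proportion_ci(120, 200)
print(f"Observed 60%; 95% CI roughly {low:.1%} to {high:.1%}")
```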

Wrap Up 🌯

Gathering information on your users during the product development process, before you invest time and money, can be hugely beneficial to the entire process. Collating robust data and insights guides new product development, responds directly to user needs, and fills that all-important niche. User experience research shouldn’t stop at product development either; it should continue throughout each and every step of your product life cycle. If you want to find out more about UX research throughout the life cycle of your product, take a read of our article UX research for each product phase.
