Optimal Blog
Articles and Podcasts on Customer Service, AI and Automation, Product, and more

In our Value of UX Research report, nearly 70% of participants identified analysis and synthesis as the area where AI could make the biggest impact.
At Optimal, we're all about cutting the busywork so you can spend more time on meaningful insights and action. That’s why we’ve built automated Insights, powered by AI, to instantly surface key themes from your survey responses.
No extra tools. No manual review. Just faster insights to help you make quicker, data-backed decisions.
What You’ll Get with Automated Insights
- Instant insight discovery
Spot patterns instantly across hundreds of responses without reading every single one. Get insights served up with zero manual digging or theme-hunting.
- Insights grounded in real participant responses
We show the numbers behind every key takeaway, including percentage and participant count, so you know exactly what’s driving each insight. And when participants say it best, we pull out their quotes to bring the insights to life.
- Zoom in for full context
Want to know more? Easily drill down to the exact participants behind each insight for open text responses, so you can verify, understand nuances, and make informed decisions with confidence.
- Segment-specific insights
Apply any segment to your data and instantly uncover what matters most to that group. Whether you’re exploring by persona, demographic, or behavior, the themes adapt accordingly.
- Available across the board
From survey questions to pre- and post-study and post-task questions, you’ll automatically get Insights across all question types, including open text, matrix, ranking, and more.
Automate the Busywork, Focus on the Breakthroughs
Automated Insights are just one part of our ever-growing AI toolkit at Optimal, built to make it easier (and faster) to go from raw data to real impact. Another example is our AI Simplify tool, which helps you write better survey questions, effortlessly: our AI assistant suggests clearer, more effective wording to help you engage participants and get higher-quality data.
Ready to level up your UX research? Log into your account to get started with these newest capabilities, or sign up for a free trial to experience them for yourself.
Testing FAQs with people who don’t use your site
“Questions are never indiscreet, answers sometimes are.” - Oscar Wilde
Frequently asked question pages. Love them or hate them, I don’t think they’re going anywhere anytime soon. This debate has been going on for quite some time and there is an equal number of opinions on both sides of the FAQ fence. Nielsen Norman Group’s Susan Farrell says FAQs can still add value to a website when done properly, and Gerry McGovern says FAQs are the dinosaurs of web navigation.
So, how do we really know for sure if they will or won’t add value to a design? Like anything in UX, you have to test it! I don’t know about you, but I’m a shake-it-and-see-what-falls-out kind of UXer, so naturally I decided to run a Treejack study. Scouring the web one fine day, I came across Sainsbury’s Active Kids. Its FAQ page was unlike any I had ever seen and I knew I’d found the one. I was also curious to see how it would test with people who don’t use the website — after all, anyone should be able to use it. Since Active Kids is an active lifestyle program for UK schools and sports clubs, I recruited my participants entirely from the US.
Pull up a chair and get comfy because what I found out should serve as a lesson to us all.
Why Active Kids? 🤸🏼
First of all, why did I choose this one? The Active Kids FAQ page caught my attention for three main reasons:
- structure
- labels
- content
The structure of this FAQs page is quite deep, complex and very different from the rest of the site — almost like another information architecture (IA) had been built within the main structure. Imagine you have a large warehouse with hundreds of shelves, and then somewhere in the middle of it, someone builds a house — that’s how it felt to me.
There are two ways to get to it: through the “Help” label on the top navigation bar and the “FAQ” label in the footer. The section uses a combination of drop-down filters that the user needs to apply, along with automatic filter options and confusing labels that can send you down a path you don’t necessarily want to take.
I also found it very interesting that most of the information contained within the FAQs section cannot be located anywhere else on the website and most of this is essential to gaining a fundamental understanding of what Active Kids actually does. Adding to the house in the warehouse analogy, it’s like the house holds all the key information the warehouse needs to function, but no one knows which room it’s kept in.

Setting up the study 🛠️
Treejack was the perfect choice for testing the findability of information on the Active Kids FAQ page and I decided to test the IA of the website as a whole — this means both the warehouse and the house. I couldn’t just test the house in isolation because that’s not how a user would interact with it. The test needed the context of the whole site to gain an understanding of what’s going on. Creating a Treejack study is quick and easy and all you have to do is build the structure out in a basic Excel spreadsheet and then copy and paste it into Treejack.
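If it helps to picture that spreadsheet step, here is a rough sketch in Python, purely for illustration: the tree below is a made-up, simplified slice rather than the real Active Kids IA, and Treejack's actual import format may differ, so check the in-app instructions before pasting anything in.

```python
# A hypothetical, simplified slice of a site tree (not the real Active Kids IA).
tree = {
    "Home": {
        "Schools & Groups": {
            "Register your school": {},
            "Order equipment": {},
        },
        "Help": {
            "FAQs": {},
            "Contact us": {},
        },
    }
}

def to_rows(node, depth=0):
    """Flatten a nested tree into rows with one column per level,
    the way you might lay it out in a spreadsheet before pasting it in."""
    rows = []
    for label, children in node.items():
        rows.append([""] * depth + [label])
        rows.extend(to_rows(children, depth + 1))
    return rows

for row in to_rows(tree):
    print("\t".join(row))
```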
My next job was to determine the task-based scenarios that my participants would use during the study. I chose nine, all derived from content located in the FAQs section and related to tasks a user might carry out when investigating or participating in the program. Once I had my tree and my tasks, all I had to do was set the correct answers based on where the information currently sits on the Active Kids website, and I was ready to launch.
Recruiting participants for the study 🙋🏾
In my experience, recruiting participants for a Treejack study is quick and easy. All you have to do is determine the screener criteria for your participants and Optimal Workshop takes care of the rest. For this study I requested 30 participants, all of whom had to reside in the US. I ended up with 31 completed responses and it was all over in less than two hours.
Treejack results 🌲
So, what fell out of that tree when I tested a website aimed at parents and teachers of kids in the UK with 31 Americans? I’ll be honest with you: it wasn’t pretty. Here’s what I discovered in this study:

- 81 per cent were unable to find out if home educators were eligible to apply (number 1 on the graph)
- 65 per cent were unable to find out what a Clubmark accreditation is (number 2 on the graph)
- 68 per cent were unable to find out how to share their wishlist with friends and family (number 3 on the graph)
- 64 per cent could not find the information that would explain the purpose of the £1 fee mentioned in the terms and conditions (number 4 on the graph)
- 97 per cent could not locate the information that would tell them if they could use a voucher from 2014 in 2015 (number 5 on the graph)
- No participant was able to determine if students from a middle school would be able to participate in Active Kids (number 8 on the graph)
- 58 per cent of participants in this study were unable to find out what the program is even about (number 9 on the graph)
On the flip side, 68 per cent of participants in this study were able to locate a phone number to contact Active Kids directly (number 6 on the graph) and 97 per cent were successfully able to work out how to redeem vouchers (number 7 on the graph). Overall, it wasn’t great. In addition to some very useful quantitative data, Treejack also provides detailed information on the pathways followed by each participant.
Understanding the journey they took is just as valuable as discovering how many found their way to the correct destination. This additional level of granularity will show you where and when your user is getting lost in your structure and where they went next. It’s also handy for spotting patterns (e.g., multiple participants navigating to the same incorrect response).
I always set my studies to collect responses anonymously and when this occurs, Treejack assigns each participant a numerical identifier to help keep track of their experience without the participant having to share his or her personal details. For task 6, the paths chart below shows that participants numbered eight to 20 were able to navigate directly to the correct answer without deviating from the correct path I defined during setup.

For Task 3 (below), the story told by the paths was quite different. Participant number five navigated back and forth several times through the structure in their attempt to locate information on how to share a wishlist. After all that effort, they were unable to find the information they needed to complete the task and opted to contact Active Kids directly. Not only is this a bad experience for the user, but it also puts unnecessary pressure on the call centre because the information should be readily available on the website.

Treejack also provides insights into where participants started their journey by recording first click data. Just like Chalkmark, this functionality will tell you if your users are starting out on the right foot from that all-important first click. In this study I found it interesting that when looking for information regarding the eligibility of home educators in the Active Kids program, 42 per cent of participants clicked on “Schools & Groups” and 19 per cent clicked on “Parents & Community” for their first click. Only 6 per cent clicked on “Help”, which happens to be the only place this information can be found.

I also found the first click results for Task 9 to be very interesting. When looking for basic information on the program, more than half (52 per cent) of the participants in this study went straight to “Help”. This indicates that, for these participants, none of the other options were going to provide them the information they needed.

What can be learned from this study? 🎓
I mentioned earlier there was a lesson in this for everyone, and rather than dwell on how poorly the site tested, it’s time to move on to some lessons learned and constructive ideas for improvement. Based on the results of this Treejack study, here are my top three recommendations for improving the Active Kids website:
Rethink the content housed in the FAQs section
Most of the key information required to master the basics of what Active Kids is all about is housed entirely in the FAQs section. FAQs should not be the only place a user can find out basic information needed to understand the purpose of a product, program or service. I believe this website would benefit from some further thinking around what actually belongs in the FAQs section and what could be surfaced much higher.
Another idea would be to follow the lead of the Government Digital Service and remove the FAQs section altogether — food for thought. Frequently asked questions would not be frequently asked questions if people could actually find the information on your site in the first place. Figure out where the answers to these questions really belong.
If you’re using Treejack, just look at the fails in your results and figure out where people went first. Is there a trend? Is this the right place? Maybe think about putting the answer the user is looking for there instead.
Restructure the FAQs section
If you must have an FAQs section (and believe me, I do understand that they don’t just disappear overnight! Just try to keep it as an interim solution only), please consider streamlining the way it is presented to the user. Ditch the filtering and display the list on one page only. Users should not have to drill down through several layers of content and then navigate through each category. For further reading on getting your FAQs straight, this Kissmetrics article is well worth a read.
Review the intent of the website
Looking at the Active Kids website and the results from this study, I feel the intent of this website could use some refining. If we come back to my warehouse and house analogy, the main chunk of the website (the warehouse) seems to be one giant advertisement, while the house (the FAQs) is where the action-oriented stuff lies. The house seems to hold the key information that people need to use the program and I think it could be displayed better. Don’t get me wrong, Active Kids does some incredibly good work for the community and should absolutely shout its achievements from the rooftops, however a sense of balance is required here. I think it’s time for the house and the warehouse to join forces into a solution that offers both rooftop shouting and usable information that facilitates participation.
The value of fresh eyes 👀
This study goes to show that regardless of where you are in your design process, whether that’s at the very beginning or a few years post-implementation, there is value to be gained from testing with a fresh set of eyes. I’m still undecided on which side of the FAQs debate I belong to — I’m going to sit on the fence and stand by the “if in doubt — test it” school of thought.
Further reading:
- "Strategic design for frequently asked questions" - This ebook from Nielsen Norman Group explains how to design world-class FAQs.
- "Infrequently asked questions of FAQs" - An article from R. Stephen Gracey published on A List Apart, discussing FAQs and their place on the web.
- "Anatomy of a website: website architecture" - An article on our blog explaining website architecture and how to test your own structure.
Using paper prototypes in UX
In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world. It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. It would seem that they’re almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.
What’s a paper prototype anyway? 🧐📖
Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered to be low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple sketch using pen and paper to simulate an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any definitive judgements, let's weigh up their pros and cons.
Advantages 🏆
- They’re cheap and fast: Pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets, and don’t have the time or resources to design an elaborate user testing plan.
- Anyone can do it: Paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
- They encourage creativity: Both from the product teams participating in their design and from the users. They require the user to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
- They help minimize your chances of failure: Paper prototypes and user-centered design go hand in hand. Introducing real people into your design as early as possible can help verify whether you are on the right track, and generate feedback that may give you a good idea of whether your idea is likely to succeed or not.
Disadvantages 😬
- They’re not as polished as interactive prototypes: If executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if our users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
- The interaction is limited: Digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
- They require facilitation: With an interactive prototype you can assign your user tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator in communicating next steps and ensuring participants understand the task at hand.
- Their results have to be interpreted carefully: Paper prototypes can’t emulate the final experience entirely. It is important to interpret their findings while keeping their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.
Improving the interface of card sorting, one prototype at a time 💡
We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives. The first is to benchmark the current experience on laptops and tablets and identify ways in which we can improve the current interface. The second is to look at how we can improve the experience of card sorting on a mobile phone.
Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices. Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 per cent of our users utilize the app. We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.

Context is everything 🎯
What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.
This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set. The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.
Whether our success with using paper prototypes was determined by the specific setup of our testing sessions, or simply by their pure usefulness as a research technique is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of what exactly a card sort required. Therefore there is no guarantee that we would have achieved the same level of success in testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.
Final thoughts 💬
Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.
In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback. In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.
In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.
Further reading 🧠
- Why You Only Need To Test With 5 Users - Nielsen Norman Group’s Jakob Nielsen explains his thoughts on only using five users when conducting usability testing.
- Sketching for better mobile experiences - Lennart Hennigs explains how to sketch and why you should do it in an article for Smashing Magazine.
- Paper prototypes work as well as software prototypes - Bob Bailey explains why he thinks paper prototypes are just as good as software prototypes for usability testing.

How to write great questions for your research
“The art and science of asking questions is the source of all knowledge.” - Thomas Berger
In 1974, Elizabeth Loftus and John Palmer conducted a simple study to illustrate the impact of different wording on responses to a question. The two researchers showed participants footage of a car accident and asked them to estimate how fast the two vehicles were traveling when the incident happened.
One of the questions Loftus and Palmer asked was: “About how fast were the cars going when they smashed into each other?”, which elicited higher speed estimates than questions containing the verbs ‘collided’, ‘bumped’, ‘contacted’ or ‘hit’. Unsurprisingly, the ‘smashed’ group was also more likely to recall seeing broken glass at the scene, without there being any glass present. Small wording changes can impact your data in a big way. The way you ask a question not only frames how a person responds to it, but it can introduce unintended bias in your findings. If you intend to use the data you collect in meaningful ways — to identify issues, deepen your understanding or make evidence-based decisions — you want to ensure your data is of the highest possible quality. The best way to be confident that you’re collecting quality data and not wasting your time and resources is knowing how to avoid the common mistakes that plague question design.
Regardless of whether you’re adding some pre- or post-survey questions to your Treejack, OptimalSort or Chalkmark survey, or simply aiming to ask your questions on paper, in person or on your website, there are some basic principles you should follow. But first…
Questionnaire = survey?
Questionnaire and survey are terms that are frequently used interchangeably, and it is often tempting to treat them as synonyms. Unless you’re a word purist, chances are people will understand what you’re saying regardless of the term you use; however, it’s good to be aware of their differences in order to understand how they relate. A survey is a type of research design. It’s the process of gathering information for the purpose of measuring something, encompassing everything from design and sampling to data collection and analysis. Surveys involve aggregating your data to reveal patterns and draw conclusions.
A questionnaire is a method of data collection.
Traditionally, questionnaires are used to collect information on an individual level, and have use cases such as job or loan applications, patient history forms, and so on. Think of questionnaires as an instrument you can use when conducting a wider survey, alongside other methods such as face-to-face interviews. There are differences involved in collecting survey information by post, email, online, telephone or face to face, and each method comes with its own set of advantages and disadvantages to consider. For now, however, let’s keep things simple and focus on the very basic principles that will hold true regardless of the method you choose.
Here are some practical tips to help you become a confident question writer.
1. Think clearly about your needs 🤔💭
Clearly define your objectives. Start by asking yourself “What do I really need to learn?” When planning research, it’s tempting to jump right into writing your questions. However, taking a step back can save you a lot of time and frustration later down the road. Start by thinking about what you want to get out of your questions. Understand your information needs, draft your research questions and review them with your team or stakeholders before proceeding. Once you know what you want to get out of your study, you can narrow your focus and start to think about your objectives in greater detail. Being precise about the data you want to get out of your questions means it will be easier to plan how to organize and filter your findings.
2. Choose your words wisely 💬
Badly worded questions lead to poor quality data. To help you write better questions it’s good to be aware of seemingly obvious, yet common mistakes that can plague question writing. Here are some tips to follow.
Use clear, plain language.
Avoid technical descriptions, acronyms and jargon. If necessary, add a definition or some help text around your question to avoid confusion.
Be specific.
Avoid ambiguity in what you are asking. The more specific your question is, the more likely people are to understand it in the same way. “Where do you usually shop?” will likely be interpreted differently by each respondent.
Ensure your questions are neutral and unbiased.
Bias can be introduced into your questions in many ways:
- Avoid asking double-barrelled questions, e.g., “How satisfied are you with the use and visual feel of our website?”. Instead, stick to asking one question at a time.
- Leading or loaded questions use assumptions and emotional language to elicit particular responses. They (intentionally or unintentionally) bias respondents towards certain answers, e.g., “How happy are you with our service?” would become “How do you feel about our service?”
Set realistic timeframes.
Utilizing appropriate timeframes in your questions leads to better estimates and more reliable data from your respondents. When providing timeframes, be sure to keep them reasonable — some behaviors can be asked about on a yearly basis (e.g., switching internet providers), while others are easier to think of over the space of a week (e.g., supermarket visits). It is also important to be realistic about how much people are able to remember over time. If asking about satisfaction with a service in the past year, people are most likely to remember either their most recent or their worst experiences. Sticking to reasonable recall periods will lead to better quality data.
Don’t assume.
The way we experience the world influences our thinking, and it is important to be aware of your own biases to avoid questions that make assumptions, e.g., “How many UX Researchers do you have at your company?”, which assumes the company employs UX researchers in the first place.
Don’t play the negatives game.
Avoid the use of negatives and double negatives when writing your questions. On a cognitive level, negative questions take more time to comprehend and process. Double negatives include two negative aspects within a question, e.g., “Do you agree or disagree that it is not a good idea to not show up to work on time?”. Negatives and double negatives can lead to confusion and contradictory responses.
3. Think about your audience 👨👩👦👦
Who is likely to answer your questions? What are the characteristics of the people you are trying to target? Consider the group you are writing for, and what kind of language and terminology they may be familiar with. Remember that not everyone is a native speaker of your language, and no matter how sophisticated your vocabulary might be, plain language is going to lead to a better result. Context is important, and knowing your audience can impact their willingness to contribute to your research. Questions written for a sample of academics will differ in tone from those intended for high school students. Don’t be afraid to give your questions a casual feel if you’re trying to connect to a group that may otherwise be unwilling to provide their answers.

4. Don’t burden your participants 😒😖😩
No matter how great your questions are, if they are too long, complex or repetitive, it’s likely your respondents will quickly lose interest. Bored respondents lead not only to poor quality data, but also to higher nonresponse rates. Some subject areas lend themselves to higher respondent burden by nature, for example insurance, mortgages, or medical histories. Generally, if it’s not an immediate priority, avoid unnecessary details. A shorter set of high quality data is more valuable than a whole stack of potentially erroneous data collected via a lengthy questionnaire. One way to remedy respondent burden is to offer incentives like vouchers, discount codes or competition entries. Giving people a good reason to answer your questions will not only make it easier for you to find willing respondents, but may also increase engagement and lead to higher quality data.

5. Consider your response options (and avoid data insanity) 📊 🤪
It is important to be pragmatic when choosing your response options to avoid being swarmed with data that’s difficult to handle and analyze. Open questions invite respondents to elaborate and can help in identifying themes that closed questions may overlook. So, on the one hand they can provide a wealth of useful information, but on the other it is important to consider their practicality. If you want to collect 1,000 responses but don’t have the time or resources to review a multitude of varying open-ended data, consider whether it’s worth collecting in the first place. Open-ended questions can be useful when you’re not quite sure what you’re looking for, so unless you’re running an exploratory study on a small group of people, try to limit their use.
Closed questions force respondents to select an existing option from a list. They are quick to fill in, and easy to code and analyze. Closed questions can include tick boxes, scales or single choice radio buttons. When asking closed questions it’s important to ensure the response options you provide are balanced, exclusive (they don’t overlap) and exhaustive (they contain all possible answers), even if this means adding an ‘other — please specify’ or a ‘not applicable’ option. For potentially sensitive questions, it’s important to give your respondents a ‘prefer not to say’ option, as forcing responses may lead to higher dropout rates and poor quality data.
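To make those principles concrete, here is a tiny, hypothetical example of a closed question and its response options. The wording is invented for illustration and isn't taken from a real study; the point is that the options don't overlap, cover every plausible answer, and give respondents an easy way out.

```python
# A hypothetical closed question whose options are mutually exclusive,
# exhaustive, and include 'Not sure' / 'Prefer not to say' escape hatches.
question = "In a typical month, how often do you visit a supermarket?"
options = [
    "Never",
    "1 to 3 times",
    "4 to 7 times",
    "8 or more times",
    "Not sure",
    "Prefer not to say",
]
```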
6. Think carefully about order ➤➤➤
Question order is important as it can impact the truthfulness of the responses you collect. The general rule to follow is to start simple with easy, factual questions that are relevant to the objective of your survey. Additionally, it’s good to start with closed questions before introducing open-ended questions that may require more consideration. Once you get the basics out of the way, you can then introduce questions that are more specific, difficult, or abstract. Situate unrelated or demographic questions at the end. Once a rapport has been established, your respondents will be more likely to answer these questions without dropping out.
7. If in doubt, test 🧪🕵️
Pretesting your questions before you go out to collect your data is a great way to identify any immediate issues. In a lot of cases, a simple peer review by a friend or colleague will help identify the things that are likely to cause problems for respondents. For evaluating your questions more thoroughly, you may want to observe people as they make their way through your survey. This is a good time to see whether respondents are understanding and interpreting your questions in the same way, and will help identify issues with wording and response options. Getting your participants to think aloud is a useful technique for understanding how people are working through your questions.
8. Remember the basics! 🔤
Always explain the purpose of your research to your participants and how the information you collect will be used. Provide a statement that guarantees confidentiality and outline who will have access to the information provided. Above all, remember to thank your participants for their time. We’re all human, and people want to know that their contribution is valuable and appreciated.

Further reading 📚
- The psychology of survey respondents - Our CEO Andrew discusses the different kinds of motivation people have to respond to surveys.
- How to ask about user satisfaction in a survey - An article by Caroline Jarrett on UXMatters talks about how to gauge the satisfaction of your users.
- Keep online surveys short - Jakob Nielsen from Nielsen Norman Group explains how to get high response rates and great results by using shorter surveys.
- Avoiding bias in user testing - Our very own Agony Aunt explains how to avoid bias in your user testing.
- Reconstruction of automobile destruction: An example of the interaction between language and memory - The original study from Loftus and Palmer showing the different questions and responses.
How Andy is using card sorting to prioritize our product improvements
There has been a flurry of new faces in the Optimal Workshop office since the beginning of the year, myself included! One of the more recent additions is Andy (not to be confused with our CEO Andrew) who has stepped into the role of product manager. I caught up with Andy to hear about how he’s making use of OptimalSort to fast-track the process of prioritizing product improvements.
I was also keen to learn more about how he ensures our users are at the forefront throughout the prioritization process. Only a few weeks in, it’s no surprise that the current challenges of the product manager role are quite different to what they’ll be in a year or two. Aside from learning all he can about Optimal Workshop and our suite of tools, Andy says that the greatest task he currently faces is prioritizing the infinite list of things that we could do. There's certainly no shortage of high value ideas!
Product improvement prioritization: a plethora of methods
So, what’s the best approach for prioritization, especially when everything is brand new to you? Andy says that despite his experience working with a variety of people and different techniques, he’s found that there’s no single, perfect answer. Factors that could favor one technique over another include company strategy, type of product or project, team structure, and time constraints. Just to illustrate the range of potential approaches, this guide by Daniel Zacarias, a freelance product management consultant, discusses no less than 20 popular product prioritization techniques! Above all, a product manager should never make decisions in isolation; you can only be successful if you bring in experts on the business direction and the technical considerations — and of course your users!
Fact-packed prioritization with card sorting
For his first pass at tackling the lengthy list of improvements, Andy settled on running a prioritization exercise in OptimalSort. As an added benefit, this gave him the chance to familiarize himself with one of Optimal Workshop’s tools from a user’s perspective. In preparation for the sort, Andy ran quick interviews with members across the Optimal Workshop team in order to understand what they saw as the top priority features. The Customer Success and User Research teams, in particular, were encouraged to contribute suggestions directly from the wealth of user feedback that they receive.
From these suggestions, Andy eliminated any duplicates and created a list of 30 items that covered the top priority features. He then created a closed card sort with these items and asked the whole team to rank cards as ‘Most important’, ‘Very important’, and ‘Important’. He also added the options ‘Not sure what these cards mean’ and ‘No opinion on these cards’.
He provided descriptions to give a short explanation of each feature, and set the survey options so that participants were required to sort all cards. Although this is not compulsory for an internal prioritization sort such as this, particularly if your participants are already motivated to provide feedback, requiring every card to be sorted ensures that you gather as much feedback as possible.
The benefit of using OptimalSort to prioritize product improvements was that it allowed Andy to efficiently tap into the collective knowledge of the whole team. He admits that he could have approached the activity by running a series of more focussed, detailed meetings with key decision makers, but this wouldn’t have allowed him to engage the whole team and may have taken him longer to arrive at similar insights.
Ranking the results of the prioritization sort 🥇
Following an initial review of the prioritization sort results, there were some clear areas of agreement across the team. Topping the lot was implementing the improvements to Reframer that our research has identified as critical. Other clear priorities were increasing the functionality of Chalkmark and streamlining the process of upgrading surveys, so that users can carry this out themselves. Outside of this, the other priorities were not quite as evident. Andy decided to apply a two-tiered approach for ranking the sorted cards by including:
- any card that was placed in the ‘Most important’ group by at least two people,
- and any card whose weighted priority was 20 or greater. (He calculated the weighted priority by multiplying the number of times each card was placed in ‘Most important’, ‘Very important’ and ‘Important’ by four, two and one respectively, then adding the results together.)
By applying these criteria to the sort results, Andy was left with a solid list of 15 priority features to take forward. While there’s still more work to be done in terms of integrating these priorities into the product roadmap, the prioritization sort got Andy to the point where he could start having more useful conversations. In addition, he said the exercise gave him confidence in understanding the areas that need more investigation.
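If you'd like to try a similar scoring approach on your own sort results, here is a minimal sketch of that weighted-priority calculation in Python. The card names and placement counts below are invented for illustration; they are not Andy's actual data.

```python
# A minimal sketch of the weighted-priority shortlist described above.
# Card names and placement counts are invented for illustration.
placements = {
    # card: (count in 'Most important', 'Very important', 'Important')
    "Improve Reframer theme analysis": (6, 3, 2),
    "Increase Chalkmark functionality": (2, 4, 5),
    "Self-service survey upgrades": (1, 2, 3),
}

WEIGHTS = (4, 2, 1)      # multipliers for the three categories
THRESHOLD = 20           # weighted-priority cut-off used in the exercise
MIN_MOST_IMPORTANT = 2   # at least two 'Most important' placements

shortlist = []
for card, counts in placements.items():
    weighted = sum(w * c for w, c in zip(WEIGHTS, counts))
    if counts[0] >= MIN_MOST_IMPORTANT or weighted >= THRESHOLD:
        shortlist.append((card, weighted))

for card, weighted in sorted(shortlist, key=lambda item: item[1], reverse=True):
    print(f"{weighted:>3}  {card}")
```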
Improving the process of prioritizing with card sorting 🃏
Is there anything that we’d do differently when using card sorting for future prioritization exercises? For our next exercise, Andy recommended ensuring each card represented a feature of a similar size. For this initial sort, some cards described smaller, specific features, while others were larger and less well-defined, which meant it could be difficult to compare them side by side in terms of priority.
Thinking back, a ‘Not important’ category could also have been useful. He had initially shied away from doing this, as each card had come from at least one team member’s top five priorities. Andy now recognizes this could have actually encouraged good debate if some team members thought a particular feature was a priority, whereas others ranked it as ‘Not important’.
For the purposes of this sort, he didn’t make use of the card ranking feature which shows the order in which each participant sorted a card within a category. However, he thinks this would be invaluable if he was looking to carry out finer analysis for future prioritization sorts.
Prioritizing with a public roadmap 🛣️
While this initial prioritization sort included indirect user feedback via the Customer Success and User Research teams, it would also be invaluable to run a similar exercise with users themselves. In the longer-term, Andy mentioned he’d love to look into developing a customer-facing roadmap and voting system, similar to those run by companies such as Atlassian.
"It’s a product manager’s dream to have a community of highly engaged users and for them to be able to view and directly feedback on the development pipeline. People then have visibility over the range of requests, can see how others’ receive their requests and can often answer each other’s questions," Andy explains.
Have you ever used OptimalSort for a prioritization exercise? What other methods do you use to prioritize what needs to be done? Have you worked somewhere with a customer-facing product road map and how did this work for you? We’d love to learn about your ideas and experience, so leave us a comment below!

Which comes first: card sorting or tree testing?
“Dear Optimal Workshop, I want to test the structure of a university website (well, certain sections anyway). My gut instinct is that it's pretty 'broken'. Lots of sections feel like they're in the wrong place. I want to test my hypotheses before proposing a new structure. I'm definitely going to do some card sorting, and was planning a mixture of online and offline. My question is about when to bring in tree testing. Should I do this first to test the existing IA? Or is card sorting sufficient? I do intend to tree test my new proposed IA in order to validate it, but is it worth doing it upfront too?" — Matt
Dear Matt,
Ah, the classic chicken or the egg scenario: Which should come first — tree testing or card sorting?
It’s a question that many researchers often ask themselves, but I’m here to help clear the air! You should always use both methods when changing up your information architecture (IA) in order to capture the most information.
Tree testing and card sorting, when used together, can give you fantastic insight into the way your users interact with your site. First of all, I’ll run through some of the benefits of each testing method.
What is card sorting and why should I use it?
Card sorting is a great method to gauge the way in which your users organize the content on your site. It helps you figure out which things go together and which things don’t. There are two main types of card sorting: open and closed.
Closed card sorting involves providing participants with pre-defined categories into which they sort their cards. For example, you might be reorganizing the categories for your online clothing store for women. Your cards would have all the names of your products (e.g., “socks”, “skirts” and “singlets”) and you also provide the categories (e.g., “outerwear”, “tops” and “bottoms”).
Open card sorting involves providing participants with cards and leaving them to organize the content in a way that makes sense to them. It’s the opposite to closed card sorting, in that participants dictate the categories themselves and also label them. This means you’d provide them with the cards only — no categories.
Card sorting, whether open or closed, is very user focused. It involves a lot of thought, input, and evaluation from each participant, helping you to form the structure of your new IA.
What is tree testing and why should I use it?
Tree testing is a fantastic way to determine how your users are navigating your site and how they’re finding information. Your site is organised into a tree structure, sorted into topics and subtopics, and participants are provided with some tasks that they need to perform. The results will show you how your participants performed those tasks, if they were successful or unsuccessful, and which route they took to complete the tasks. This data is extremely useful for creating a new and improved IA.
Tree testing is an activity that requires participants to seek information, which is quite the contrast to card sorting — an activity that requires participants to sort and organize information. Each activity requires users to behave in different ways, so each method will give its own valuable results.
Should you run a card or tree test first?
In this scenario, I’d recommend running a tree test first in order to find out how your existing IA currently performs. You said your gut instinct is telling you that your existing IA is pretty “broken”, but it’s good to have the data that proves this and shows you where your users get lost.
An initial tree test will give you a benchmark to work with — after all, how will you know your shiny, new IA is performing better if you don’t have any stats to compare it with? Your results from your first tree test will also show you which parts of your current IA are the biggest pain points and from there you can work on fixing them. Make sure you keep these tasks on hand — you’ll need them later!
Once your initial tree test is done, you can start your card sort, based on the results from your tree test. Here, I recommend conducting an open card sort so you can understand how your users organize the content in a way that makes sense to them. This will also show you the language your participants use to name categories, which will help you when you’re creating your new IA.
Finally, once your card sort is done you can conduct another tree test on your new, proposed IA. By using the same (or very similar) tasks from your initial tree test, you will be able to see that any changes in the results can be directly attributed to your new and improved IA.
Once your test has concluded, you can compare this data with the performance of the tree test for your original information architecture — hopefully it is much better now!
Web usability guide
There’s no doubt usability is a key element of all great user experiences, but how do we apply and test usability principles on a website? This article looks at usability principles in web design, how to test them, practical tips for success, and our remote testing tool, Treejack.
A definition of usability for websites 🧐📖
Web usability is defined as the extent to which a website can be used to achieve a specific task or goal by a user. It refers to the quality of the user experience and can be broken down into five key usability principles:
- Ease of use: How easy is the website to use? How easily are users able to complete their goals and tasks? How much effort is required from the user?
- Learnability: How easily are users able to complete their goals and tasks the first time they use the website?
- Efficiency: How quickly can users perform tasks while using your website?
- User satisfaction: How satisfied are users with the experience the website provides? Is the experience a pleasant one?
- Impact of errors: Are users making errors when using the website and, if so, how serious are the consequences of those errors? Is the design forgiving enough to make it easy for errors to be corrected?
Why is web usability important? 👀
Aside from the obvious desire to improve the experience for the people who use our websites, web usability is crucial to your website’s survival. If your website is difficult to use, people will simply go somewhere else. In the cases where users do not have the option to go somewhere else, for example government services, poor web usability can lead to serious issues. How do we know if our website is well-designed? We test it with users.
Testing usability: What are the common methods? 🖊️📖✏️📚
There are many ways to evaluate web usability and here are the common methods:
- Moderated usability testing: Moderated usability testing refers to testing that is conducted in-person with a participant. You might do this in a specialised usability testing lab or perhaps in the user’s contextual environment such as their home or place of business. This method allows you to test just about anything from a low fidelity paper prototype all the way up to an interactive high fidelity prototype that closely resembles the end product.
- Moderated remote usability testing: Moderated remote usability testing is very similar to the previous method but with one key difference: the facilitator and the participants are not in the same location. The session is still a moderated two-way conversation, just held over Skype or via a webinar platform instead of in person. This method is particularly useful if you are short on time or unable to travel to where your users are located, e.g., overseas.
- Unmoderated remote usability testing: As the name suggests, unmoderated remote usability testing is conducted without a facilitator present. This is usually done online and provides the flexibility for your participants to complete the activity at a time that suits them. There are several remote testing tools available (including our suite of tools), and once a study is launched these tools take care of themselves, collating the results for you and surfacing key findings using powerful visual aids.
- Guerilla testing: Guerilla testing is a powerful, quick and low cost way of obtaining user feedback on the usability of your website. Usually conducted in public spaces with large amounts of foot traffic, guerilla testing gets its name from its ‘in the wild’ nature. It is a scaled-back usability testing method that usually only involves a few minutes for each test, but it allows you to reach large numbers of people and has very few costs associated with it.
- Heuristic evaluation: A heuristic evaluation is conducted by usability experts to assess a website against recognized usability standards and rules of thumb (heuristics). This method evaluates usability without involving the user and works best when done in conjunction with other usability testing methods, e.g., moderated usability testing, to ensure the voice of the user is heard during the design process.
- Tree testing: Also known as a reverse card sort, tree testing is used to evaluate the findability of information on a website. This method allows you to work backwards through your information architecture and test that thinking against real world scenarios with users.
- First click testing: Research has found that 87% of users who start out on the right path from the very first click will be able to successfully complete their task, while less than half (46%) of those who start down the wrong path will succeed. First click testing is used to evaluate how well a website is supporting users, and also provides insights into which design elements are being noticed and which are being ignored.
- Hallway testing: Hallway testing is a usability testing method used to gain insights from anyone nearby who is unfamiliar with your project. These might be your friends, family or the people who work in another department down the hall from you. Similar to guerilla testing but less ‘wild’. This method works best at picking up issues early in the design process before moving on to testing a more refined product with your intended audience.
Online usability testing tool: Tree testing 🌲🌳🌿
Treejack is a remote usability testing tool that uses tree testing to help you discover exactly where your users are getting lost in the structure of your website. Treejack uses a simplified, text-based version of your website structure, removing distractions such as navigation and visual design and allowing you to test the design at its most basic level.
Like any other tree test, it uses task-based scenarios and includes the opportunity to ask participants pre- and post-study questions that can be used to gain further insights. Treejack is a useful tool for testing those five key usability principles mentioned earlier, with powerful inbuilt features that do most of the heavy lifting for you. Treejack records and presents the following for each task:
- complete details of the pathways followed by each participant
- the time taken to complete each task
- first click data
- the directness of each result
- visibility on when and where participants skipped a task
Participant paths data in our tree testing tool 🛣️
The level of detail recorded on the pathways followed by your participants makes it easy for you to determine the ease of use, learnability, efficiency and impact of errors of your website. The time taken to complete each task and the directness of each result also provide insights in relation to those four principles and user satisfaction can be measured through the results to your pre and post survey questions.
The first click data brings in the added benefits of first click testing, and knowing when and where your participants gave up and moved on can help you identify any issues. Another thing Treejack does well is the way it brings all the data for each task together into one comprehensive overview that tells you everything you need to know at a glance.
Treejack's task overview: all the key information in one place
In addition to this, Treejack also generates comprehensive pathway maps called pietrees.
Each junction in the pathway is a pie chart showing a statistical breakdown of participant activity at that point in the site structure, including how many participants were on the right track, how many were following the incorrect path, and how many turned around and went back. These beautiful diagrams tell the story of your usability testing and are useful for communicating the results to your stakeholders.
Usability testing tips 🪄
Here are seven practical usability testing tips to get you started:
- Test early and often: Usability testing isn’t something that only happens at the end of the project. Start your testing as soon as possible and iterate your design based on findings. There are so many different ways to test an idea with users and you have the flexibility to scale it back to suit your needs.
- Try testing with paper prototypes: Just like there are many usability testing methods, there are also several ways to present your designs to your participant during testing. Fully functioning high-fidelity prototypes are amazing, but they’re not always feasible (especially if you followed the previous tip to test early and often). Paper prototypes work well for usability testing because your participant can draw on them and add their own ideas, and they’re also more likely to feel comfortable providing feedback on work that is less resolved! You could also use paper prototypes to form the basis for collaborative design sessions with your users by showing them your idea and asking them to redesign or design the next page/screen.
- Run a benchmarking round of testing: Test the current state of the design to understand how your users feel about it. This is especially useful if you are planning to redesign an existing product or service and will save you time in the problem identification stages.
- Bring stakeholders and clients into the testing process: Hearing how a product or service is performing direct from a user can be quite a powerful experience for a stakeholder or client. If you are running your usability testing in a lab with an observation room, invite them to attend as observers and also include them in your post session debriefs. They’ll gain feedback straight from the source and you’ll gain an extra pair of eyes and ears in the observation room. If you’re not using a lab or doing a different type of testing, try to find ways to include them as observers in some way. Also, don’t forget to remind them that as observers they will need to stay silent for the entire session beyond introducing themselves so as not to influence the participant - unless you’ve allocated time for questions.
- Make the most of available resources: Given all the usability testing options out there, there’s really no excuse for not testing a design with users. Whether it’s time, money, human resources or all of the above making it difficult for you, there’s always something you can do. Think creatively about ways to engage users in the process and consider combining elements of different methods or scaling down to something like hallway testing or guerilla testing. It is far better to have a less than perfect testing method than to not test at all.
- Never analyse your findings alone: Always analyse your usability testing results as a team or with at least one other person. Making sense of the results can be quite a big task and it is easy to miss or forget key insights. Bring the team together and affinity diagram your observations and notes after each usability testing session to ensure everything is captured. You could also use Reframer to record your observations live during each session, because it does most of the analysis work for you by surfacing common themes and patterns as they emerge. Your whole team can use it too, saving you time.
- Engage your stakeholders by presenting your findings in creative ways: No one reads thirty-page reports anymore. Help your stakeholders and clients feel engaged and included in the process by delivering the usability testing results in an easily digestible format that has a lasting impact. You might create an A4-sized one-page summary, or maybe an A0-sized wall poster to tell everyone in the office the story of your usability testing, or you could create a short video with snippets taken from your usability testing sessions (with participant permission, of course) to communicate your findings. Remember, you’re also providing an experience for your clients and stakeholders, so make sure your results are as usable as what you just tested.
Related reading 🎧💌📖