July 12, 2023
3 min

Get a head start on your research with templates

We’re excited to announce that our first six project templates are now available in Optimal Workshop! We understand that not everyone knows where to start with customer research, so these ready-made templates, created with UX industry experts, give you the confidence to quickly launch studies and get the results you need to make data-driven decisions.

Templates aren’t only a great solution for people who need guidance on which study type to use and when; they also give you the tools to develop your IA thinking, compare the performance of studies over time, and follow detailed project plans that guide you through your information architecture work.

How do templates work?

On the dashboard, you’ll see a new button called Browse Templates. From the templates menu, you can select a template that matches your use case, e.g. ‘I need to organize content into categories’. The templates are a helpful starting point for you to adapt to suit your research goals.

Let’s take a look at some of our favourite project templates. 

Organize content into categories

This template helps you design the best categories to organize your information based on how your users think. It’s useful for designing your product, website, or knowledge base experience, as well as re-evaluating any part of it. In this template, we first conduct an open card sort, then use that information to design a navigation structure to test with end users.

1. First up, run a card sort with OptimalSort

During this study, users organize all of the information presented to them into categories they create themselves using an open card sort. This method is great for generating category ideas based on how users process the information, allowing you to design the experience in a more user-focused way. To find out more about setting up your card sort, refer to our card sorting 101 guide.

2. Test your navigation structure with Treejack

Based on the groupings produced by the card sort, you can now generate a hierarchy for your users to test using Treejack. Users search for the information you’ve categorized and represented as a hierarchy, which is valuable because it confirms whether information placed within your hierarchy is findable and understandable.

To learn more about tree testing, refer to our tree testing 101 guide.


Evaluate an existing navigation experience 

Regularly evaluating an existing navigation experience is a good way to monitor the health and performance of your website and product. This template is useful both for redesigning your experience and for re-evaluating part of it: it helps you design ideal categories to organize your information based on how your target users think, improving findability and task completion.

1. Start by identifying your top tasks using Reframer

Using Reframer, conduct interviews with various stakeholders in your business to identify and theme the tasks your organization believes are most important within your existing environment. This is a solid first step towards building a list of top tasks for testing. Reframer lets you easily visualize and group your observations by proximity using the affinity map.

2. Survey users to understand their top tasks

Next, survey users with our survey tool Questions to confirm their top tasks and identify any existing issues. This will provide insight into what users believe their top tasks are and whether anything is getting in the way of achieving them. This step helps to ensure all design work is informed by up-to-date user tasks.

3. Design and test your current experience in Treejack  

Using the prioritized top tasks, create a tree test in Treejack to test your navigation experience with your users, for example, “How would you open a home loan?” or “How would you upgrade your broadband plan?” This will show you how your users navigate your website to achieve the most business-critical tasks in your organization. It’s a valuable step that helps to identify information and design problems to solve early in the design process.

More templates from our community

This is just the beginning for templates in Optimal Workshop, and as we continue to build up our collection, we’d love your input! If there are templates you regularly use and think the community could benefit from, let us know at hello@optimalworkshop.com.

Author: Sarah Flutey

Related articles


Measuring the impact of UXR: beyond CSAT and NPS

In the rapidly evolving world of user experience research (UXR), demonstrating value and impact has become more crucial than ever. While traditional metrics like Customer Satisfaction (CSAT) scores and Net Promoter Scores (NPS) have long been the go-to measures for UX professionals, they often fall short in capturing the full scope and depth of UXR's impact. As organizations increasingly recognize the strategic importance of user-centered design, it's time to explore more comprehensive and nuanced approaches to measuring UXR's contribution.

Limitations of traditional metrics

CSAT and NPS, while valuable, have significant limitations when it comes to measuring UXR impact. These metrics provide a snapshot of user sentiment but fail to capture the direct influence of research insights on product decisions, business outcomes, or long-term user behavior. Moreover, they can be influenced by factors outside of UXR's control, such as marketing campaigns or competitor actions, making it challenging to isolate the specific impact of research efforts.

Another limitation is the lack of context these metrics provide. They don't offer insights into why users feel a certain way or how specific research-driven improvements contributed to their satisfaction. This absence of depth can lead to misinterpretation of data and missed opportunities for meaningful improvements.

Alternative measurement approaches

To overcome these limitations, UX researchers are exploring alternative approaches to measuring impact. One promising method is the use of proxy measures that more directly tie to research activities. For example, tracking the number of research-driven product improvements implemented or measuring the reduction in customer support tickets related to usability issues can provide more tangible evidence of UXR's impact.

Another approach gaining traction is the integration of qualitative data into impact measurement. By combining quantitative metrics with rich, contextual insights from user interviews and observational studies, researchers can paint a more comprehensive picture of how their work influences user behavior and product success.

Linking UXR to business outcomes

Perhaps the most powerful way to demonstrate UXR's value is by directly connecting research insights to key business outcomes. This requires a deep understanding of organizational goals and close collaboration with stakeholders across functions. For instance, if a key business objective is to increase user retention, UX researchers can focus on identifying drivers of user loyalty and track how research-driven improvements impact retention rates over time.

Risk reduction is another critical area where UXR can demonstrate significant value. By validating product concepts and designs before launch, researchers can help organizations avoid costly mistakes and reputational damage. Tracking the number of potential issues identified and resolved through research can provide a tangible measure of this impact.

Case studies of successful impact measurement

While standardized metrics for UXR impact remain elusive, some organizations have successfully implemented innovative measurement approaches. For example, one technology company developed a "research influence score" that tracks how often research insights are cited in product decision-making processes and the subsequent impact on key performance indicators.
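
To make the idea concrete, here’s a minimal sketch of how such a score could be computed, assuming a simple decision log where each product decision records the research insights it cites. The log structure and the score definition are illustrative assumptions, not details from the case study.

```python
# Hypothetical "research influence score": the share of product decisions in a
# period that cite at least one research insight. The decision-log structure
# below is an illustrative assumption, not a documented format.

decisions = [
    {"id": "PD-101", "cited_insights": ["UXR-12", "UXR-15"]},
    {"id": "PD-102", "cited_insights": []},
    {"id": "PD-103", "cited_insights": ["UXR-15"]},
]

influenced = sum(1 for d in decisions if d["cited_insights"])
score = influenced / len(decisions)
print(f"Research influence score: {score:.0%}")  # 2 of 3 decisions -> 67%
```

Tracked period over period, a score like this can show whether research is becoming more or less embedded in decision-making, and can then be read alongside the KPIs those decisions affect.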

Another case study involves a financial services firm that implemented a "research ROI calculator." This tool estimates the potential cost savings and revenue increases associated with research-driven improvements, providing a clear financial justification for UXR investments.
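
As a sketch of what such a calculator might compute, the function below estimates ROI from avoided post-launch rework plus any attributed revenue lift. The inputs and formula are illustrative assumptions; the firm’s actual model isn’t public.

```python
# Hypothetical research ROI calculation: benefits are avoided post-launch
# rework plus any revenue lift attributed to research-driven improvements.

def research_roi(study_cost: float,
                 issues_prevented: int,
                 avg_post_launch_fix_cost: float,
                 attributed_revenue_lift: float = 0.0) -> float:
    """Return ROI as a ratio: (benefits - cost) / cost."""
    avoided_rework = issues_prevented * avg_post_launch_fix_cost
    benefits = avoided_rework + attributed_revenue_lift
    return (benefits - study_cost) / study_cost

# Example: a $15k study that catches 6 issues costing ~$8k each to fix after
# launch returns (48k - 15k) / 15k = 2.2x.
print(f"ROI: {research_roi(15_000, 6, 8_000):.1f}x")
```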

These case studies highlight the importance of tailoring measurement approaches to the specific context and goals of each organization. By thinking creatively and collaborating closely with stakeholders, UX researchers can develop meaningful ways to quantify their impact and demonstrate the strategic value of their work.

As the field of UXR continues to evolve, so too must our approaches to measuring its impact. By moving beyond traditional metrics and embracing more holistic and business-aligned measurement strategies, we can ensure that the true value of user research is recognized and leveraged to drive organizational success. The future of UXR lies not just in conducting great research, but in effectively communicating its impact and cementing its role as a critical strategic function within modern organizations.

Maximize UXR ROI with Optimal 

While innovative measurement approaches are crucial, having the right tools to conduct and analyze research efficiently is equally important for maximizing UXR's return on investment. This is where the Optimal Workshop platform comes in, offering a comprehensive solution to streamline your UXR efforts and amplify their impact.

The Optimal Platform provides a suite of user-friendly tools designed to support every stage of the research process, from participant recruitment to data analysis and insight sharing. By centralizing your research activities on a single platform, you can significantly reduce the time and resources spent on administrative tasks, allowing your team to focus on generating valuable insights.

Key benefits of using Optimal for improving UXR ROI include:

  • Faster research cycles: With automated participant management and data collection tools, you can complete studies more quickly, enabling faster iteration and decision-making.

  • Enhanced collaboration: The platform's sharing features make it easy to involve stakeholders throughout the research process, increasing buy-in and ensuring insights are actioned promptly.

  • Robust analytics: Advanced data visualization and analysis tools help you uncover deeper insights and communicate them more effectively to decision-makers.

  • Scalable research: The platform's user-friendly interface enables non-researchers to conduct basic studies, democratizing research across your organization and increasing its overall impact.

  • Comprehensive reporting: Generate professional, insightful reports that clearly demonstrate the value of your research to stakeholders at all levels.

By leveraging Optimal Workshop, you’re not just improving your research processes; you’re positioning UXR as a strategic driver of business success. Our platform’s capabilities align with the measurement approaches discussed earlier, enabling you to track research influence, calculate ROI, and demonstrate tangible impact on key business outcomes.

Ready to transform how you measure and communicate the impact of your UX research? Sign up for a free trial of the Optimal platform today and experience firsthand how it can drive your UXR efforts to new heights of efficiency and effectiveness. 


How to interpret your card sort results Part 2: closed card sorts and next steps

In Part 1 of this series we looked at how to interpret results from open and hybrid card sorts, and now in Part 2, we’re going to talk about closed card sorts. In closed card sorts, participants are asked to sort the cards into predetermined categories and are not allowed to create any of their own. You might use this approach when you are constrained by specific category names, or as a quick checkup before launching a new or newly redesigned website.

In Part 1, we also discussed the two different, but complementary, types of analysis that are generally used together for interpreting card sort results: exploratory and statistical. Exploratory analysis is intuitive and creative, while statistical analysis is all about the numbers. Check out Part 1 for a refresher, or learn more about exploratory and statistical analysis in Donna Spencer’s book.

Getting started

Closed card sort analysis is generally much quicker and easier than open and hybrid card sorts because there are no participant-created category names to analyze; it’s really just about where the cards were placed. There are some similarities in how you might start to approach your analysis process, but overall there’s a lot less information to take in, and there isn’t much in the way of drilling down into the details like we did in Part 1.

Just like with an open card sort, kick off your analysis process by taking an overall look at the results as a whole. Quickly cast your eye over each individual card sort and just take it all in. Look for common patterns in how the cards have been sorted. Does anything jump out as surprising? Are there similarities or differences between participant sorts?

If you’re redesigning an existing information architecture (IA), how do your results compare to the current state? If this is a final checkup before launching a live website, how do these results compare to what you learned during your previous research studies?

If you ran your card sort using the information architecture tool OptimalSort, head straight to the Overview and Participants Table presented in the results section of the tool. If you ran a moderated card sort using OptimalSort’s printed cards, you’ve probably been scanning them in after each completed session, but now is a good time to double-check you got them all. And if you didn’t know about this handy feature of OptimalSort, it’s something to keep in mind for next time!

The Participants Table shows a breakdown of your card sorting data by individual participant. Start by reviewing each individual card sort one by one by clicking on the arrow in the far-left column next to the participant numbers. From here you can easily flick back and forth between participants without needing to close the modal window. Don’t spend too much time on this — you’re just trying to get a general impression of how the cards were sorted into your predetermined categories. Keep an eye out for any card sorts you might like to exclude from the results, for example participants who have lumped everything into one group and haven’t actually sorted the cards.

Don’t worry: excluding or including participants isn’t permanent and can be toggled on or off at any time.

Once you’re happy with the individual card sorts that will and won’t be included in your results visualizations, it’s time to take a look at the Results Matrix in OptimalSort. The Results Matrix shows the number of times each card was sorted into each of your predetermined categories: the higher the number, the darker the shade of blue (see below).

[Image: Results Matrix in OptimalSort]

This table enables you to quickly and easily see how the cards were sorted and gauge the highest and lowest levels of agreement among your participants. This will tell you if you’re on the right track, or highlight opportunities for further refinement of your categories.

If we take a closer look (see below) at this example closed card sort, conducted on the Dewey Decimal Classification system commonly used in libraries, we can see that The Interpretation of Dreams by Sigmund Freud was sorted into ‘Philosophy and psychology’ 38 times in a study completed by 51 participants.

[Image: Results Matrix in OptimalSort, zoomed in with hover]
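
The percentages behind those cells are simple proportions. Here’s a minimal sketch, assuming you reshape a card sort export into a card-to-category count mapping; only the 38 comes from the example above, and the other counts are made up so the column totals the study’s 51 participants.

```python
# Placement counts for one card, reshaped into a {card: {category: count}}
# mapping. Only the 38 is from the example; the other counts are invented so
# the total matches the study's 51 participants.

placements = {
    "The Interpretation of Dreams": {
        "Philosophy and psychology": 38,
        "Social sciences": 7,
        "Religion": 6,
    },
}

def agreement(card: str, category: str) -> float:
    """Share of participants who placed `card` into `category`."""
    counts = placements[card]
    return counts[category] / sum(counts.values())

# 38 of 51 placements -> about 75% agreement
print(f"{agreement('The Interpretation of Dreams', 'Philosophy and psychology'):.0%}")
```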

In the real world, that is exactly where that content lives, and this is useful to know because it shows that the current state is supporting user expectations around findability reasonably well. Note: this particular example study used image-based cards instead of word-label cards, so the description that appears in both the grey box and down the left-hand side of the matrix is for reference purposes only and was hidden from participants.

Sometimes you may come across cards that are popular in multiple categories. In our example study, How to Win Friends and Influence People by Dale Carnegie is popular in two categories: ‘Philosophy & psychology’ and ‘Social sciences’, with 22 and 21 placements respectively. The remaining placements are scattered across a further five categories, although in much smaller numbers.

[Image: Results Matrix showing cards popular in multiple categories]

When this happens, it’s up to you to determine what your number thresholds are. If it’s a tie, or really close like it is in this case, you might review the results against any previous research studies to see if anything has changed or if this is something that comes up often. It might be a new category that you’ve just introduced, it might be an issue that hasn’t been resolved yet, or it might just be limited to this one study. If you’re really not sure, it’s a good idea to run some in-person card sorts as well so you can ask questions and gain clarification around why your participants felt a card belonged in a particular category. If you’ve already done that, great! Time to review those notes and recordings!

You may also find yourself in a situation where no category is any more popular than the others for a particular card. This means there’s not much agreement among your participants about where that card actually belongs. In our example closed card sort study, the World Book Encyclopedia was placed into 9 of the 10 categories. While it was placed in ‘History & geography’ 18 times, that’s still only 35% of the total placements for that card; it’s hardly conclusive.

[Image: Results Matrix showing a card with a lack of agreement]
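
If you’d rather apply a consistent threshold across all your cards than eyeball the matrix, a small script can flag the ambiguous ones. A sketch, assuming the same count mapping as above and an illustrative 50% cut-off:

```python
# Flag cards whose most popular category captures less than `threshold` of
# total placements - candidates for follow-up, such as in-person card sorts.

def flag_low_agreement(placements: dict, threshold: float = 0.5) -> list:
    flagged = []
    for card, counts in placements.items():
        total = sum(counts.values())
        top_category, top_count = max(counts.items(), key=lambda kv: kv[1])
        if top_count / total < threshold:
            flagged.append((card, top_category, round(top_count / total, 2)))
    return flagged

# The World Book Encyclopedia's best category ('History & geography') captured
# only ~35% of its placements, so a 50% threshold would flag it for review.
```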

Sometimes this happens when the card label or image is quite general and could logically belong in many of the categories. In this case, an encyclopedia could easily fit into any of those categories, and I suspect this happened because people may not be aware that encyclopedias make up a very large part of the category on the far left of the above matrix: ‘Computer science, information & general works’. You may also see this happening when a card is ambiguous and people have to guess where it might belong. Again, if in doubt (and if you haven’t already), run some in-person card sorts so you can ask questions and get to the bottom of it!

After reviewing the Results Matrix in OptimalSort, visit the Popular Placements Matrix to see which cards were most popular for each of your categories based on how your participants sorted them (see the two images below).

[Image: Popular Placements Matrix in OptimalSort, top half of the diagram]

[Image: Popular Placements Matrix in OptimalSort, bottom half of the diagram]

The diagram shades the most popular placements for each category in blue, making it very easy to spot what belongs where in the eyes of your participants. It’s useful for quickly identifying clusters, and it also highlights the categories that didn’t get a lot of card sorting love. In our example study (the two images above), we can see that ‘Technology’ wasn’t a popular card category choice, potentially indicating ambiguity around that particular category name. As someone familiar with the Dewey Decimal Classification system, I know that ‘Technology’ is a bit of a tricky one because it contains a wide variety of content, including topics on medicine and food science; sometimes it will appear as ‘Technology & applied sciences’. These results appear to support the case for exploring that alternative further!

Where to from here?

Now that we’ve looked at how to interpret your open, hybrid and closed card sorts, here are some next steps to help you turn those insights into action!

Once you’ve analyzed your card sort results, it’s time to feed those insights into your design process and create your taxonomy, which goes hand in hand with your information architecture. You can build your taxonomy out in Post-it notes before popping it into a spreadsheet for review. This is also a great time to identify any alternate labelling and placement options that came out of your card sorting process for further testing.

From here, you might move into tree testing your new IA, or you might run another card sort focusing on a specific area of your website. You can learn more about card sorting in general via our 101 guide.

When interpreting card sort results, don’t forget to have fun! It’s easy to get overwhelmed and bogged down in the results, but don’t lose sight of the magic that is uncovering user insights.

I’m going to leave you with this quote from Donna Spencer, which summarizes the essence of card sort analysis quite nicely: “Remember that you are the one who is doing the thinking, not the technique... you are the one who puts it all together into a great solution. Follow your instincts, take some risks, and try new approaches.”

Further reading

  • Card Sorting 101 – Learn about the differences between open, closed and hybrid card sorts, and how to run your own using OptimalSort.


Usability Experts Unite: The Power of Heuristic Evaluation in User Interface Design

Usability experts play an essential role in the user interface design process by evaluating the usability of digital products from a very important perspective: the users! Usability experts use various techniques such as heuristic evaluation, usability testing, and user research to gather data on how users interact with digital products and services. This data helps to identify design flaws and areas for improvement, leading to the development of user-friendly and efficient products.

Heuristic evaluation is a usability research technique used to evaluate the user interface design of a digital product against a set of ‘heuristics’ or ‘usability principles’. These heuristics are derived from established principles of user experience design, attributed to the landmark article “Improving a Human-Computer Dialogue” published by web usability pioneers Jakob Nielsen and Rolf Molich in 1990. The principles focus on the experiential aspects of a user interface.

In this article, we’ll discuss what heuristic evaluation is and how usability experts use the principles to create exceptional design. We’ll also discuss how usability testing works hand-in-hand with heuristic evaluation, and how minimalist design and user control impact user experience. So, let’s dive in!

Understanding Heuristic Evaluation

Heuristic evaluation helps usability experts examine interface design against tried-and-tested rules of thumb. To conduct a heuristic evaluation, usability experts typically work through the interface of the digital product and identify any issues or areas for improvement based on these broad rules of thumb, of which there are ten. Between them, they cover the key areas of design that impact user experience. Not bad for an article published over 30 years ago!

The ten principles are:

  1. Error prevention: Well-functioning error messages are good, but instead of messages, can these problems be removed in the first place? Remove the opportunity for slips and mistakes to occur.
  2. Consistency and standards: Language, terms, and actions used should be consistent to not cause any confusion.
  3. Control and freedom for users: Give your users the freedom and control to undo/redo actions and exit out of situations if needed.
  4. System status visibility: Let your users know what’s going on with the site. Is the page they’re on currently loading, or has it finished loading?
  5. Aesthetic and minimalist design: Cut out unnecessary information and clutter to enhance visibility. Keep things in a minimalist style.
  6. Help and documentation: Ensure that information is easy to find for users, isn’t too large and is focused on your users’ tasks.
  7. Recognition, not recall: Make sure that your users don’t have to rely on their memories. Instead, make options, actions and objects visible. Provide instructions for use too.
  8. Provide a match between the system and the real world: Does the system speak the same language and use the same terms as your users? If you use a lot of jargon, make sure that all users can understand by providing an explanation or using other terms that are familiar to them. Also ensure that all your information appears in a logical and natural order.
  9. Flexibility: Is your interface easy to use, and is it flexible for users? Ensure your system can cater to users of all types, from experts to novices.
  10. Help users to recognize, diagnose and recover from errors: Your users should not feel frustrated by any error messages they see. Instead, express errors in plain, jargon-free language they can understand. Make sure the problem is clearly stated and offer a solution for how to fix it.

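When you work through an interface against these ten principles, it helps to capture each finding in a consistent structure so issues can be prioritized. Here’s a minimal sketch of one way to record findings with a severity rating; the Finding structure and the 0–4 severity scale are illustrative assumptions, not a prescribed format.

```python
# Recording heuristic evaluation findings with a severity rating so the
# highest-impact problems surface first. Structure and scale are illustrative.

from dataclasses import dataclass

HEURISTICS = [
    "Error prevention",
    "Consistency and standards",
    "Control and freedom for users",
    "System status visibility",
    "Aesthetic and minimalist design",
    "Help and documentation",
    "Recognition, not recall",
    "Match between the system and the real world",
    "Flexibility",
    "Help users recognize, diagnose and recover from errors",
]

@dataclass
class Finding:
    screen: str
    heuristic: str   # one of HEURISTICS
    severity: int    # 0 = cosmetic ... 4 = usability catastrophe
    note: str

findings = [
    Finding("Checkout", "Error prevention", 3,
            "No confirmation step before deleting a saved payment method."),
    Finding("Search", "System status visibility", 2,
            "No loading indicator while results are fetched."),
]

# Review the worst issues first
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[severity {f.severity}] {f.screen}: {f.heuristic}: {f.note}")
```
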
Heuristic evaluation is a cost-effective way to identify usability issues early in the design process (although it can be performed at any stage), leading to faster and more efficient design iterations. It also provides a structured approach to evaluating user interfaces, making it easier to identify usability issues. By providing valuable feedback on overall usability, heuristic evaluation helps to improve user satisfaction and retention.

The Role of Usability Experts in Heuristic Evaluation

Usability experts play a central role in the heuristic evaluation process by providing feedback on the usability of a digital product, identifying any issues or areas for improvement, and suggesting changes to optimize user experience.

One of the primary goals of usability experts during the heuristic evaluation process is to identify and prevent errors in user interface design. They achieve this by applying the principles of error prevention, such as providing clear instructions and warnings, minimizing the cognitive load on users, and reducing the chances of making errors in the first place. For example, they may suggest adding confirmation dialogs for critical actions, ensuring that error messages are clear and concise, and making the navigation intuitive and straightforward.

Usability experts also use user testing to inform their heuristic evaluation. User testing involves gathering data from users interacting with the product or service and observing their behavior and feedback. This data helps to validate the design decisions made during the heuristic evaluation and identify additional usability issues that may have been missed. For example, usability experts may conduct A/B testing to compare the effectiveness of different design variations, gather feedback from user surveys, and conduct user interviews to gain insights into users' needs and preferences.

Conducting user testing with participants who represent actual end users as closely as possible ensures that the product is optimized for its target audience. Check out our tool Reframer, which helps usability experts collaborate and record research observations in one central database.

Minimalist Design and User Control in Heuristic Evaluation

Minimalist design and user control are two key principles that usability experts focus on during the heuristic evaluation process. A minimalist design is one that is clean, simple, and focuses on the essentials, while user control refers to the extent to which users can control their interactions with the product or service.

Minimalist design is important because it allows users to focus on the content and tasks at hand without being distracted by unnecessary elements or clutter. Usability experts evaluate the level of minimalist design in a user interface by assessing the visual hierarchy, the use of white space, the clarity of the content, and the consistency of the design elements. Information architecture (the system and structure you use to organize and label content) has a massive impact here, along with the content itself being concise and meaningful.

Incorporating minimalist design principles into heuristic evaluation can improve the overall user experience by simplifying the design, reducing cognitive load, and making it easier for users to find what they need. Usability experts may incorporate minimalist design by simplifying the navigation and site structure, reducing the number of design elements, and removing any unnecessary content (check out our tool Treejack to conduct site structure, navigation, and categorization research). Consistent color schemes and typography can also help to create a cohesive and unified design.

User control is also critical in a user interface design because it gives users the power to decide how they interact with the product or service. Usability experts evaluate the level of user control by looking at the design of the navigation, the placement of buttons and prompts, the feedback given to users, and the ability to undo actions. Again, usability testing plays an important role in heuristic evaluation by allowing researchers to see how users respond to the level of control provided, and gather feedback on any potential hiccups or roadblocks.

Usability Testing and Heuristic Evaluation

Usability testing and heuristic evaluation are both important components of the user-centered design process, and they complement each other in different ways.

Usability testing involves gathering feedback from users as they interact with a digital product. This feedback can provide valuable insights into how users perceive and use the user interface design, identify any usability issues, and help validate design decisions. Usability testing can be conducted in different forms, such as moderated or unmoderated, remote or in-person, and task-based or exploratory. Check out our usability testing 101 article to learn more.

On the other hand, heuristic evaluation is a method in which usability experts evaluate a product against a set of usability principles. While heuristic evaluation is a useful method to quickly identify usability issues and areas for improvement, it does not involve direct feedback from users.

Usability testing can be used to validate heuristic evaluation findings by providing evidence of how users interact with the product or service. For example, if a usability expert identifies a potential usability issue related to the navigation of a website during heuristic evaluation, usability testing can be used to see if users actually have difficulty finding what they need on the website. In this way, usability testing provides a reality check to the heuristic evaluation and helps ensure that the findings are grounded in actual user behavior.

Usability testing and heuristic evaluation work together in the design process by informing and validating each other. For example, a designer may conduct heuristic evaluation to identify potential usability issues and then use the insights gained to design a new iteration of the product or service. The designer can then use usability testing to validate that the new design has successfully addressed the identified usability issues and improved the user experience. This iterative process of designing, testing, and refining based on feedback from both heuristic evaluation and usability testing leads to a user-centered design that is more likely to meet user needs and expectations.

Conclusion

Heuristic evaluation is a powerful usability research technique that usability experts use to evaluate digital product interfaces based on a set of established principles of user experience design. After all these years, the ten principles of heuristic evaluation still cover the key areas of design that impact user experience, making it easier to identify usability issues early in the design process, leading to faster and more efficient design iterations. Usability experts play a critical role in the heuristic evaluation process by identifying design flaws and areas for improvement, using user testing to validate design decisions, and ensuring that the product is optimized for its intended users.

Minimalist design and user control are two key principles that usability experts focus on during the heuristic evaluation process. A minimalist design is clean, simple, and focuses on the essentials, while user control gives users the freedom and control to undo/redo actions and exit out of situations if needed. By following these principles, usability experts can create an exceptional design that enhances visibility, reduces cognitive load, and provides a positive user experience. 

Ultimately, heuristic evaluation is a cost-effective way to identify usability issues at any point in the design process, leading to faster and more efficient design iterations, and improving user satisfaction and retention. How many of the ten heuristic design principles does your digital product satisfy? 
