October 15, 2024

The Power of Prototype Testing Live Training

If you missed our recent live training on Prototype Testing, don’t worry—we’ve got everything you need right here! You can catch up at your convenience, so grab a cup of tea, put your feet up, and enjoy the show.

In the session, we explored the powerful new features of our Prototype Testing tool, offering a step-by-step guide to setting up, running, and analyzing your tests like a seasoned pro. This tool is a game-changer for your design workflow, helping you identify usability issues and gather real user feedback before committing significant resources to development.


Here’s a quick recap of the highlights:

1. Creating a prototype test from scratch using images

We walked through how to create a prototype test from scratch using static images. This method is perfect for early-stage design concepts, where you want to quickly test user flows without a fully interactive prototype.

2. Preparing your Figma prototype for testing

Figma users, we’ve got you covered! We discussed how to prepare your Figma prototype for the smoothest possible testing experience. From setting up interactions to ensuring proper navigation, these tips ensure participants have an intuitive experience during the test. For more detailed instructions, check out our help article.

3. Seamless Figma prototype imports

One of the standout features of the tool is its seamless integration with Figma. We showed how easy it is to import your designs directly from Figma into Optimal, streamlining the setup process. You can bring your working files straight in, and resync when you need to with one click of a button.

4. Understanding usability metrics and analyzing results

We explored how to analyze the usability metrics, and walked through what the results can indicate on click maps and paths. These visual tools allow you to see exactly how participants navigate your design, making it easier to spot pain points, dead ends, or areas of friction. By understanding user behavior, you can rapidly iterate and refine your prototypes for optimal user experience.

Author: Sarah Flutey

Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work, I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (e.g. card sorts, prototype test studies, etc.)? Thank you! Mary"

Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me a lot about what works, what doesn’t and what leaves me refreshing my results screen eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who may not fit the criteria you’re looking for, but you can’t exactly stop them from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive) but I am saying it’s something you can’t control. To help manage this, I like to use the post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions to which the answers might confirm or exclude specific participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location based question after the study. Any participant who says they live somewhere else is getting excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity as remote research limits your capacity to get those — you’re not there with them so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.

Do a practice run

No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to update a new version of an information architecture after a change was made — stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to complete a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I’ve received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting the delivery right that it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet or social media site, where you can share the link and approach content.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I always stick to no more than 10 questions in a remote research study and for card sorts, I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and of course only serve to annoy and frustrate your participants. You need to ensure that you’re balancing your need to gain insights with their time constraints.

As for the accompanying approach content, short and snappy equals happy! In the case of an email, website, social media post, newsletter, carrier pigeon, etc., keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics, such as: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer and of course don’t forget to thank them.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need for wheel reinvention and it usually just needs a slight tweak to suit the specific study. This also helps provide participants with a consistent experience and minimizes confusion, allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works — turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use those tweaks to iterate on my template.

What are your top tips for creating and sending out remote user research studies? Comment below!


Democratizing UX research: empowering cross-functional teams

In today's fast-paced product development landscape, the ability to quickly gather and act on user insights is more critical than ever. While dedicated UX researchers play a crucial role, there's a growing trend towards democratizing UX research – empowering team members across various functions to contribute to and benefit from user insights. Let's explore how this approach can transform your organization's approach to user-centered design.

Benefits of a democratized UXR approach

Democratizing UX research is a transformative approach that empowers organizations to unlock the full potential of user insights. By breaking down traditional barriers and involving a broader range of team members in the research process, companies can foster a culture of user-centricity, accelerate decision-making, and drive innovation. This inclusive strategy not only enhances the depth and breadth of user understanding but also aligns diverse perspectives to create more impactful, user-friendly products and services. Here are a few of the benefits of this movement:

Increased research velocity

By enabling more team members to conduct basic research, organizations can gather insights more frequently and rapidly. This means that instead of waiting for dedicated UX researchers to be available, product managers, designers, or marketers can quickly run simple surveys or usability tests. For example, a product manager could use a user-friendly tool to get quick feedback on a new feature idea, allowing the team to iterate faster. This increased velocity helps organizations stay agile and responsive to user needs in a fast-paced market.

Broader perspective

Cross-functional participation brings diverse viewpoints to research, potentially uncovering insights that might be missed by specialized researchers alone. A developer might ask questions from a technical feasibility standpoint, while a marketer might focus on brand perception. This diversity in approach can lead to richer, more comprehensive insights. For instance, during a user interview, a sales team member might pick up on specific pain points related to competitor products that a UX researcher might not have thought to explore.

Enhanced user-centricity

When more team members engage directly with users, it fosters a culture of user-centricity across the organization. This direct exposure to user feedback and behaviors helps all team members develop empathy for the user. As a result, user needs and preferences become a central consideration in all decision-making processes, not just in UX design. For example, seeing users struggle with a feature firsthand might motivate a developer to champion user-friendly improvements in future sprints.

Improved research adoption

Team members who participate in research are more likely to understand and act on the insights generated. When people are involved in gathering data, they have a deeper understanding of the context and nuances of the findings. This personal investment leads to greater buy-in and increases the likelihood that research insights will be applied in practical ways. For instance, a product manager who conducts user interviews is more likely to prioritize features based on actual user needs rather than assumptions.

Resource optimization

Democratization allows dedicated researchers to focus on more complex, high-value research initiatives. By offloading simpler research tasks to other team members, professional UX researchers can dedicate their expertise to more challenging projects, such as longitudinal studies, complex usability evaluations, or strategic research initiatives. This optimization ensures that specialized skills are applied where they can have the most significant impact.

Our survey revealed that organizations with a more democratized approach to UXR tend to have higher levels of research maturity and integration into product development processes. This correlation suggests that democratization not only increases the quantity of research conducted but also enhances its quality and impact. Organizations that empower cross-functional teams to participate in UXR often develop more sophisticated research practices over time.

For example, these organizations might:

  • Have better-defined research processes and guidelines
  • Integrate user insights more consistently into decision-making at all levels
  • Develop more advanced metrics for measuring the impact of UXR
  • Foster a culture where challenging assumptions with user data is the norm
  • Create more opportunities for collaboration between different departments around user insights

By democratizing UXR, organizations can create a virtuous cycle where increased participation leads to better research practices, which in turn drives more value from UXR activities. This approach helps to embed user-centricity deeply into the organizational culture, leading to better products and services that truly meet user needs.

Strategies for upskilling people who do research (PWDRs)

To successfully democratize UXR, it's crucial to provide proper training and support:

1. UXR basics workshops

Offer regular training sessions on fundamental research methods and best practices. These workshops should cover a range of topics, including:

  • Introduction to user research methodologies (e.g., interviews, surveys, usability testing)
  • Basics of research design and planning
  • Participant recruitment strategies
  • Data analysis techniques
  • Ethical considerations in user research

For example, a monthly "UXR 101" workshop could be organized, where different aspects of UX research are covered in depth. These sessions could be led by experienced researchers and include practical exercises to reinforce learning.

Check out our 101 Guides

2. Mentorship programs

Pair non-researchers with experienced UX researchers for guidance and support. This one-on-one relationship allows for personalized learning and hands-on guidance. 

Mentors can:

  • Provide feedback on research plans
  • Offer advice on challenging research scenarios
  • Share best practices and personal experiences
  • Help mentees navigate the complexities of user research in their specific organizational context

A formal mentorship program could be established with clear goals, regular check-ins, and a defined duration (e.g., 6 months), after which mentees could become mentors themselves, scaling the program.

3. Research playbooks

Develop standardized templates and guidelines for common research activities. These playbooks serve as go-to resources for non-researchers, ensuring consistency and quality across studies. 

They might include:

  • Step-by-step guides for different research methods
  • Templates for research plans, screeners, and report structures
  • Best practices for participant interaction
  • Guidelines for data privacy and ethical considerations
  • Tips for presenting and socializing research findings

For instance, a "Usability Testing Playbook" could walk a product manager through the entire process of planning, conducting, and reporting on a usability test.

Check out Optimal Playbooks

4. Collaborative research

Involve non-researchers in studies led by experienced UX professionals to provide hands-on learning opportunities.

This approach allows non-researchers to:

  • Observe best practices in action
  • Contribute to real research projects
  • Understand the nuances and challenges of UX research
  • Build confidence in their research skills under expert guidance

For example, a designer could assist in a series of user interviews, gradually taking on more responsibility with each session under the researcher's supervision.

5. Continuous learning resources

Provide access to online courses, webinars, and industry events to foster ongoing skill development. This could include:

  • Subscriptions to UX research platforms and tools
  • Access to online course libraries (e.g., Coursera, LinkedIn Learning)
  • Budget for attending UX conferences and workshops
  • Internal knowledge sharing sessions where team members present on recent learnings or projects

An internal UX research resource hub could be created, curating relevant articles, videos, and courses for easy access by team members.

As one UX leader in our study noted, "It's been exciting to see [UXR] evolve as a discipline and see where it is today, and to see the various backgrounds and research specialisms that [user] researchers have today is not something I'd have expected."

This quote highlights the dynamic nature of UX research and the diversity it now encompasses. The field has evolved to welcome practitioners from various backgrounds, each bringing unique perspectives and skills. This diversity enriches the discipline and makes it more adaptable to different organizational contexts.

For example:

  • A former teacher might excel at educational research for EdTech products
  • A psychologist could bring deep insights into user behavior and motivation
  • A data scientist might introduce advanced analytical techniques to UX research

By embracing this diversity and providing comprehensive support for skill development, organizations can create a rich ecosystem of UX research capabilities. This not only democratizes the practice but also elevates its overall quality and impact.

The key to successful democratization lies in balancing accessibility with rigor. While making UX research more widely practiced, it's crucial to maintain high standards and ethical practices. The strategies outlined above help achieve this balance by providing structure, guidance, and ongoing support to those new to UX research, while leveraging the expertise of experienced researchers to ensure quality and depth in the organization's overall research efforts.

Tools and platforms enabling broader participation

The democratization of UXR has been greatly facilitated by comprehensive, user-friendly research platforms like Optimal Workshop. Our all-in-one solution offers a suite of tools designed to empower both seasoned researchers and non-researchers alike:

Surveys

Our intuitive survey creation tool allows anyone in your organization to quickly design and distribute surveys. With customizable templates and an easy-to-use interface, gathering user feedback has never been simpler.

Tree Testing and Card Sorting

These powerful tools simplify the process of conducting information architecture and card sorting studies. Non-researchers can easily set up and run tests to validate navigation structures and content organization.

Qualitative Insights

Our qualitative analysis tool enables team members across your organization to efficiently analyze and synthesize user interview data. With its user-friendly interface, Qualitative Insights makes deriving meaningful insights from qualitative research accessible to researchers and non-researchers alike.

First-click Testing

This easy-to-use first-click testing tool empowers anyone in your team to quickly set up and run tests to evaluate the effectiveness of their designs. First-click Testing simplifies the process of gathering initial user impressions, allowing for rapid iteration and improvement of user interfaces.

These tools, integrated into a single, user-friendly platform, make it possible for non-researchers to conduct basic studies and contribute to the overall research effort without extensive training. The intuitive design of the Optimal Workshop UXR and insights platform ensures that team members across different functions can easily engage in user research activities, from planning and execution to analysis and sharing of insights.

By providing a comprehensive, accessible platform, Optimal Workshop plays a crucial role in democratizing UX research, enabling organizations to build a more user-centric culture and make data-driven decisions at all levels.

Balancing democratization with expertise

While democratizing UXR offers numerous benefits, it's crucial to strike a balance with professional expertise. This balance involves establishing quality control measures, reserving complex research initiatives for trained professionals, maintaining strategic oversight by experienced researchers, providing clear guidelines on research ethics and data privacy, and leveraging dedicated researchers' expertise for insight synthesis. 

Our survey revealed that organizations successfully balancing democratization with expertise tend to see the highest impact from their UXR efforts. The goal of democratization is not to replace dedicated researchers but to expand the organization's capacity for generating user insights. By empowering cross-functional teams to participate in UXR, companies can foster a more user-centric culture, increase the velocity of insight generation, and ultimately create products that better meet user needs. 

As we look to the future, the trend towards democratization is likely to continue, and organizations that can effectively balance broad participation with professional expertise will be best positioned to thrive in an increasingly user-centric business landscape.

Ready to democratize your UX research? Optimal Workshop's platform empowers your entire team to contribute to user insights while maintaining professional quality. Our intuitive tools accelerate research velocity and foster a user-centric culture. 

Start your free trial today and transform your UXR practice. 


Welcome to our latest addition: Prototype testing 🐣

Today, we’re thrilled to announce the arrival of the latest member of the Optimal family: Prototype Testing! This exciting and much-requested new tool allows you to test designs early and often with users to gather fast insights and make confident design decisions, creating more intuitive and user-friendly digital experiences.

Optimal gives you the tools you need to easily build a prototype to test, using images and screens with clickable areas, or you can import a prototype from Figma and get testing. The first iteration of prototype testing is an open beta, and we’ll be working closely with our customers and community to gather feedback and ideas for further improvements in the months to come.

When to use prototype testing 

Prototype testing is a great way to validate design ideas, identify usability issues, and gather feedback from users before investing too heavily in the development of products, websites, and apps. To further inform your insights, it’s a good idea to include sentiment questions or rating scales alongside your tasks.

Early in the design process: Test initial ideas and concepts to gauge user reactions and feelings about your conceptual solutions. 

Iterative design phases: Continuously test and refine prototypes as you make changes and improvements to the designs. 

Before major milestones: Validate designs before key project stages, such as stakeholder reviews or final approvals.

Usability testing: Conduct summative research to assess a design's overall performance and gauge real user feedback to guide future design decisions and enhancements.

How it works 🧑🏽‍💻

No existing prototype? No problem. We've made it easy to create one right within Optimal. Here's how:

  1. Import your visuals

Start by uploading a series of screenshots or images that represent your design flow. These will form the backbone of your prototype.

  2. Create interactive elements

Once your visuals are in place, it's time to bring them to life. Use our intuitive interface to designate clickable areas on each screen. These will act as navigation points for your test participants.

  3. Set up the flow

Connect your screens in a logical sequence, mirroring the user journey you want to test. This creates a seamless, interactive experience for your participants.

  4. Preview and refine

Before launching your study, take a moment to walk through your prototype. Ensure all clickable areas work as intended and the flow feels natural.

The result? A fully functional prototype that looks and feels like a real digital product. Your test participants will be able to navigate through it just as they would a live website or app, providing you with authentic, actionable insights.
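To make the steps above more concrete, here is a minimal sketch of how an image-based prototype could be represented as data: screens built from uploaded images, clickable areas that link screens together, and a click check that distinguishes navigation from misclicks. All class names, fields, and coordinates here are hypothetical illustrations, not Optimal's actual schema.

```python
# Hypothetical sketch: an image-based prototype as screens, hotspots, and a flow.
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    label: str
    x: int; y: int; width: int; height: int   # clickable region in pixels
    target: str                               # id of the screen it navigates to

@dataclass
class Screen:
    screen_id: str
    image: str                                # uploaded screenshot filename
    hotspots: list = field(default_factory=list)

# Step 1: import visuals; steps 2-3: add clickable areas and connect the flow
home = Screen("home", "home.png")
plans = Screen("plans", "plans.png")
checkout = Screen("checkout", "checkout.png")
home.hotspots.append(Hotspot("Pricing", 40, 10, 120, 40, target="plans"))
plans.hotspots.append(Hotspot("Buy now", 300, 500, 160, 48, target="checkout"))

screens = {s.screen_id: s for s in (home, plans, checkout)}

def click(screen, px, py, screens):
    """Return the next screen if (px, py) lands on a hotspot, else None (a misclick)."""
    for h in screen.hotspots:
        if h.x <= px <= h.x + h.width and h.y <= py <= h.y + h.height:
            return screens[h.target]
    return None

nxt = click(home, 60, 30, screens)   # lands inside the "Pricing" hotspot
print(nxt.screen_id)                 # -> plans
```

Step 4 (preview and refine) then amounts to walking this flow end to end and checking every hotspot navigates where you intended.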

By empowering you to build prototypes from scratch, we're removing barriers to early-stage testing. This means you can validate ideas faster, iterate with confidence, and ultimately deliver better digital experiences.

Or…import your prototypes directly from Figma 

There’s a bit of housekeeping you’ll need to do in Figma to give your participants the best testing experience and avoid slowing the prototype’s loading times. You can import a link to your Figma prototype into your study, and it will carry across all the interactions you have set up. You’ll need to make sure your Figma presentation mode is made public in order to share the file with participants. If you make any updates to your Figma file, you can sync the changes in just one click.

Help Article: Find out more about how to set up your Figma file for testing

How to create tasks 🧰

When you set up your study, you’ll create tasks for participants to complete. 

There are two different ways to build tasks in your prototype tests. You can set a correct destination by adding a start screen and a correct destination screen. That way, you can watch how participants navigate your design to find their way to the correct destination. Another option is to set a correct pathway and evaluate how participants navigate a product, app, or website based on the pathway sequence you set. You can add as many pathways or destinations as you like. 

Adding post-task questions is a great way to help gather qualitative feedback on the user's experience, capturing their thoughts, feelings, and perceptions.

Help Article: Find out how to analyze your results

Prototype testing analysis and metrics 📊

Prototype testing offers a variety of analysis options and metrics to evaluate the effectiveness and usability of your design. By using these analysis options and metrics, you can get comprehensive insights into your prototype's performance, identify areas for improvement, and make informed design decisions:

Task results 

The task results provide a deep analysis at a task level, including the success score, directness score, time taken, misclicks, and the breakdown of the task's success and failure. They provide great insight into the usability of your design to achieve a task. 

  • Success score tells you the total percentage of participants who reached the correct destination or followed the correct pathway you defined for the task. It’s a good indicator of a prototype's usability. 
  • Directness score is the percentage of completed results that were direct, i.e. the total completed results minus the ‘indirect’ results.
  • A path is ‘indirect’ when a participant backtracks, viewing the same page multiple times, or when they reach the correct destination without following the correct pathway.
  • Time taken is how long it took a participant to complete your task, and can be a good indicator of how easy or difficult it was to complete. 
  • Misclicks measure the total number of clicks on areas of your prototype that weren’t clickable, i.e. clicks that didn’t result in a page change.
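As a rough illustration of how these task-level scores relate to raw participant data, here is a small sketch that computes them from recorded paths. The data shapes, field names, and the exact denominator used for the directness score are assumptions for illustration, not Optimal's actual implementation.

```python
# Hypothetical sketch of the task metrics above; data shapes are illustrative.
correct_path = ["home", "plans", "checkout"]

participants = [
    {"path": ["home", "plans", "checkout"], "misclicks": 0, "seconds": 14},
    # backtracked to "home", so this result is indirect:
    {"path": ["home", "plans", "home", "plans", "checkout"], "misclicks": 2, "seconds": 31},
    # stopped before reaching the destination:
    {"path": ["home", "plans"], "misclicks": 1, "seconds": 9},
]

def reached_destination(path):
    return path[-1] == correct_path[-1]

def is_direct(path):
    # Direct = followed exactly the correct pathway, no repeats or detours
    return path == correct_path

success_score = 100 * sum(reached_destination(p["path"]) for p in participants) / len(participants)
directness_score = 100 * sum(is_direct(p["path"]) for p in participants) / len(participants)
avg_time = sum(p["seconds"] for p in participants) / len(participants)
total_misclicks = sum(p["misclicks"] for p in participants)

print(round(success_score), round(directness_score))   # -> 67 33
```

Two of the three sample participants reach the destination, but only one does so directly, which is exactly the gap between the success and directness scores.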

Clickmaps

Clickmaps provide an aggregate view of user interactions with prototypes, visualizing click patterns to reveal how users navigate and locate information. They display hits and misses on designated clickable areas, average task completion times, and heatmaps showing where users believed the next steps to be. Filters for first, second, and third page visits allow analysis of user behavior over time, including how they adapt when backtracking. This comprehensive data helps designers understand user navigation patterns and improve prototype usability.
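Under the hood, a clickmap of this kind is essentially an aggregation of raw click coordinates into regions. The following sketch shows one simple way to bucket clicks into a coarse grid and find the hottest cell; the cell size and coordinates are hypothetical, and this is not a description of how Optimal builds its heatmaps.

```python
# Hypothetical sketch: aggregating raw clicks into a coarse clickmap grid.
from collections import Counter

CELL = 50  # bucket clicks into 50x50 px cells

clicks = [(62, 30), (58, 35), (410, 520), (61, 28), (300, 505)]

heat = Counter((x // CELL, y // CELL) for x, y in clicks)
hottest_cell, hits = heat.most_common(1)[0]
print(hottest_cell, hits)   # the (1, 0) cell collects the three clicks near (60, 30)
```

Filtering the click list by visit number (first, second, third page view) before aggregating would give the per-visit views described above.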

Participant paths 

The Paths tab in Optimal provides a powerful visualization to understand and identify common navigation patterns and potential obstacles participants encounter while completing tasks. You can include thumbnails of your screens to enhance your analysis, making it easier to pinpoint where users may face difficulties or where common paths occurred.

Coming soon to prototyping 🔮

Later this year, we’re running a closed beta for video recording with prototype testing. This feature captures behaviors and insights not evident in click data alone. The browser-based recording requires no plugins, simplifying setup. Consent for recording is obtained at the start of the testing process and can be customized to align with your organization's policies. This new feature will provide deeper insights into user experience and prototype usability.

These enhancements to prototype testing offer a comprehensive toolkit for user experience analysis. By combining quantitative click data with qualitative video insights, designers and researchers can gain a more nuanced understanding of user behavior, leading to more informed decisions and improved product designs.

Start prototype testing today

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.