August 7, 2024
5 min

Welcome to our latest addition: Prototype testing 🐣

Today, we’re thrilled to announce the arrival of the latest member of the Optimal family: Prototype Testing! This exciting and much-requested new tool lets you test designs early and often with users, gather fast insights, and make confident design decisions to create more intuitive and user-friendly digital experiences.

Optimal gives you the tools you need to easily build a prototype to test: upload images and screens, create clickable areas, and start testing, or import a prototype from Figma and get going right away. The first iteration of prototype testing is an open beta, and we’ll be working closely with our customers and community to gather feedback and ideas for further improvements in the months to come.

When to use prototype testing 

Prototype testing is a great way to validate design ideas, identify usability issues, and gather feedback from users before investing too heavily in the development of products, websites, and apps. To further inform your insights, it’s a good idea to include sentiment questions or rating scales alongside your tasks.

Early in the design process: Test initial ideas and concepts to gauge user reactions and feelings about your conceptual solutions. 

Iterative design phases: Continuously test and refine prototypes as you make changes and improvements to the designs. 

Before major milestones: Validate designs before key project stages, such as stakeholder reviews or final approvals.

Usability testing: Conduct summative research to assess a design's overall performance and gauge real user feedback to guide future design decisions and enhancements.

How it works 🧑🏽‍💻

No existing prototype? No problem. We've made it easy to create one right within Optimal. Here's how:

  1. Import your visuals

Start by uploading a series of screenshots or images that represent your design flow. These will form the backbone of your prototype.

  2. Create interactive elements

Once your visuals are in place, it's time to bring them to life. Use our intuitive interface to designate clickable areas on each screen. These will act as navigation points for your test participants.

  3. Set up the flow

Connect your screens in a logical sequence, mirroring the user journey you want to test. This creates a seamless, interactive experience for your participants.

  4. Preview and refine

Before launching your study, take a moment to walk through your prototype. Ensure all clickable areas work as intended and the flow feels natural.

The result? A fully functional prototype that looks and feels like a real digital product. Your test participants will be able to navigate through it just as they would a live website or app, providing you with authentic, actionable insights.

By empowering you to build prototypes from scratch, we're removing barriers to early-stage testing. This means you can validate ideas faster, iterate with confidence, and ultimately deliver better digital experiences.
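To make the steps above concrete, here's a minimal sketch of the underlying idea: a prototype built this way is essentially a small graph of screens connected by clickable areas. This is purely illustrative Python; the class and field names are our own assumptions, not Optimal's actual data model.

```python
# Illustrative model of a clickable prototype: screens are nodes, and each
# clickable area is an edge to another screen. Names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ClickableArea:
    x: int              # top-left corner of the hotspot, in pixels
    y: int
    width: int
    height: int
    target_screen: str  # id of the screen this hotspot navigates to

@dataclass
class Screen:
    screen_id: str
    image_path: str     # the uploaded screenshot backing this screen
    clickable_areas: list = field(default_factory=list)

def find_target(screen, click_x, click_y):
    """Return the next screen's id if the click lands inside a hotspot,
    or None for a misclick (a click that causes no page change)."""
    for area in screen.clickable_areas:
        if (area.x <= click_x < area.x + area.width
                and area.y <= click_y < area.y + area.height):
            return area.target_screen
    return None
```

In these terms, "setting up the flow" amounts to pointing each hotspot's `target_screen` at the next screen in the journey, and previewing is simply walking that graph by hand before participants do.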

Or…import your prototypes directly from Figma 

There’s a bit of housekeeping you’ll need to do in Figma to give your participants the best testing experience and avoid slowing down the prototype’s loading times. You can import a link to your Figma prototype into your study, and it will carry across all the interactions you have set up. You’ll need to make sure your Figma presentation mode is set to public so the file can be shared with participants. If you make any updates to your Figma file, you can sync the changes in just one click.

Help Article: Find out more about how to set up your Figma file for testing

How to create tasks 🧰

When you set up your study, you’ll create tasks for participants to complete. 

There are two different ways to build tasks in your prototype tests. You can set a correct destination by adding a start screen and a correct destination screen. That way, you can watch how participants navigate your design to find their way to the correct destination. Another option is to set a correct pathway and evaluate how participants navigate a product, app, or website based on the pathway sequence you set. You can add as many pathways or destinations as you like. 
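To illustrate the difference between the two task types, here's a hypothetical sketch of how a single participant's result might be judged under each. The function names and data shapes are our own for the example, not Optimal's actual logic.

```python
# Hypothetical success checks for the two task types described above.
def destination_success(path, correct_destination):
    """Destination task: success means the participant ended up on the
    correct destination screen, however they got there."""
    return bool(path) and path[-1] == correct_destination

def pathway_success(path, correct_pathway):
    """Pathway task: success means the participant's visited screens
    include the defined pathway in order (here, as an in-order
    subsequence; the exact matching rule is an assumption)."""
    it = iter(path)
    # `step in it` consumes the iterator, so steps must appear in order.
    return all(step in it for step in correct_pathway)
```

The destination check ignores the route entirely, while the pathway check cares about the sequence of screens, which is why the same participant result can pass one and fail the other.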

Adding post-task questions is a great way to help gather qualitative feedback on the user's experience, capturing their thoughts, feelings, and perceptions.

Help Article: Find out how to analyze your results

Prototype testing analysis and metrics 📊

Prototype testing offers a variety of analysis options and metrics to evaluate the effectiveness and usability of your design. By using them, you can get comprehensive insights into your prototype's performance, identify areas for improvement, and make informed design decisions:

Task results 

The task results provide a deep analysis at a task level, including the success score, directness score, time taken, misclicks, and a breakdown of the task's successes and failures. Together, they provide great insight into how well your design supports the task.

  • Success score tells you the total percentage of participants who reached the correct destination or followed the pathway you defined for this task. It’s a good indicator of a prototype’s usability.
  • Directness score is the total completed results minus the ‘indirect’ results.
  • A path is ‘indirect’ when a participant backtracks, viewing the same page multiple times, or when they reach the correct destination without following the correct pathway.
  • Time taken is how long it took a participant to complete your task, and can be a good indicator of how easy or difficult the task was.
  • Misclicks measure the total number of clicks on areas of your prototype that weren’t clickable, that is, clicks that didn’t result in a page change.
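As a rough illustration of how these metrics relate to one another, here's a small Python sketch that aggregates a list of participant results for a destination-based task. The record format, field names, and the exact definition of "indirect" are assumptions for the example, not Optimal's actual data export or scoring.

```python
# Hypothetical aggregation of participant results for one task.
# Each result records the screens visited, misclicks, and time taken.
def task_metrics(results, correct_destination):
    """results: list of dicts like
    {"path": ["home", "menu", "pricing"], "misclicks": 2, "seconds": 14.2}
    A result succeeds if its path ends at the correct destination; it is
    'indirect' if any screen was visited more than once (backtracking)."""
    n = len(results)
    completed = [r for r in results
                 if r["path"] and r["path"][-1] == correct_destination]
    # Directness: completed results minus the indirect ones.
    direct = [r for r in completed if len(r["path"]) == len(set(r["path"]))]
    return {
        "success_score": 100 * len(completed) / n,   # % reaching the destination
        "directness_score": 100 * len(direct) / n,   # % completing without backtracking
        "avg_time_taken": sum(r["seconds"] for r in results) / n,
        "total_misclicks": sum(r["misclicks"] for r in results),
    }
```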

Clickmaps

Clickmaps provide an aggregate view of user interactions with prototypes, visualizing click patterns to reveal how users navigate and locate information. They display hits and misses on designated clickable areas, average task completion times, and heatmaps showing where users believed the next steps to be. Filters for first, second, and third page visits allow analysis of user behavior over time, including how they adapt when backtracking. This comprehensive data helps designers understand user navigation patterns and improve prototype usability.

Participant paths 

The Paths tab in Optimal provides a powerful visualization to understand and identify common navigation patterns and potential obstacles participants encounter while completing tasks. You can include thumbnails of your screens to enhance your analysis, making it easier to pinpoint where users may face difficulties or where common paths occurred.

Coming soon to prototyping 🔮

Later this year, we’re running a closed beta for video recording with prototype testing. This feature captures behaviors and insights not evident in click data alone. The browser-based recording requires no plugins, simplifying setup. Consent for recording is obtained at the start of the testing process and can be customized to align with your organization's policies. This new feature will provide deeper insights into user experience and prototype usability.

These enhancements to prototype testing offer a comprehensive toolkit for user experience analysis. By combining quantitative click data with qualitative video insights, designers and researchers can gain a more nuanced understanding of user behavior, leading to more informed decisions and improved product designs.

Start prototype testing today

Author: Sarah Flutey

Related articles


Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work, I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. card sorts, prototype test studies, etc.)? Thank you! Mary"

Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me a lot about what works, what doesn’t and what leaves me refreshing my results screen eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who may not fit the criteria you’re looking for, but you can’t exactly stop them from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use the post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions whose answers confirm or exclude participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location-based question after the study. Any participant who says they live somewhere else is getting excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity, as remote research limits your capacity to get those — you’re not there with them so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.

Do a practice run

No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to update a new version of an information architecture after a change was made — stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to complete a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting that delivery right and it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet, or social media site, that you can share the link and approach content through.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I always stick to no more than 10 questions in a remote research study and for card sorts, I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and of course only serve to annoy and frustrate your participants. You need to ensure that you’re balancing your need to gain insights with their time constraints.

As for the accompanying approach content, short and snappy equals happy! Whether it’s an email, website, social media post, newsletter, or carrier pigeon, keep your approach spiel to no more than a paragraph. Use an audience-appropriate tone and stick to the basics: a high-level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer, and of course don’t forget to thank them.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed, and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need for wheel reinvention, and it usually just needs a slight tweak to suit the specific study. This also helps provide participants with a consistent experience and minimizes confusion, allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works — turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use them to iterate on my template.

What are your top tips for creating and sending out remote user research studies? Comment below!


From Exposition to Resolution: Looking at User Experience as a Narrative Arc

“If storymapping could unearth patterns and bring together a cohesive story that engages audiences in the world of entertainment and film, why couldn’t we use a similar approach to engage our audiences?”
Donna Lichaw and Lis Hubert

User Experience work makes the most sense to me in the context of storytelling. So when I saw Donna Lichaw and Lis Hubert’s presentation on storymapping at edUi recently, it resonated. A user’s path through a website can be likened to the traditional storytelling structure of crisis or conflict, exposition — and even a climax or two.

The narrative arc and the user experience

So just how can the same structure that suits fairytales help us to design a compelling experience for our customers? Well, storyboarding is an obvious example of how UX design and storytelling mesh. A traditional storyboard for a movie or TV episode lays out sequential images to help visualize what the final production will show. Similarly, we map out users' needs and journeys via wireframes, sketches, and journey maps, all the while picturing how people will actually interact with the product.

But the connection between storytelling and the user experience design process goes even deeper than that. Every time a user interacts with our website or product, we get to tell them a story. And a traditional literary storytelling structure maps fairly well to just how users interact with the digital stories we’re telling. Hence Donna and Lis’ conception of storymapping as ‘a diagram that maps out a story using a traditional narrative structure called a narrative arc.’ They concede that while ‘using stories in UX design...is nothing new’, a ‘narrative-arc diagram could also help us to rapidly assess content strengths, weaknesses, and opportunities.’

Storytelling was a common theme at edUi

The edUi conference in Richmond, Virginia brought together an assembly of people who produce websites or web content for large institutions. I met people from libraries, universities, museums, various levels of government, and many other places. The theme of storytelling was present throughout, both explicitly and implicitly.

Keynote speaker Matt Novak from Paleofuture talked about how futurists of the past tried to predict the future, and what we can learn from the stories they told. Matthew Edgar discussed what stories our failed content tells — what story does a 404 page tell? Or a page telling users they have zero search results? Two great presentations that got me thinking about storytelling in a different way.

Ultimately, it all clicked for me when I attended Donna and Lis’ presentation ‘Storymapping: A Macguyver Approach to Content Strategy’ (and yes, it was as compelling as the title suggests). They presented a case study of how they applied a traditional narrative structure to a website redesign process. The basic story structure we all learned in school usually includes a pretty standard list of elements. Donna and Lis had tweaked the definitions a bit, and applied them to the process of how users interact with web content.

Points on the Narrative Arc (from their presentation)


Exposition — provides crucial background information and often ends with ‘inciting incident’ kicking off the rest of the story

Donna and Lis pointed out that in the context of doing content strategy work, the inciting incident could be the problem that kicks off a development process. I think it can also be the need that brings users to a website to begin with.

Rising Action — Building toward the climax, users explore a website using different approaches

Here I think the analogy is a little looser. While a story can sometimes be well-served by a long and winding rising action, it’s best to keep this part of the process a bit more straightforward in web work. If there’s too much opportunity for wandering, users may get lost or never come back.

Crisis / Climax — The turning point in a story, and then when the conflict comes to a peak

The crisis is what leads users to your site in the first place — a problem to solve, an answer to find, a purchase to make. And to me the climax sounds like the aha! moment that we all aspire to provide, when the user answers their question, makes a purchase, or otherwise feels satisfied from using the site. If a user never gets to this point, their story just peters out unresolved. They’re forced to either begin the entire process again on your site (now feeling frustrated, no doubt), or turn to a competitor.

Falling Action — The story or user interaction starts to wind down and loose ends are tied up

A confirmation of purchase is sent, or maybe the user signs up for a newsletter.

Denouement / Resolution — The end of the story, the main conflict is resolved

The user goes away with a hopefully positive experience, having been able to meet their information or product needs. If we’re lucky, they spread the word to others!

Check out Part 2 of Donna and Lis' three-part article on storymapping. I definitely recommend exploring their ideas in more depth, and having a go at mapping your own UX projects to the above structure.

A word about crises. The idea of a ‘crisis’ is at the heart of the narrative arc. As we know from watching films and reading novels, the main character always has a problem to overcome. So crisis and conflict show up a few times through this process. While the word ‘crisis’ carries some negative connotations (and that clearly applies to visiting a terribly designed site!), I think it can be viewed more generally when we apply the term to user experience. Did your user have a crisis that brought them to your site? What are they trying to resolve by visiting it? Their central purpose can be the crisis that gives rise to all the other parts of their story.

Why storymapping to a narrative arc is good for your design

Mapping a user interaction along the narrative arc makes it easy to spot potential points of frustration, and also serves to keep the inciting incident or fundamental user need in the forefront of our thinking. Those points of frustration and interaction are natural fits for testing and further development.

For example, if your site has a low conversion rate, that translates to users never hitting the climactic point of their story. It might be helpful to look at their interactions from the earlier phases of their story before they get to the climax. Maybe your site doesn’t clearly establish its reason for existing (exposition), or it might be too hard for users to search and explore your content (rising action). Guiding the user through each phase of the structure described above makes it more difficult to skip an important part of how our content is found and used.

We can ask questions like:

  • How does each user task fit into a narrative structure?
  • Are we dumping them into the climax without any context?
  • Does the site lack a resolution or falling action?
  • How would it feel to be a user in those situations?

These questions bring up great objectives for qualitative testing — sitting down with a user and asking them to show us their story.

What to do before mapping to narrative arc

Many sessions at edUi also touched on analytics or user testing. In crafting a new story, we can’t ignore what’s already in place — especially if some of it is appreciated by users. So before we can start storymapping the user journey, we need to analyze our site analytics, and run quantitative and qualitative user tests. This user research will give us insights into what story we’re already telling (whether it’s on purpose or not).

What’s working about the narrative, and what isn’t? Even if a project is starting from scratch on a new site, your potential visitors will bring stories of their own. It might be useful to check stats to see if users leave early on in the process, during the exposition phase. A high bounce rate might mean a page doesn't supply that expositional content in a way that's clear and engaging enough to encourage further interaction. Looking at analytics and user testing data can be like a movie's advance screening — you can establish how the audience/users actually want to experience the site's content.

How mapping to the narrative arc is playing out in my UX practice

Since I returned from edUi, I've been thinking about the narrative structure constantly. I find it helps me frame user interactions in a new way, and I've already spotted gaps in storytelling that can be easily filled in. My attention instantly went to the many forms on our site. What’s the Rising Action like at that point? Streamlining our forms and using friendly language can help keep the user’s story focused and moving forward toward clicking that submit button as a climax.

I’m also trying to remember that every user is the protagonist of their own story, and that what works for one narrative might not work for another. I’d like to experiment with ways to provide different kinds of exposition to different users. I think it’s possible to balance telling multiple stories on one site, but maybe it’s not the best idea to mix exposition for multiple stories on the same page. And I also wonder if we could provide cues to a user that direct them to exposition for their own inciting incident... a topic for another article, perhaps.

What stories are you telling your users? Do they follow a clear arc, or are there rough transitions? These are great questions to ask yourself as you design experiences and analyze existing ones. The edUi conference was a great opportunity to investigate these ideas, and I can’t wait to return next year.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.