August 7, 2024
5 min

Welcome to our latest addition: Prototype testing 🐣

Today, we’re thrilled to announce the arrival of the latest member of the Optimal family: Prototype Testing! This much-requested new tool lets you test designs early and often with users, gather fast insights, and make confident design decisions that lead to more intuitive and user-friendly digital experiences.

Optimal gives you the tools you need to easily build a prototype to test: upload images and screens and create clickable areas, or import a prototype from Figma and get testing. This first iteration of prototype testing is an open beta, and we’ll be working closely with our customers and community to gather feedback and ideas for further improvements in the months to come.

When to use prototype testing 

Prototype testing is a great way to validate design ideas, identify usability issues, and gather feedback from users before investing too heavily in the development of products, websites, and apps. To further inform your insights, it’s a good idea to include sentiment questions or rating scales alongside your tasks.

Early in the design process: Test initial ideas and concepts to gauge user reactions and feelings about your conceptual solutions. 

Iterative design phases: Continuously test and refine prototypes as you make changes and improvements to the designs. 

Before major milestones: Validate designs before key project stages, such as stakeholder reviews or final approvals.

Usability testing: Conduct summative research to assess a design's overall performance and gauge real user feedback to guide future design decisions and enhancements.

How it works 🧑🏽‍💻

No existing prototype? No problem. We've made it easy to create one right within Optimal. Here's how:

  1. Import your visuals

Start by uploading a series of screenshots or images that represent your design flow. These will form the backbone of your prototype.

  2. Create interactive elements

Once your visuals are in place, it's time to bring them to life. Use our intuitive interface to designate clickable areas on each screen. These will act as navigation points for your test participants.

  3. Set up the flow

Connect your screens in a logical sequence, mirroring the user journey you want to test. This creates a seamless, interactive experience for your participants.

  4. Preview and refine

Before launching your study, take a moment to walk through your prototype. Ensure all clickable areas work as intended and the flow feels natural.

The result? A fully functional prototype that looks and feels like a real digital product. Your test participants will be able to navigate through it just as they would a live website or app, providing you with authentic, actionable insights.
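To make the idea concrete, the steps above can be sketched as screens linked together by clickable hotspots. This is a hypothetical illustration only, not Optimal's actual data model; all names here are invented for the example.

```python
# Illustrative sketch: a clickable prototype is a set of screens, each with
# hotspots (clickable areas) that navigate to other screens.
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    label: str
    target_screen: str  # id of the screen this clickable area leads to

@dataclass
class Screen:
    screen_id: str
    image: str  # path to the uploaded screenshot
    hotspots: list[Hotspot] = field(default_factory=list)

def build_flow() -> dict[str, Screen]:
    """Connect screens in the sequence a participant would navigate."""
    home = Screen("home", "home.png", [Hotspot("Shop", "listing")])
    listing = Screen("listing", "listing.png", [Hotspot("Product", "detail")])
    detail = Screen("detail", "detail.png", [Hotspot("Back", "listing")])
    return {s.screen_id: s for s in (home, listing, detail)}

flow = build_flow()
# A click on "Shop" from the home screen navigates to the listing screen.
next_screen = flow["home"].hotspots[0].target_screen  # → "listing"
```

Previewing the prototype (step 4) amounts to walking this graph and checking that every hotspot points at a screen that exists.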

By empowering you to build prototypes from scratch, we're removing barriers to early-stage testing. This means you can validate ideas faster, iterate with confidence, and ultimately deliver better digital experiences.

Or…import your prototypes directly from Figma 

There’s a bit of housekeeping to do in Figma to give your participants the best testing experience and avoid slow prototype loading times. Import a link to your Figma prototype into your study, and it will carry across all the interactions you’ve set up. You’ll need to make your Figma presentation mode public so the file can be shared with participants. If you make any updates to your Figma file, you can sync the changes in just one click.

Help Article: Find out more about how to set up your Figma file for testing

How to create tasks 🧰

When you set up your study, you’ll create tasks for participants to complete. 

There are two ways to build tasks in your prototype tests. You can set a correct destination by adding a start screen and a correct destination screen, then watch how participants navigate your design to reach it. Alternatively, you can set a correct pathway and evaluate how participants navigate a product, app, or website based on the screen sequence you define. You can add as many pathways or destinations as you like.
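The two task types can be sketched as two success checks over the sequence of screens a participant visited. This mirrors the concepts described above only; function names and the exact matching rule are assumptions for illustration, not Optimal's implementation.

```python
# Illustrative sketch of the two task types: success by correct destination,
# or success by following a correct pathway.

def destination_success(path: list[str], destination: str) -> bool:
    """Success if the participant ends on the correct screen."""
    return bool(path) and path[-1] == destination

def pathway_success(path: list[str], correct_pathway: list[str]) -> bool:
    """Success if the participant followed the exact screen sequence."""
    return path == correct_pathway

participant_path = ["home", "listing", "detail"]
print(destination_success(participant_path, "detail"))        # True
print(pathway_success(participant_path, ["home", "detail"]))  # False
```

The destination check cares only about where the participant ended up; the pathway check also cares how they got there, which is what distinguishes the two task types.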

Adding post-task questions is a great way to help gather qualitative feedback on the user's experience, capturing their thoughts, feelings, and perceptions.

Help Article: Find out how to analyze your results

Prototype testing analysis and metrics 📊

Prototype testing offers a variety of analysis options and metrics to evaluate the effectiveness and usability of your design. By using these analysis options and metrics, you can get comprehensive insights into your prototype's performance, identify areas for improvement, and make informed design decisions:

Task results 

The task results provide a deep analysis at a task level, including the success score, directness score, time taken, misclicks, and the breakdown of the task's success and failure. They provide great insight into how usable your design is for completing a task.

  • Success score tells you the total percentage of participants who reached the correct destination or followed the pathway that you defined for this task. It’s a good indicator of a prototype's usability.
  • Directness score is the total completed results minus the ‘indirect’ results. A path is ‘indirect’ when a participant backtracks, viewing the same page multiple times, or when they reach the correct destination without following the correct pathway.
  • Time taken is how long it took a participant to complete your task and can be a good indicator of how easy or difficult it was to complete.
  • Misclicks measure the total number of clicks made on areas of your prototype that weren’t clickable, that is, clicks that didn’t result in a page change.
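Putting the definitions above together, the task metrics could be computed from recorded participant paths roughly as follows. This is a rough sketch; the field names and the backtracking test are assumptions for illustration, not how Optimal computes its scores.

```python
# Illustrative computation of success score, directness score, time taken,
# and misclicks from a list of participant results.

def task_metrics(results: list[dict], destination: str) -> dict:
    completed = [r for r in results if r["path"] and r["path"][-1] == destination]
    # Treat a path as "indirect" if any screen was visited more than once
    # (the participant backtracked).
    indirect = [r for r in completed if len(r["path"]) != len(set(r["path"]))]
    n = len(results)
    return {
        "success_score": 100 * len(completed) / n,
        # Directness: completed results minus the indirect ones.
        "directness_score": 100 * (len(completed) - len(indirect)) / n,
        "avg_time_taken": sum(r["seconds"] for r in results) / n,
        "total_misclicks": sum(r["misclicks"] for r in results),
    }

results = [
    {"path": ["home", "plans", "checkout"], "seconds": 18, "misclicks": 0},
    {"path": ["home", "plans", "home", "plans", "checkout"], "seconds": 41, "misclicks": 2},
    {"path": ["home", "help"], "seconds": 30, "misclicks": 1},
]
# 2 of 3 participants reached "checkout"; 1 of those backtracked,
# so success is 2/3 and directness is (2 - 1)/3.
print(task_metrics(results, "checkout"))
```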

Clickmaps

Clickmaps provide an aggregate view of user interactions with prototypes, visualizing click patterns to reveal how users navigate and locate information. They display hits and misses on designated clickable areas, average task completion times, and heatmaps showing where users believed the next steps to be. Filters for first, second, and third page visits allow analysis of user behavior over time, including how they adapt when backtracking. This comprehensive data helps designers understand user navigation patterns and improve prototype usability.

Participant paths 

The Paths tab in Optimal provides a powerful visualization to understand and identify common navigation patterns and potential obstacles participants encounter while completing tasks. You can include thumbnails of your screens to enhance your analysis, making it easier to pinpoint where users may face difficulties or where common paths occurred.

Coming soon to prototyping 🔮

Later this year, we’re running a closed beta for video recording with prototype testing. This feature captures behaviors and insights not evident in click data alone. The browser-based recording requires no plugins, simplifying setup. Consent for recording is obtained at the start of the testing process and can be customized to align with your organization's policies. This new feature will provide deeper insights into user experience and prototype usability.

These enhancements to prototype testing offer a comprehensive toolkit for user experience analysis. By combining quantitative click data with qualitative video insights, designers and researchers can gain a more nuanced understanding of user behavior, leading to more informed decisions and improved product designs.

Start prototype testing today

Author: Sarah Flutey

Related articles


Transforming digital experiences: Optimal Workshop's radical refresh

At Optimal Workshop, we've always prided ourselves on being pioneers in the field of UX research tools. Since our inception, we've been the go-to platform for UX researchers and designers looking to conduct card sorting, tree testing, and other critical user research activities. Our tools have helped shape the digital experiences of some of the world's leading companies.

The digital landscape is ever-changing, and with it, the needs of the professionals who create and optimize digital experiences. As we've grown and evolved alongside the industry, we've recognized the need to expand our reach and refine our value proposition to better serve not just UX researchers, but all professionals involved in delivering exceptional digital experiences.

That's why we're excited to announce a significant brand refresh and value pivot for Optimal Workshop. This evolution isn't just about a new coat of paint – it's a fundamental shift in how we position ourselves and the value we provide to our users and the broader digital experience community.

Why fix it if it ain’t broke? 💔🔨

Expanding our user base

While we've built a strong reputation among UX researchers, we've recognized that there's a broader audience of professionals who benefit from our tools and expertise. Product managers, marketers, customer experience specialists, and others involved in shaping digital experiences often lack access to professional-grade user research tools or the knowledge to effectively implement them.

Our brand refresh aims to make Optimal Workshop more accessible and relevant to this wider circle of professionals, without losing the deep respect we've earned from UX experts.

Adapting to market changes

The UX research landscape has evolved significantly since we first entered the market. With the proliferation of prototyping tools and the increasing speed of digital product development, there's been a shift away from extensive upfront research. However, we firmly believe that incorporating the voice of the customer throughout the development process is more critical than ever.

Our pivot allows us to emphasize the importance of integrating user research and insights at various stages of the digital experience lifecycle, not just during initial design phases.

Leveraging our expertise

As pioneers in the field, we have accumulated a wealth of knowledge about best practices in UX research and digital experience optimization. Our brand refresh positions us not just as a tool provider, but as a trusted partner and thought leader in the industry. We're doubling down on our commitment to sharing knowledge, standardizing best practices, and elevating the work of exceptional practitioners in the field.

Meeting evolving business needs through effective UX

The UX industry is evolving rapidly, with increasing investment and a diverse range of roles getting involved. From junior designers leveraging AI to seasoned researchers with deep expertise, the landscape is rich with talent and tools. However, the true value lies in how effectively these resources translate into business outcomes.

Optimal Workshop recognizes that exceptional digital experiences are no longer just nice-to-have – they're critical for engagement, conversion, and overall business success. Our tools bridge the gap between UX insights and tangible ROI by:

  • Democratizing research: Enabling teams across experience levels to gather valuable user data quickly and efficiently.
  • Accelerating decision-making: Providing fast, actionable insights that reduce design iterations and time-to-market.
  • Enhancing team effectiveness: Facilitating collaboration and knowledge sharing between junior and senior team members.
  • Driving business value: Directly linking UX improvements to key performance indicators and bottom-line results.

In a landscape where basic UX practices are becoming table stakes, Optimal Workshop empowers organizations to go beyond the basics. We help teams leverage the full spectrum of UX expertise – from AI-assisted analysis to seasoned human insight – to create digital experiences that truly set businesses apart and deliver measurable returns on UX investment.

What's changing at Optimal Workshop 🐛🦋

1. Best-in-class research & insights platform

While our core tools remain a critical part of our offering, we're broadening our focus to position Optimal Workshop as a best-in-class research & insights platform for digital experience professionals. This means developing new tools, workflows, and integrations that cater to a wider range of use cases and user types.

2. Accessibility without compromise

We're committed to making our professional-grade tools more accessible to a broader audience without sacrificing the depth and rigor that our expert users expect. This means developing new onboarding experiences, creating more intuitive interfaces, and providing educational resources to help users at all levels get the most out of our platform.

3. Championing best practices

As part of our evolution, we're placing a greater emphasis on identifying, sharing, and standardizing best practices in digital experience research and optimization. Through case studies, partnerships with industry leaders, and our own thought leadership content, we aim to elevate the entire field of digital experience design.

4. Fostering a community of excellence

We're doubling down on our commitment to building and nurturing a community of digital experience professionals. This includes expanding our educational offerings, hosting more events and webinars, and creating opportunities for our users to connect and learn from each other. 

5. Emphasizing outcomes and ROI

We're aligning our messaging and product development with the real-world impact our users are seeking. That’s why the new names for our tools emphasize what each tool helps you achieve, rather than the technical methodology behind it. This outcome-focused approach helps users quickly identify which tool is right for their specific needs.

How our evolution benefits you 🚀

For our long-time users, rest assured that the tools and features you know and love aren't going anywhere, but their names are changing to plain English terms to be more approachable for professionals who may not have a background in UX research. In fact, our UXR platform is only going to get better as we invest in improvements and new capabilities. You'll also benefit from an expanded community of practitioners and a wealth of new resources to help you continue to grow and excel in your field.

For professionals who may be new to Optimal Workshop or to formalized user research in general, our refresh means it's easier than ever to get started with professional-grade tools and methodologies. We're here to support you at every step, from your first card sort to building a comprehensive, data-driven approach to optimizing digital experiences.

Join us in shaping tomorrow's digital experiences 🌟

This brand refresh and value pivot mark an exciting new chapter for Optimal Workshop. We're committed to continuing our tradition of excellence while expanding our reach and impact in the world of digital experiences.

As we move forward, we'll be rolling out new features, resources, and initiatives that align with our refreshed brand and value proposition. We're excited to partner with our users – both new and long-standing – to push the boundaries of what's possible in creating truly exceptional digital experiences.

Thank you for being part of the Optimal Workshop community. Whether you're a UX research veteran or just starting your journey in optimizing digital experiences, we're here to support you with the best tools, knowledge, and community in the industry. Together, we can shape the future of digital experiences and make the online world more intuitive, efficient, and enjoyable for everyone.

Onwards and upwards,

Alex Burke, CEO Optimal Workshop


Exciting updates to Optimal’s pricing plans

Big things are happening in 2024! 🎉

We’re undergoing a huge transformation in 2024 to deliver more value for our customers: exciting new products like prototype testing, features like video recording, an upgraded survey tool, new AI capabilities, and better support for large organizations and multiple teams managing their accounts. These new products and features mean we need to update our pricing plans to continue innovating and providing top-tier UX research tools for our customers now and in the future.

Say hello to our new pricing plans  👋🏽

Starting July 22, 2024, we’ll be introducing new plans—Individual and Individual+—and updating our Team and Enterprise plans. We’ve reduced the price to join Optimal from $249 a month on the Pro plan to $129 on the new Individual plan. This reduction will help make our tools more accessible for people to do research, and the Individual annual plan includes two months free, too.

We’ll be discontinuing some of our current plans, including Starter, Pro, and Pay per Study. We’ll let affected customers know about the changes via email and on the plans page in the app.

Prototype testing is just around the corner 🛣️ 🥳

The newest addition to the Optimal platform is just days away, and will be available to use on the Individual+, Team, and Enterprise plans from early August. Prototype testing will allow you to quickly test designs with users throughout the design process, helping inform decisions so you can build with confidence. You’ll be able to build your own prototype from scratch using images or screenshots, or import a prototype directly from Figma. Keep an eye out in app for this exciting new addition.


Ready for take-off: Best practices for creating and launching remote user research studies

"Hi Optimal Work, I was wondering if there are some best practices you stick to when creating or sending out different UX research studies (i.e. card sorts, prototype test studies, etc.)? Thank you! Mary"

Indeed I do! Over the years I’ve learned a lot about creating remote research studies and engaging participants. That experience has taught me a lot about what works, what doesn’t and what leaves me refreshing my results screen eagerly anticipating participant responses and getting absolute zip. Here are my top tips for remote research study creation and launch success!

Creating remote research studies

Use screener questions and post-study questions wisely

Screener questions are really useful for eliminating participants who may not fit the criteria you’re looking for, but you can’t exactly stop them from being less than truthful in their responses. Now, I’m not saying all participants lie on the screener so they can get to the activity (and potentially claim an incentive), but I am saying it’s something you can’t control. To help manage this, I like to use the post-study questions to provide additional context and structure to the research.

Depending on the study, I might ask questions to which the answers might confirm or exclude specific participants from a specific group. For example, if I’m doing research on people who live in a specific town or area, I’ll include a location based question after the study. Any participant who says they live somewhere else is getting excluded via that handy toggle option in the results section. Post-study questions are also great for capturing additional ideas and feedback after participants complete the activity as remote research limits your capacity to get those — you’re not there with them so you can’t just ask. Post-study questions can really help bridge this gap. Use no more than five post-study questions at a time and consider not making them compulsory.

Do a practice run

No matter how careful I am, I always miss something! A typo, a card with a label in the wrong case, forgetting to update a new version of an information architecture after a change was made — stupid mistakes that we all make. By launching a practice version of your study and sharing it with your team or client, you can stop those errors dead in their tracks. It’s also a great way to get feedback from the team on your work before the real deal goes live. If you find an error, all you have to do is duplicate the study, fix the error and then launch. Just keep an eye on the naming conventions used for your studies to prevent the practice version and the final version from getting mixed up!

Sending out remote research studies

Manage expectations about how long the study will be open for

Something that has come back to bite me more than once is failing to clearly explain when the study will close. Understandably, participants can be left feeling pretty annoyed when they mentally commit to complete a study only to find it’s no longer available. There does come a point when you need to shut the study down to accurately report on quantitative data and you’re not going to be able to prevent every instance of this, but providing that information upfront will go a long way.

Provide contact details and be open to questions

You may think you’re setting yourself up to be bombarded with emails, but I’ve found that isn’t necessarily the case. I’ve noticed I get around 1-3 participants contacting me per study. Sometimes they just want to tell me they completed it and potentially provide additional information and sometimes they have a question about the project itself. I’ve also found that sometimes they have something even more interesting to share such as the contact details of someone I may benefit from connecting with — or something else entirely! You never know what surprises they have up their sleeves and it’s important to be open to it. Providing an email address or social media contact details could open up a world of possibilities.

Don’t forget to include the link!

It might seem really obvious, but I can’t tell you how many emails I received (and have been guilty of sending out) that are missing the damn link to the study. It happens! You’re so focused on getting that delivery right and it becomes really easy to miss that final yet crucial piece of information.

To avoid this irritating mishap, I always complete a checklist before hitting send:

  • Have I checked my spelling and grammar?
  • Have I replaced all the template placeholder content with the correct information?
  • Have I mentioned when the study will close?
  • Have I included contact details?
  • Have I launched my study and received confirmation that it is live?
  • Have I included the link to the study in my communications to participants?
  • Does the link work? (yep, I’ve broken it before)

General tips for both creating and sending out remote research studies

Know your audience

First and foremost, before you create or disseminate a remote research study, you need to understand who it’s going to and how they best receive this type of content. Posting it out when none of your followers are in your user group may not be the best approach. Do a quick brainstorm about the best way to reach them. For example, if your users are internal staff, there might be an internal communications channel, such as an all-staff newsletter, intranet, or social media site, through which you can share the link and approach content.

Keep it brief

And by that I’m talking about both the engagement mechanism and the study itself. I learned this one the hard way. Time is everything and no matter your intentions, no one wants to spend more time than they have to. Even more so in situations where you’re unable to provide incentives (yep, I’ve been there). As a rule, I always stick to no more than 10 questions in a remote research study and for card sorts, I’ll never include more than 60 cards. Anything more than that will see a spike in abandonment rates and of course only serve to annoy and frustrate your participants. You need to ensure that you’re balancing your need to gain insights with their time constraints.

As for the accompanying approach content, short and snappy equals happy! In the case of an email, website, other social media post, newsletter, carrier pigeon etc, keep your approach spiel to no more than a paragraph. Use an audience appropriate tone and stick to the basics such as: a high level sentence on what you’re doing, roughly how long the study will take participants to complete, details of any incentives on offer and of course don’t forget to thank them.

Set clear instructions

The default instructions in Optimal Workshop’s suite of tools are really well designed and I’ve learned to borrow from them for my approach content when sending the link out. There’s no need for wheel reinvention and it usually just needs a slight tweak to suit the specific study. This also helps provide participants with a consistent experience and minimizes confusion allowing them to focus on sharing those valuable insights!

Create a template

When you’re on to something that works, turn it into a template! Every time I create a study or send one out, I save it for future use. It still needs minor tweaks each time, but I use those tweaks to iterate on my template.

What are your top tips for creating and sending out remote user research studies? Comment below!

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.