October 15, 2024

The Power of Prototype Testing Live Training

If you missed our recent live training on Prototype Testing, don’t worry—we’ve got everything you need right here! You can catch up at your convenience, so grab a cup of tea, put your feet up, and enjoy the show.

In the session, we explored the powerful new features of our Prototype Testing tool, offering a step-by-step guide to setting up, running, and analyzing your tests like a seasoned pro. This tool is a game-changer for your design workflow, helping you identify usability issues and gather real user feedback before committing significant resources to development.


Here’s a quick recap of the highlights:

1. Creating a prototype test from scratch using images

We walked through how to create a prototype test from scratch using static images. This method is perfect for early-stage design concepts, where you want to quickly test user flows without a fully interactive prototype.

2. Preparing your Figma prototype for testing

Figma users, we’ve got you covered! We discussed how to prepare your Figma prototype for the smoothest possible testing experience. From setting up interactions to ensuring proper navigation, these tips help participants have an intuitive experience during the test. For more detailed instructions, check out our help article.

3. Seamless Figma prototype imports

One of the standout features of the tool is its seamless integration with Figma. We showed how easy it is to import your designs directly from Figma into Optimal, streamlining the setup process. You can bring your working files straight in and resync whenever you need to with a single click.

4. Understanding usability metrics and analyzing results

We explored how to analyze usability metrics and walked through what the results can indicate on click maps and paths. These visual tools let you see exactly how participants navigate your design, making it easier to spot pain points, dead ends, or areas of friction. By understanding user behavior, you can rapidly iterate and refine your prototypes for an optimal user experience.

Author: Sarah Flutey

Related articles


Understanding a museum’s digital audience

Ahead of her talk at UX New Zealand 2016, Lana Gibson from Lanalytics writes about a project she worked on with Te Papa.

Te Papa (a museum in Wellington, New Zealand) created audience personas based on user research, and I used these as a basis to create audience segments in Google Analytics to give us further insight into different groups. By regularly engaging with our audience using both qualitative and quantitative user insight methods, we’re starting to build up a three-dimensional picture of their needs and how Te Papa can serve them.

Personas based on user research

At Te Papa, the digital team created six audience personas to inform their site redesign, based on user research:

  • enthusiast
  • tourist
  • social
  • educator
  • volunteer
  • Wellingtonian

These formed a good basis for understanding why people are using the site. For example, the educator persona wants fodder for lesson plans for her class — trustworthy, subject-based resources that will excite her students. The tourist persona wants practical information — what’s on, how to plan a visit. And they want to get this information quickly and get on with their visit.

We’ll follow the tourist persona through a couple more rounds of user research, to give an example of what you can find out by segmenting your audience.

Interpreting tourist needs with data

Te Papa holds information for the Tourist audience in the Visit and What’s on sections of the site. I created a segment in Google Analytics which filters the reports to show how people who visit pages within these two sections interact with the whole site. For example, the keywords they search for in Google before arriving on Te Papa, what they search for when on the site, and how many of them email us.

Deeper digging revealed that the Tourist audience makes up about half of our overall audience. Because the content is useful to everyone wanting to visit the museum, and not just tourists, we broadened the scope of this persona and called the segment ‘Museum visitor’.

Why segment by site category — what if the audience goes beyond these pages?

Google Analytics segments allow you to see all the pages that a particular audience visits, not just the ones you’ve filtered. For example, over 2,000 people who visited a page within the Visit and What’s on sections also visited the Kids and families section in July 2016. So, the audience segment allows us to expand our concept of our audiences.

You can segment by a lot of different behaviors. For example, you could segment visitors by keyword, isolating people who come to the site from Google after searching for ‘parking’ and ‘opening hours’ and seeing what they do afterwards. But segmenting by site category tests the information architecture of your site, which can be very useful if you’ve got it wrong!
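If you want to play with the same idea outside the Google Analytics interface, here is a minimal sketch of the section-based segment logic in Python with pandas. It assumes a hypothetical pageview-level export with session_id, page_path, and site_search_term columns, and illustrative section paths; the file name, columns, and paths are placeholders, not Te Papa’s actual setup.

```python
import pandas as pd

# Hypothetical export: one row per pageview, with the session it belonged to
# and any on-site search term recorded in that session. File name and column
# names are illustrative assumptions.
hits = pd.read_csv("pageviews_export.csv")  # columns: session_id, page_path, site_search_term

# Sessions that touched the Visit or What's on sections form the
# 'Museum visitor' segment, mirroring a section-based segment in GA.
section_pattern = r"^/(visit|whats-on)/"
in_section = hits["page_path"].str.contains(section_pattern, regex=True, na=False)
visitor_sessions = hits.loc[in_section, "session_id"].unique()

# Like a GA segment, keep everything those sessions did, not just the
# filtered pages, so you can see where else this audience goes...
segment = hits[hits["session_id"].isin(visitor_sessions)]
other_pages = segment.loc[~segment["page_path"].str.contains(section_pattern, na=False), "page_path"]
print(other_pages.value_counts().head(10))

# ...and what they search for once they're on the site (e.g. 'opening hours').
print(segment["site_search_term"].dropna().value_counts().head(10))
```

Filtering on sessions rather than on individual pageviews is what lets the segment surface pages outside Visit and What’s on, just as the Kids and families example above describes.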

Visit persona wants opening hours information

What did we learn from these personas? One example is that the most searched term on the site for the Visit persona was ‘opening hours’. To help fix this, the team put the opening hours on every page of the redesigned site:

Portion of the site showing the opening times for Te Papa

This resulted in a 90% drop in searches that include ‘hours’ (May 2016 compared with May 2015):

Analytics showing a drop in searches for opening hours
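For anyone curious how a change like that 90% figure can be calculated, here is a small sketch comparing on-site searches containing ‘hours’ across two months. The monthly export files and the search_term column are hypothetical placeholders, not Te Papa’s real data.

```python
import pandas as pd

def searches_containing(csv_path: str, keyword: str) -> int:
    """Count on-site searches whose query contains the keyword (case-insensitive).
    Assumes a hypothetical export with a 'search_term' column."""
    searches = pd.read_csv(csv_path)
    return int(searches["search_term"].str.contains(keyword, case=False, na=False).sum())

before = searches_containing("site_search_may_2015.csv", "hours")
after = searches_containing("site_search_may_2016.csv", "hours")

drop = (before - after) / before * 100
print(f"Searches containing 'hours' fell by {drop:.0f}% year on year")
```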

Developing personas with Matariki

After the redesign, the team ran a project to increase the reach and engagement of the Te Papa Matariki audience. You can read more about this in "Using data to help people celebrate Matariki". Te Papa holds Matariki events in the museum, such as the Kaumātua kapa haka, and this event in particular enhanced and challenged our ideas about this audience.

Experiencing Kaumātua kapa haka performances online

The Kaumātua kapa haka is the biggest Matariki event held at Te Papa, and this year we had 4,000 unique page views to the two Kaumātua kapa haka event pages. Traffic spiked over the event weekend, particularly from Facebook and mobile devices. We assumed the traffic was from people who were planning to come to the event, as the pages sit in the What’s on section. But further analysis indicates that people were visiting for the live streaming of the event — we included embedded YouTube videos on these pages.

The popularity of the videos suggests that we’re taking events held within the museum walls out to people on the move, or in the comfort of their own homes. Based on this insight we’re looking into live streaming more events.

We’ve taken Te Papa personas through three iterations, based on user research, analytics, then a practical application of these to the Matariki festival. Each user research method has limitations, but by regularly using qualitative and quantitative methods we’re building a three-dimensional view of our audience that’s constantly evolving. Each piece of user research builds that view, and allows us to plan projects and site changes with greater clarity about what our users need. It means we can plan projects that will have real and measurable impact, and allow people to engage with Te Papa in useful and meaningful ways.

Want to hear more? Come to UX New Zealand!

If you'd like to hear more about how Lana and Ruth redesigned the Te Papa website, plus a bunch of other cool UX-related talks, head along to UX New Zealand 2016, hosted by Optimal Workshop. The conference runs from 12-14 October 2016, including a day of fantastic workshops, and you can get your tickets here. Got some questions you'd like to ask Lana before the conference? You can tweet her at @lanalytics00!


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. They would seem to be almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple pen-and-paper sketch that simulates an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

Different ways of designing paper prototypes, using OptimalSort as an example

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any distinct judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast: pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it. Paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity. Not only from the product teams participating in their design, but also from the users. They require the user to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure. Paper prototypes and user-centered design go hand in hand. Introducing real people into your design process as early as possible can help verify whether you are on the right track, and generate feedback that may give you a good idea of whether your idea is likely to succeed or not.

Disadvantages 😬

  • They’re not as polished as interactive prototypes. If executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if our users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited. Digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
  • They require facilitation. With an interactive prototype you can assign your participants tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator to communicate next steps and ensure participants understand the task at hand.
  • Their results have to be interpreted carefully. Paper prototypes can’t emulate the final experience entirely, so it is important to interpret their findings with their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives: first, to benchmark the current experience on laptops and tablets and identify ways we can improve the current interface; and second, to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices.

Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 per cent of our users utilize the app. We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set.

The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with paper prototypes was down to the specific setup of our testing sessions, or simply to their usefulness as a research technique, is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype already understanding what a card sort required. There is therefore no guarantee that we would have achieved the same level of success testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, paper prototypes provide a great solution for generating initial feedback before we can fully test sophisticated designs.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure, we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.


Exciting updates to Optimal’s pricing plans

Big things are happening in 2024! 🎉

We’re undergoing a huge transformation in 2024 to deliver more value for our customers: exciting new products like prototype testing, features like video recording, an upgraded survey tool, the introduction of AI, and better support for large organizations and multiple teams managing their accounts. These new products and features mean we need to update our pricing plans to continue innovating and providing top-tier UX research tools for our customers now and in the future.

Say hello to our new pricing plans  👋🏽

Starting July 22, 2024, we’ll be introducing new plans—Individual and Individual+—and updating our Team and Enterprise plans. We’ve reduced the price to join Optimal from $249 a month on the Pro plan to $129 a month on the new Individual plan. This reduction will help make our tools more accessible for people doing research, and the Individual annual plan includes two months free, too.

We’ll be discontinuing some of our current plans, including Starter, Pro, and Pay per Study. We’ll let affected customers know about the changes to their account via email and through information on the plans page in the app.

Prototype testing is just around the corner 🛣️ 🥳

The newest addition to the Optimal platform is days away, and will be available on the Individual+, Team, and Enterprise plans from early August. Prototype testing will allow you to quickly test designs with users throughout the design process, helping to inform decisions so you can build with confidence. You’ll be able to build your own prototype from scratch using images or screenshots, or import a prototype directly from Figma. Keep an eye out in the app for this exciting new addition.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.