August 8, 2022
4 min

Usability Testing Guide: What It Is, How to Run It, and When to Use Each Method

Knowing and understanding why and how your users use your product can be invaluable for getting to the nitty-gritty of usability: where they get stuck and where they fly through. Delving deep into motivation with probing questions, or skimming over the product looking for issues, can be equally informative.

Usability testing can be done in several ways, and each has its benefits. Put simply, usability testing is literally testing how usable your product is for your users. If your product isn't usable, users won't stick around or, very often, complete their task, let alone come back for more.

What is usability testing?

Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.

These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.

We go into the three main ways usability testing can be run:

  1. Moderated or unmoderated
  2. Remote or in-person
  3. Explorative, assessment, or comparative

1. Moderated or unmoderated usability testing

Moderated usability testing


Moderated usability testing is done in person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, it can be expensive to organize and run.

Top tip: Use moderated testing to investigate the reasoning behind user behavior.

Unmoderated usability testing


Unmoderated usability testing is done without direct supervision; participants are likely in their own homes and/or using their own devices to browse the website being tested, often at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions can be difficult.

Top tip: Use unmoderated testing to test a very specific question or observe and measure behavior patterns.

2. Remote or in-person usability testing

Remote usability testing


Remote usability testing is done over the internet or by phone, allowing participants the time and space to work in their own environment and at their own pace. However, this doesn’t give the researcher much in the way of contextual data, because you’re unable to ask questions about intention or probe deeper when the participant makes a particular decision. Remote testing doesn’t go as deep into a participant’s reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.

Top tip: Use remote testing when a large group of participants is needed and the questions asked can be direct and unambiguous.

In-person usability testing


In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.

Top tip: In-person testing gives researchers more time and insight into motivation for decisions.

3. Explorative, assessment, or comparative testing

These three usability testing methods generate different types of information:

Explorative testing


Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.

Assessment research


Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.

Comparative research


Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.

Top tip: Choose between these three approaches based on what research is being done and how much qualitative or quantitative data you want.

Which method is right for you?

Whether the testing is done in-person, remote, moderated or unmoderated will depend on your purpose, what you want out of the testing, and to some extent your budget. 

Depending on what you are testing, each of the usability testing methods we explored here can offer an answer. If you are at the development stage of a product, it can be useful to conduct a usability test on the entire product, checking the intuitive usability of your website to ensure users can make the best decisions, quickly. Adding, changing or upgrading a product can also be the moment to check on a specific question around usability. Planning and understanding your objectives are key to selecting the right usability testing option for your project.

Let's take a look at a couple of examples of usability testing.

1. Lab-based, in-person, moderated testing - mid-life website

Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a bricks-and-mortar store might. You’ve noticed a drop in sales in certain areas. How do you find out what is going wrong or where users are getting lost? By running an in-person, moderated usability test in a lab (or other controlled environment), you can set tasks for users and watch (and record) what they do.

The researcher can literally be standing or sitting next to the participant throughout, recording contextual information such as how they interacted with the mouse, laptop or even the seat. Watching for cues about the participant’s comfort and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants, but couldn’t find the ‘yoga’ section, which was listed under ‘gym’ rather than under clothing.

This means you can look at how your stock is organized, or even investigate undertaking a card sort. In-person, moderated testing provides robust and fully rounded feedback on users’ behaviors, expectations and experiences – data that can be turned directly into actionable directives when redeveloping the website.

2. Remote, unmoderated assessment testing - app product development

You are looking at launching an app for parents to access information and updates from their school. It’s still in the development stage, and at this point you want to know how easy the app is to use. You can set some very specific tasks for participants to complete, send the app to them, and leave them to complete the tasks (or not) and provide feedback and comments on its usability.

The next step may be to use first-click testing to see how and where the interface is clicked, and where participants may be spending time or becoming lost. While the feedback and data gathered from this testing can be light, it will be very direct to the questions asked, and it will provide data to back up (or possibly challenge) the assumptions that were made.

3. Moderated, in-person, explorative testing - new product development

You’re right at the start of the development process. The idea is new and fresh and the basics are being considered. What better way to get an understanding of what your users truly want than an explorative study?

Open-ended questions with participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market but none as exciting as what your team has dreamt up. By putting it (and possibly the competitors) to participants they can give direct feedback on what they like, love and loathe.

They can also help brainstorm ideas or better ways to make the app work, or improve the interface. All of this can be done before money is sunk into development.

Usability testing summary: When to use each method (and why)

Key objectives will dictate which usability testing method will deliver the answers to your questions.

Whether it’s in-person, remote, moderated or comparative, with a bit of planning you can gather data on your users’ very real experience of your product, and identify issues, successes and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.


Related articles


How to create a UX research plan

Summary: A detailed UX research plan helps you keep your overarching research goals in mind as you work through the logistics of a research project.

There’s nothing quite like the feeling of sitting down to interview one of your users, steering the conversation in interesting directions and taking note of valuable comments and insights. But, as every researcher knows, it’s also easy to get carried away. Sometimes the very process of user research can be so engrossing that you forget the reason you’re there in the first place, or unexpected things come up that force you to change course or focus.

This is where a UX research plan comes into play. Taking the time to set up a detailed overview of your high-level research goals, team, budget and timeframe will give your research the best chance of succeeding. It's also a good tool for fostering alignment - it can make sure everyone working on the project is clear on the objectives and timeframes. Over the course of your project, you can refer back to your plan – a single source of truth. After all, as Benjamin Franklin famously said: “By failing to prepare, you are preparing to fail”.

In this article, we’re going to take a look at the best way to put together a research plan.

Your research recipe for success

Any project needs a plan to be successful, and user research is no different. As we pointed out above, a solid plan will help to keep you focused and on track during your research – something that can understandably become quite tricky as you dive further down the research rabbit hole, pursuing interesting conversations during user interviews and running usability tests. Thought of another way, it’s really about accountability. Even if your initial goal is something quite broad like “find out what’s wrong with our website”, it’s important to have a plan that will help you to identify when you’ve actually discovered what’s wrong.

So what does a UX research plan look like? It’s basically a document that outlines the where, why, who, how and what of your research project.

It’s time to create your research plan! Here’s everything you need to consider when putting this plan together.

Make a list of your stakeholders

The first thing you need to do is work out who the stakeholders are on your project. These are the people who have a stake in your research and stand to benefit from the results. In those instances where you’ve been directed to carry out a piece of research you’ll likely know who these people are, but sometimes it can be a little tricky. Stakeholders could be C-level executives, your customer support team, sales people or product teams. If you’re working in an agency or you’re freelancing, these could be your clients.

Make a list of everyone you think needs to be consulted and then start setting up catch-up sessions to get their input. Having a list of stakeholders also makes it easy to deliver insights back to these people at the end of your research project, as well as identify any possible avenues for further research. This also helps you identify who to involve in your research (not just report findings back to).

Action: Make a list of all of your stakeholders.

Write your research questions

Before we get into timeframes and budgets you first need to determine your research questions, also known as your research objectives. These are the ‘why’ of your research. Why are you carrying out this research? What do you hope to achieve by doing all of this work? Your objectives should be informed by discussions with your stakeholders, as well as any other previous learnings you can uncover. Think of past customer support discussions and sales conversations with potential customers.

Here are a few examples of basic research questions to get you thinking. Research questions should be actionable and specific, like these:

  • “How do people currently use the wishlist feature on our website?”
  • “How do our current customers go about tracking their orders?”
  • “How do people make a decision on which power company to use?”
  • “What actions do our customers take when they’re thinking about buying a new TV?”

A good research question should be actionable in the sense that you can identify a clear way to attempt to answer it, and specific in that you’ll know when you’ve found the answer you’re looking for. It's also important to keep in mind that your research questions are not the questions you ask during your research sessions - they should be broad enough that they allow you to formulate a list of tasks or questions to help understand the problem space.

Action: Create a list of possible research questions, then prioritize them after speaking with stakeholders.

What is your budget?

Your budget will play a role in how you conduct your research, and possibly the amount of data you're able to gather.

Having a large budget will give you flexibility. You’ll be able to attract large numbers of participants, either by running paid recruitment campaigns on social media or by using a dedicated participant recruitment service. A larger budget not only helps you reach more people, but also lets you target more specific people through dedicated participant services and recruitment agencies.

Note that more money doesn't always equal better access to tools – if you work for a company that is very strict on security, for example, you might not be able to use external tools at all. But a bigger budget does make it easier to choose appropriate methods that allow you to deliver quality insights; it might allow you to travel, for instance, or do more in-person research, which is otherwise quite expensive.

With a small budget, you’ll have to think carefully about how you’ll reward participants, as well as the number of participants you can test. You may also find that your budget limits the tools you can use for your testing. That said, you shouldn’t let your budget dictate your research. You just have to get creative!

Action: Work out what the budget is for your research project. It’s also good to map out several cheaper alternatives that you can pursue if required.

How long will your project take?

How long do you think your user research project will take? This is a necessary consideration, especially if you’ve got people who are expecting to see the results of your research. For example, your organization’s marketing team may be waiting for some of your exploratory research in order to build customer personas. Or, a product team may be waiting to see the results of your first-click test before developing a new signup page on your website.

It’s true that qualitative research often doesn’t have a clear end in the way that quantitative research does, for example as you identify new things to test and research. In this case, you may want to break up your research into different sub-projects and attach deadlines to each of them.

Action: Figure out how long your research project is likely to take. If you’re mixing qualitative and quantitative research, split your project timeframe into sub-projects to make assigning deadlines easier.

Understanding participant recruitment

Who you recruit for your research follows from your research questions. Who can best give you the answers you need? While you can often find participants by working with your customer support, sales and marketing teams, certain research questions may require you to look further afield.

The methods you use to carry out your research will also play a part in determining your participants, specifically in terms of the numbers required. For qualitative research methods like interviews and usability tests, you may find you’re able to gather enough useful data after speaking with 5 people. For quantitative methods like card sorts and tree tests, it’s best to have at least 30 participants. You can read more about participant numbers in this Nielsen Norman article.

At this stage of the research plan process, you’ll also want to write some screening questions. These are what you’ll use to identify potential participants by asking about their characteristics and experience.

Action: Define the participants you’ll need to include in your research project, and where you plan to source them. This may require going outside of your existing user base.

Which research methods will you use?

The research methods you use should be informed by your research questions. Some questions are best answered by quantitative research methods like surveys or A/B tests, while others are best answered by qualitative methods like contextual inquiries, user interviews and usability tests. You’ll also find that some questions are best answered by multiple methods, in what’s known as mixed methods research.

If you’re not sure which method to use, it helps to carefully consider your research question. Let’s go back to one of our earlier examples: “How do our current customers go about tracking their orders?” In this case, because we want to see how users move through our site to track an order, we need a method that’s suited to testing navigation pathways – like tree testing.

For the question: “What actions do our customers take when they’re thinking about buying a new TV?”, we’d want to take a different approach. Because this is more of an exploratory question, we’re probably best to carry out a round of user interviews and ask questions about their process for buying a TV.

Action: Before diving in and setting up a card sort, consider which method is best suited to answer your research question.

Develop your research protocol

A protocol is essentially a script for your user research. For the most part, it’s a list of the tasks and questions you want to cover in your in-person sessions. But it doesn’t apply to all research types – for a tree test, for example, you might write your tasks, but this isn't really a script or protocol.

Writing your protocol should start with actually thinking about what these questions will be and getting feedback on them. It should also cover:

  • The tasks you want your participants to do (usability testing)
  • How much time you’ve set aside for the session
  • A script or description that you can use for every session
  • Your process for recording the interviews, including how you’ll look after participant data.

Action: This is essentially a research plan within a research plan – it’s what you’d take to every session.

Happy researching!


Building Trust Through Design for Financial Services

When it comes to financial services, user experience goes way beyond just making things easy to use. It’s about creating a seamless journey and establishing trust at every touchpoint. Think about it: as we rely more and more on digital banking and financial apps in our everyday lives, we need to feel absolutely confident that our personal information is safe and that the companies managing our money actually know what they're doing. Without that trust foundation, even the most competitive brands will struggle with customer adoption.

Why Trust Matters More Than Ever

The stakes are uniquely high in financial UX. Unlike other digital products where a poor experience might result in minor frustration, financial applications handle our life savings, investment portfolios, and sensitive personal data. A single misstep in design can trigger alarm bells for users, potentially leading to lost customers.

Using UX Research to Measure and Build Trust

Building high trust experiences requires deep insights into user perceptions, behaviors, and pain points. The best UX platforms can help financial companies spot trust issues and test whether their solutions actually work.

Identify Trust Issues with Tree Testing

Tree testing helps financial institutions understand how easily users can find critical information and features:

  • Test information architecture to ensure security features and privacy information are easily discoverable
  • Identify confusing terminology that may undermine user confidence
  • Compare findability metrics for trust-related content across different user segments

Optimize for Trustworthy First Impressions with First-Click Testing

First-click testing helps identify where users naturally look for visual symbols and cues that are associated with security:

  • Test where users instinctively look for security indicators like references to security certifications
  • Compare the effectiveness of different visual trust symbols (locks, shields, badges)
  • Identify the optimal placement for security messaging across key screens

Map User Journeys with Card Sorting

Card sorting helps brands understand how users organize concepts. Reducing confusion helps your financial brand appear more trustworthy, quickly:

  • Use open card sorts to understand how users naturally categorize security and privacy features
  • Identify terminology that resonates with users' perceptions around security

Qualitative Insights Through Targeted Questions

Gathering qualitative data through strategically placed questions allows financial institutions to collect rich, timely insights about how much their customers trust their brand:

  • Ask open ended questions about trust concerns at key moments in the testing process
  • Gather specific feedback on security terminology understanding and recognition
  • Capture emotional responses to different trust indicators

What Makes a Financial Brand Look Trustworthy?

Visual Consistency and Professional Polish

When someone opens your financial app or website, they're making snap judgments about whether they can trust you with their money. It happens in milliseconds, and a lot of that decision comes down to how polished and consistent everything looks. Clean, consistent design sends that signal of stability and attention to detail that people expect when money's involved.

To achieve this, develop and rigorously apply a solid design system across all digital touchpoints. Fonts, colors, button styles, and spacing all need to be consistent across every page and interaction. Even small inconsistencies can make people subconsciously lose confidence.

Making Security Visible

Unlike walking into a bank where you can see the vault and security cameras, digital security happens behind the scenes. Users can't see all the protection you've built in unless you make a point of showing them.

Highlighting your security measures in ways that feel reassuring rather than overwhelming gives people that same sense of "my money is safe here" that they'd get from seeing a bank's physical security.

From a design perspective, apply this thinking to elements like:

  • Real time login notifications
  • Transaction verification steps
  • Clear encryption indicators
  • Transparent data usage explanations
  • Session timeout warnings

You can test the success of these design elements through preference testing, where you can compare different approaches to security visualization to determine which elements most effectively communicate trust without creating anxiety.

Making Complex Language Simple

Financial terminology is naturally complex, but your interface content doesn't have to be. Clear, straightforward language builds trust, so it’s important to develop a content strategy that:

  • Explains unavoidable complex terms contextually
  • Replaces jargon with plain language
  • Provides proactive guidance before errors occur
  • Uses positive, confident messaging around security features

You can test your language and navigation elements by using tree testing to evaluate user understanding of different terminology, measuring success rates for finding information using different labeling options.
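
To make this concrete, here is a minimal sketch in Python of how success rates for the same findability task could be compared across two labeling options. The task, the labels and the results below are invented for illustration only – they aren't output from any particular tool.

```python
# Hypothetical tree-test outcomes (True = participant found the right item)
# for the same task run against two candidate labels. All data is invented.
results = {
    "Fees & charges": [True, True, False, True, True, True, False, True],
    "Pricing disclosures": [True, False, False, True, False, True, False, False],
}

for label, outcomes in results.items():
    success_rate = sum(outcomes) / len(outcomes)
    print(f"{label}: {success_rate:.0%} success across {len(outcomes)} participants")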

Create an Ongoing Trust Measurement Program

A user research platform enables financial institutions to implement ongoing trust measurement across the product lifecycle:

Establish Trust Benchmarks

Use UX research tools to establish baseline metrics for measuring user trust:

  • Findability scores for security features
  • User reported confidence ratings
  • Success rates for security related tasks
  • Terminology comprehension levels

Validate Design Updates

Before implementing changes to critical elements, use quick tests to validate designs:

  • Compare current vs. proposed designs with prototype testing
  • Measure findability improvements with tree testing
  • Evaluate usability through first-click testing

Monitor Trust Metrics Over Time

Create a dashboard of trust metrics that can be tracked regularly (a rough sketch of how these could be pulled together follows this list):

  • Task success rates for security related activities
  • Time-to-completion for verification processes
  • Confidence ratings at key security touchpoints
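
As a rough illustration only – the task name, fields and numbers below are invented, not pulled from any particular analytics setup – per-session records could be rolled up into these dashboard metrics with something as simple as:

```python
from statistics import mean

# Invented example records; in practice these would come from your usability
# testing sessions and post-task confidence questions.
sessions = [
    {"task": "enable 2FA", "success": True, "seconds": 94, "confidence": 4},
    {"task": "enable 2FA", "success": False, "seconds": 210, "confidence": 2},
    {"task": "enable 2FA", "success": True, "seconds": 120, "confidence": 5},
]

success_rate = mean(1 if s["success"] else 0 for s in sessions)
avg_time = mean(s["seconds"] for s in sessions)
avg_confidence = mean(s["confidence"] for s in sessions)

print(f"Task success rate: {success_rate:.0%}")
print(f"Average time-to-completion: {avg_time:.0f}s")
print(f"Average confidence rating: {avg_confidence:.1f}/5")
```

Tracking the same handful of numbers after each research round makes it easy to see whether trust-related changes are actually moving in the right direction.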

Cross-Functional Collaboration to Improve Trust

While UX designers can significantly impact brand credibility, remember that trust is earned across the entire customer experience:

  • Product teams ensure feature promises align with actual capabilities
  • Security teams translate complex security measures into user-friendly experiences
  • Marketing ensures brand promises align with the actual user experience
  • Customer service supports customers when trust issues arise

Trust as a Competitive Advantage

In an industry where products and services can often seem interchangeable to consumers, trust becomes a powerful differentiator. By placing trust at the center of your design philosophy and using comprehensive user research to measure and improve trust metrics, you're not just improving user experience, you're creating a foundation for lasting customer relationships in an industry where loyalty is increasingly rare.

The most successful financial institutions of the future won't necessarily be those with the most features or the slickest interfaces, but those that have earned and maintained user trust through thoughtful UX design built on a foundation of deep user research and continuous improvement.


5 tips for running an effective usability test

Usability testing is one of the best ways to measure how easy and intuitive something is to use by testing it with real people. You can read about the basics of usability testing here.

Earlier this year, a small team within Optimal Workshop completely redesigned the company blog. More than anything, we wanted to create something that was user-friendly for our readers and would give them a reason to return. I was part of that team, and we ran numerous sessions interviewing regular readers as well as people unfamiliar with our blog. We also ran card sorts, tree tests and other studies to find out all we could about how people search for UX content. Unsurprisingly, one of the most valuable activities we did was usability testing – sitting down with representative users and watching them as they worked through a series of tasks we provided. We asked general questions like “Where would you go to find information about card sorting”, and we also observed them as they searched through our website for learning content.

By stripping away any barriers between ourselves and our users and observing them as they navigated through our website and learning resources, as well as those of other companies, we were able to build a blog with these people’s behaviors and motivations in mind.

Usability testing is an invaluable research method, and every user researcher should be able to run sessions effectively. Here are 5 tips for doing so, in no particular order.

1. Clarify your goals with stakeholders

Never go into a usability test blind. Before you ever sit down with a participant, make sure you know exactly what you want to get out of the session by writing down your research goals. This will help to keep you focused, essentially giving you a guiding light that you can refer back to as you go about the various logistical tasks of your research. But you also need to take this a step further. It’s important to make sure that the people who will utilize the results of your research – your stakeholders – have an opportunity to give you their input on the goals as early as possible.

If you’re running usability tests with the aim of creating marketing personas, for example, meet with your organization’s marketing team and figure out the types of information they need to create these personas. In some cases, it’s also helpful to clarify how you plan to gather this data, which can involve explaining some of the techniques you’re going to use.

Lastly, find out how your stakeholders plan to use your findings. If there are a lot of objectives, organize your usability test so you ask the most important questions first. That way, if you end up going off track or you run out of time you’ll have already gathered the most important data for your stakeholders.

2. Be flexible with your questions

A list of pre-prepared questions will help significantly when it comes time to sit down and run your usability testing sessions. But while a list is essential, sometimes it can also pay to ‘follow your nose’ and steer the conversation in a (potentially) more fruitful direction.

How many times have you been having a conversation with a friend over a drink or dinner, only for you both to completely lose track of time and find yourselves discussing something completely unrelated? While it’s not good practice to let your usability testing sessions get off track to this extent, you can surface some very interesting insights by paying close attention to a user’s behavior and answers during a testing session and following interesting leads.

Ideally, and with enough practice, you’ll be able to answer your core (prepared) questions and ask a number of other questions that spring to mind during the session. This is a skill that takes time to master, however.

3. Write a script for your sessions

While a usability test script may sound like a fancy name for your research questions, it’s actually a document that’s much more comprehensive. If you prepare it correctly (we’ll explain how below), you’ll have a document that you can use to capture in-depth insights from your participants.

Here are some of the key things to keep in mind when putting together your script:

  • Write a friendly introduction – It may sound obvious, but taking the time to come up with a friendly, warm introduction will get your sessions off to a much better start. The bonus of writing it down is that you’re far less likely to forget it!
  • Ask to record the session – It’s important to record your session (whether through video or audio), as you’ll want to go back later and analyze any details you may have missed. This means asking for explicit permission to record participants. In addition to making them feel more comfortable, it’s just good practice to do so.
  • Allocate time for the basics – Don’t dive into the complex questions first; use the first few minutes to gather basic data. This could be things like where they work and their familiarity with your organization and/or product.
  • Encourage them to explain their thought process – “I’d like you to explain what you’re doing as you make your way through the task”. This simple request will give you an opportunity to ask follow-up questions that you otherwise may not have thought to ask.
  • Let participants know that they’re not being tested – Whenever a participant steps into the room for a test, they’re naturally going to feel like they’re being tested. Explain that you’re testing the product, not them. It’s also helpful to let them know that there are no right or wrong answers. This is an important step if you want to keep them relaxed.

It’s often easiest to have a document with your script printed out and ready to go for each usability test.

4. Take advantage of software

You’d never see a builder without a toolbox full of a useful assortment of tools. Likewise, software can make the life of a user researcher that much easier. The paper-based ways of recording information are still perfectly valid, but introducing custom tools can make both the logistics of user research and the actual sessions themselves much easier to manage.

Take a tool like Calendly, for example. This is a powerful piece of scheduling software that almost completely takes over the endless back and forth of scheduling usability tests. Calendly acts as a middle man between you and your participants, allowing you to set the times you’re free to host usability tests, and then allowing participants to choose a session that suits them from these times.

Our very own Reframer makes the task of running usability tests and analyzing insights that much easier. During your sessions, you can use Reframer to take comprehensive notes and apply tags like “positive” or “struggled” to different observations. Then, after you’ve concluded your tests, Reframer’s analysis function will help you understand wider themes that are present across your participants.

There’s another benefit to using a tool like Reframer. Keeping all of your notes in one place means you can easily pull up data from past research sessions whenever you need to.
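
As a generic illustration of the underlying idea – this is not Reframer's data format or API, just an invented sketch – tagged observations can be rolled up into theme counts across participants like so:

```python
from collections import Counter

# Invented observation notes with tags, as a researcher or notetaker might
# capture them during sessions.
observations = [
    {"participant": "P1", "note": "Hesitated on the pricing page", "tags": ["struggled", "pricing"]},
    {"participant": "P2", "note": "Found checkout immediately", "tags": ["positive", "checkout"]},
    {"participant": "P3", "note": "Couldn't locate pricing info", "tags": ["struggled", "pricing"]},
]

tag_counts = Counter(tag for obs in observations for tag in obs["tags"])
participants_per_tag = {
    tag: {obs["participant"] for obs in observations if tag in obs["tags"]}
    for tag in tag_counts
}

for tag, count in tag_counts.most_common():
    print(f"{tag}: {count} observations from {len(participants_per_tag[tag])} participants")
```

Seeing how many different participants hit the same tag is what turns a pile of session notes into themes you can act on.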

5. Involve others

Usability tests (and user interviews, for that matter) are a great opportunity to open up research to your wider organization. Whether it’s stakeholders, other members of your immediate team or even members of entirely different departments, giving them the chance to sit down with users will show them how their products are really being used. If nothing else, these sessions will help those within your organization build empathy with the people they’re building products for.

There are quite a few ways to bring others in, such as:

  • To help you set up the research – This can be a helpful exercise for both you (the researcher) and the people you’re bringing in. Collaborate on the overarching research objectives, ask them what types of results they’d like to see and what sort of tasks they think could be used to gather these results.
  • As notetakers – Having a dedicated notetaker will make your life as a researcher significantly easier. This means you’ll have someone to record any interesting observations while you focus on running the session. Just let them know what types of notes you’d like to see.
  • To help you analyze the data – Once you’ve wrapped up your usability testing sessions, bring others in to help analyze the findings. There’s a good chance that an outside perspective will catch something you may miss. Also, if you’re bringing stakeholders into the analysis stage, they'll get a clearer picture of what it means and where the data came from.

There are myriad other tips and best practices to keep in mind when usability testing, many of which we cover in our introductory page. Important considerations include taking good quality notes, carefully managing participants during the session (not giving them too much guidance) and remaining neutral throughout when answering their questions. If you feel like we’ve missed any really important points, feel free to leave a comment!
