July 12, 2023
3 min

5 key areas for effective ResearchOPs

Simply put, ResearchOps is about making sure your research operations are robust, well thought through, and well managed.

Having systems and processes around your UX research keeps everyone (and everything) organized, and makes user research projects quicker to start and more streamlined to run. Robust sharing, socializing, and knowledge storage means that everyone across the organization can understand research insights and findings, put them to use, and, even better, find them when they need them.

Using the same tools across the team allows researchers to learn from each other and from previous research projects, and to compare apples with apples, with everyone included. It brings the team together across tools, research, and results.

We go into more detail in our ebook ResearchOps Checklist about exactly what you can do to make sure your research team is running at its best. Let’s take a quick look at 5 ways to ensure you have the grounding for a successful ResearchOps team.

1. Knowledge management 📚

What do you do with all of the insights and findings from a user research project? How do you store them, how do you manage them, and how do you share and socialize them?

Having processes in place that manage this knowledge is important to the longevity of your research. From filing to sharing across platforms, it all needs to be standardized so everyone can search, find and share.

2. Guidelines and process templates 📝

Providing a framework for how to run research projects is important. Building on the knowledge base from previous research can improve efficiency and cut down on groundwork and administration, making research projects quicker and more streamlined to get underway.

3. Governance 🏛

User research is all about people, real people. It is incredibly important that any research be legal, safe, and ethical. Having effective governance covered is vital.

4. Tool stack 🛠

Every research team needs a ‘toolbox’ that they can use whenever they need to run card sorts, tree tests, usability tests, user interviews, and more. But which software and tools to use?

Making sure that the team is using the same tools also helps with future research projects, learning from previous projects, and ensuring that the information is owned and run by the organization (rather than whichever individuals prefer). Reduce logins and password shares, and improve security with organization-wide tools and platforms. 

5. Recruitment 👱🏻👩👩🏻👧🏽👧🏾

Key to great UX research is the ability to recruit quality participants - fast! Having strong processes in place for screening, scheduling, sampling, incentivizing, and managing participants needs to be top of the list when organizing the team.

Wrap Up 💥

None of these ResearchOps processes is independent of the others, and they don’t simply flow from one to the next. Together they wrap around the research team, creating processes, systems, and tools built to serve it, allowing researchers to focus on the job of doing great research and generating the insights and findings that develop the very best user experience.

After all, we are creating user experiences that keep our users engaged and coming back. Why not look at the team’s user experience and make the most of that, freeing time and space to socialize and share findings across the organization?

Author: Optimal Workshop

Radical Collaboration: how teamwork really can make the dream work

Natalie and Lulu have forged a unique team culture that focuses on positive outputs (and outcomes) for their app’s growing user base. In doing so, they turned the traditional design approach on its head and created a dynamic and supportive team. 

Natalie, Director of Design at Hatch, and Lulu, UX Design Specialist, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, on their concept of “radical collaboration”.

In their talk, Nat and Lulu share their experience of growing a small app into a big player in the finance sector, and their unique approach to teamwork and culture which helped achieve it.

Background on Natalie Ferguson and Lulu Pachuau

Over the last two decades, Lulu and Nat have delivered exceptional customer experiences for too many organizations to count. After Nat co-founded Hatch, she begged Lulu to join her on their audacious mission: To supercharge wealth building in NZ. Together, they created a design and product culture that inspired 180,000 Kiwi investors to join in just 4 years.

Contact Details:

Email: natalie@sixfold.co.nz

LinkedIn: https://www.linkedin.com/in/natalieferguson/ and https://www.linkedin.com/in/lulupach/

Radical Collaboration - How teamwork makes the dream work 💪💪💪

Nat and Lulu discuss how they nurtured a team culture of “radical collaboration” when growing the hugely popular app Hatch, based in New Zealand. Hatch allows everyday New Zealanders to quickly and easily trade in the U.S. share market. 

The beginning of the COVID pandemic spelled huge growth for Hatch and caused significant design challenges for the product. This growth meant that the app had to grow from a baby startup to one that could operate at scale - virtually overnight. 

In navigating this challenge, Nat and Lulu coined the term radical collaboration, which aims to “dismantle organizational walls and supercharge what teams achieve”. Radical collaboration has six key pillars, which they discuss alongside their experience at Hatch.

Pillar #1: When you live and breathe your North star

Listening to hundreds of their customers’ stories, combined with their own personal experiences with money, compelled Lulu and Nat to change how their users view money. And so, “Grow the wealth of New Zealanders” became a powerful mission statement, or North Star, for Hatch. The mission was to give people the confidence and the ability to live their own lives with financial freedom and control. Nat and Lulu express the importance of truly believing in the mission of your product, and how this can become a guiding light for any team. 

Pillar #2: When you trust each other so much, you’re happy to give up control

As Hatch grew rapidly, trusting each other became more and more important. Nat and Lulu state that sometimes you need to take a step back and stop fueling growth for growth’s sake. It was at this point that Nat asked Lulu to join the team, and Nat’s first request was for Lulu to be super critical about the product design to date - no feedback was out of bounds. Letting go, feeling uncomfortable, and trusting your team can be difficult, but sometimes it’s what you need in order to drag yourself out of status quo design. This resulted in a brief hiatus from frantic delivery to take stock and reprioritize what was important - something that can be difficult without heavy doses of trust!

Pillar #3: When everyone wears all the hats

During their journey, the team at Hatch heard lots of stories from their users. Many of these stories were heard during “Hatcheversery Calls”, where team members would call users on their sign-up anniversary to chat about their experience with the app. Some of these calls were inspiring, insightful, and heartwarming.

Everyone at Hatch made these calls: designers, writers, customer support, engineers, and even the CEO. Speaking to strangers in this way was a challenge for some, especially since it was common to field technical questions about the business. Nevertheless, asking staff to wear many hats like this turned the entire team into researchers and analysts. By pushing themselves outside their comfort zones, team members forced each other to see the whole picture of the business, not just their own little piece.

Pillar #4: When you do what’s right, not what’s glam

In an increasingly competitive industry, designers and developers are often tempted to consistently deliver new and exciting features. In response to rapid growth, rather than adding more features to the app, Lulu and Nat made a conscious effort to really listen to their customers to understand what problems they needed solving. 

As it turned out, filing overseas tax returns was a significant and common problem for their customers - it was difficult and expensive. So, the team at Hatch devised a tax solution. This solution was developed by the entire team, with almost no tax specialists involved until the very end! This process was far from glamorous and it often fell outside of standard job descriptions. However, the team eventually succeeded in simplifying a notoriously difficult process and saved their customers a massive headache.

Pillar #5: When you own the outcome, not your output

Over time, Hatch’s user base changed from primarily confident, seasoned investors to first-time investors. This new user group was typically scared of investing and often felt it was something only wealthy people did.

At this point, Hatch felt it was necessary to take a step back from delivering updates to take stock of their new position. This meant deeply understanding their customers’ journey from signing up, to making their first trade. Once this was intimately understood, the team delivered a comprehensive onboarding process which increased the sign-up conversion rate by 10%!

Pillar #6: When you’re relentlessly committed to making it work

Nat and Lulu describe a moment when Allbirds wanted to work with Hatch to let ordinary New Zealanders be involved in its IPO launch on the New York Stock Exchange. This task faced numerous tax and trade law challenges, and offering the service seemed like yet another insurmountable task. The team at Hatch nearly gave up several times during this project, but everyone was determined to get the feature across the line, and they did. As a result, New Zealanders were among the few regular investors outside the U.S. able to take part in Allbirds’ IPO.

Why it matters 💥

Over four years, Hatch grew to 180,000 users who collectively invested over $1bn. Nat and Lulu’s success underscores the critical role of teamwork and collaboration in achieving exceptional user experiences. Product teams should remember that in the rapidly evolving tech industry, it's not just about delivering the latest features; it's about fostering a positive and supportive team culture that buys into the bigger picture.

The Hatch team grew to be more than team members and technical experts. They grew in confidence and appreciated every moving part of the business. Product teams can draw inspiration from Hatch's journey, where designers, writers, engineers, and even the CEO actively engaged with users, challenged traditional design decisions, and prioritized solving actual user problems. This approach led to better, more user-centric outcomes and a deep understanding of the end-to-end user experience.

Most importantly, through the good times and tough, the team grew to trust each other. The mission weaved its way through each member of the team, which ultimately manifested in positive outcomes for the user and the business.

Nat and Lulu’s concept of radical collaboration led to several positive outcomes for Hatch:

  • It changed the way they did business. Information was no longer held in the minds of a few individuals – instead, it was shared. People were able to step into other people's roles seamlessly. 
  • Hatch achieved better results faster by focusing on the end-to-end experience of the app, rather than by adding successive features. 
  • The team became more nimble – potential design/development issues were anticipated earlier because everyone knew what the downstream impacts of a decision would be.

Over the next week, Lulu and Nat encourage designers and researchers to get outside of their comfort zone and:

  • Visit the customer support team
  • Pick up the phone and call a customer
  • Challenge status quo design decisions. Ask, does this thing solve an end-user problem?


Moderated vs unmoderated research: which approach is best?

Knowing and understanding why and how your users use your product is invaluable for getting to the nitty gritty of usability. Delving deep into motivation with probing questions, or skimming the surface looking for issues, can be equally informative.

Put super simply, usability testing is literally testing how usable your product is for your users. If your product isn’t usable, users often won’t complete their task, let alone come back for more. No one wants to lose users before they even get started. Usability testing gets under their skin and into the how, why, and what they want (and equally what they don’t).

As we have been getting used to video calling regularly and using the internet for interactions, usability testing has followed suit. Being able to access participants remotely has allowed us to diversify the participant pool by not being restricted to those that are close enough to be in-person. This has also allowed an increase in the number of participants per test, as it becomes more cost-effective to perform remote usability testing.

But if we’re remote, does this mean testing can’t be moderated? No. With modern technology, remote testing can still be facilitated and moderated. So which method is best: moderated or unmoderated?

What is moderated remote research testing?

In traditional usability testing, moderated research is done in person, with the moderator and the participant in the same physical space. This, of course, allows for conversation and observational behavioral monitoring, meaning the moderator can note not only what the participant answers but how, and even make note of body language, surroundings, and other influencing factors.

This has also meant that traditionally, the participant pool has been limited to those who can be available (and close enough) to make it into a facility for testing. Being in person has also meant these tests take time (and money) to perform.

As internet speeds and video calling have improved, a world of opportunities has opened up for usability testing to be done remotely. Moderators can now set up testing remotely and ‘dial in’ to observe participants wherever they are, and potentially even run focus groups or other group-format testing over the internet.

Pros of moderated remote research testing:

- In-depth insights: gather rich insight through back-and-forth conversation and observation of participants.

- Follow-up questions: don’t underestimate the value of being able to ask questions throughout the test, and to follow up in the moment.

- Observational monitoring: noticing and noting the environment and how participants behave can give more insight into how or why they make a decision.

- Quick: remote testing can be quicker to start, recruit for, and complete than in-person testing, because you only need to set up a time to connect online rather than coordinate travel.

- Location (local and/or international): testing online removes the reliance on participants being physically present, broadening your pool to participants within your country or around the world.

Cons of moderated remote research testing:

- Time-consuming: being present at each test takes time, as does analyzing the data and insights generated. But remember, this is quality data.

- Limited interactions: with any remote testing, there is only so much you can observe or understand through a computer screen. It can be difficult to grasp all the factors that might be influencing your participants.

What is unmoderated remote research testing?

In its most simple sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of having a facilitator guide participants through the test, participants are left to complete the testing by themselves and in their own time. For the most part, everything else stays the same. 

Removing the moderator means there isn’t anyone to respond to queries or issues in the moment. This can delay or influence participants, or even lead them to disengage or not complete the test. Unmoderated research suits simple, direct tests with clear instructions and no room for inference.

Pros of unmoderated remote research testing:

- Speed and turnaround: with no need to schedule meetings with each and every participant, unmoderated usability testing is usually much faster to initiate and complete.

- Size of study (participant numbers): unmoderated usability testing allows you to collect feedback from dozens or even hundreds of users at the same time.

- Location (local and/or international): testing online removes the reliance on participants being physically present, which broadens your participant pool. With unmoderated testing, participants can literally be anywhere, completing the test in their own time.

Cons of unmoderated remote research testing:

- Follow-up questions: as your participants are working on their own and in their own time, you can’t facilitate and ask questions in the moment. You may only be able to ask limited follow-up questions.

- Products need to be simple to use: unmoderated testing doesn’t suit prototypes or any product or site that needs guidance.

- Low participant support: without a moderator, any issues with the test or the product can’t be picked up immediately and could influence the output of the test.

When should you do moderated vs unmoderated remote usability testing?

Both moderated and unmoderated remote usability testing have their use and place in user research. It really depends on the question you are asking and what you want to know.

Moderated testing allows you to gather in-depth insights, ask follow-up questions, and engage participants in the moment. The facilitator can guide participants, dig deeper, or ask why at certain points. Because participants aren’t on their own, this method doesn’t need as much careful setup, and while it’s all done online, it still allows connection and conversation. It suits more investigative research: looking at why users might prefer one prototype to another, or tree testing a new website navigation to understand where participants get lost and querying why they made certain choices.

Unmoderated testing, on the other hand, literally leaves participants to it. This method needs very careful planning and explanation upfront; the test needs to be able to be set up and run without a moderator. It lends itself to direct answers to direct questions, such as a card sort on a website to understand how your users sort information, or a first-click test to see how and where users click on a new website.

Planning your next user test? Here’s how to choose the right method

With advances in (and acceptance of) technology and video calling, we can expand our pool of participants across the globe, and with it our understanding of users’ experiences. Remote usability testing is a great option when you want to gather information from users in the real world. Depending on your question, moderated or unmoderated testing will suit your study. As with all user testing, being prepared and planning ahead will let you make the most of your test.


Figma + Optimal: Design, Test, Iterate Faster

Figma has long been the go-to tool for UI/UX designers, known for its intuitive interface and real-time collaboration. In fact, over 95% of Fortune 500 companies rely on Figma, and 13 million monthly active users trust it to design and prototype digital experiences.

If you’re already designing in Figma, integrating with Optimal can help to validate your ideas early, reduce costly mistakes, and deliver experiences users actually want.

The Hidden Cost of Skipping Design Validation

Validating designs before development and catching usability issues early has a measurable impact on both users and the business.

Figma + Optimal: Prototype Testing and Design Validation

Instead of waiting for post-launch analytics or expensive redesigns, Optimal lets you test your Figma prototypes with real users in hours, not weeks. Get quantitative data, watch recordings, analyze heatmaps, and actually see where users struggle, all before a single line of code is written.

Here’s a look into 4 practical ways teams use Figma and Optimal together.

4 Ways to Test Figma Designs with Optimal

1. Preference Testing: Let Users Pick the Winner

Ever had a debate with your team about which design direction to take? Let data decide.

Here's how:

  • Create a Figma frame with two designs side-by-side (think: two homepage variations, competing button styles, different navigation approaches)
  • Copy your Figma link and drop it into an Optimal first-click test
  • Ask participants: "Which design do you prefer?"
  • Watch the results roll in with heatmaps showing exactly where users clicked

2. Concept Testing: Does Your Idea Actually Make Sense?

You've got a bold new concept. It makes perfect sense to you. But will users get it?

The process:

  • Build wireframes or mockups in Figma (they don't need to be pixel-perfect)
  • Import your Figma link into an Optimal first-click or prototype test
  • Create tasks like “Click the option that best matches what you’re trying to do.” or “Click where you would sign up.”
  • Analyze whether users successfully understand and navigate your concept

3. Prototype Testing: Find the Friction Before Development

You've built a clickable prototype with multiple screens and interactions. It looks polished. But does it actually work for users?

Step-by-step:

  • Build a complete interactive prototype in Figma, ensuring all frames and flows are complete before importing into Optimal
  • Copy your Figma prototype URL (works even with password-protected links)
  • Paste it into an Optimal prototype test
  • Define realistic tasks: "You want to buy running shoes under $100. Complete the purchase."
  • Watch video recordings and analyze usability metrics, clickmaps, misclicks, successes/failures, and heatmaps

What you'll discover might surprise you. Users will:

  • Click on things you never intended to be clickable
  • Miss obvious CTAs you thought were perfectly placed
  • Get lost in navigation that seemed intuitive to your team
  • Abandon tasks at friction points you didn't know existed

4. AI Prototype Testing: Validate AI-Generated Designs

The rise of AI design tools like Figma Make has changed the game. You can now generate a functional prototype from a text prompt in minutes. But just because AI can create it doesn't mean users can use it.

Quick workflow:

  • Generate a prototype using Figma Make
  • Copy the URL and drop it into an Optimal live site test
  • Add your testing tasks
  • Review recordings to spot usability issues

This is perfect for rapid experimentation. 

Getting Started Is Simple

  1. Prep your Figma file - Have a prototype or design ready
  2. Copy the link - Grab your Figma share URL
  3. Create your test - Choose first-click, prototype test, or live site test in Optimal
  4. Paste and configure - Add your Figma URL and write your test tasks
  5. Launch - Use your own participants or tap into Optimal's panel or Managed Recruitment services
  6. Analyze - Review results and iterate

Launch Designs Users Love

Figma gives you the power to design and prototype rapidly, while Optimal gives you the insights to make sure those designs actually work for real users. Together, they create a workflow built on real insights, not guesswork.

By testing early and often, teams can reduce risk, build confidence in their designs, and move into development knowing their work has already been validated by users. Gather insights quickly, collaborate more effectively, and keep projects moving forward with evidence-backed decisions.

Ready to validate your next Figma prototype? Use Optimal as part of your workflow and start testing with real users today.
