March 12, 2025

Efficient Research: Maximizing the ROI of Understanding Your Customers

Introduction

User research is invaluable, but in fast-paced environments, researchers often struggle with tight deadlines, limited resources, and the need to prove their impact. In our recent UX Insider webinar, Weidan Li, Senior UX Researcher at Seek, shared insights on Efficient Research—an approach that optimizes Speed, Quality, and Impact to maximize the return on investment (ROI) of understanding customers.

At the heart of this approach is the Efficient Research Framework, which balances these three critical factors:

  • Speed – Conducting research quickly without sacrificing key insights.
  • Quality – Ensuring rigor and reliability in findings.
  • Impact – Making sure research leads to meaningful business and product changes.

Within this framework, Weidan outlined nine tactics that help UX researchers work more effectively. Let’s dive in.

1. Time Allocation: Invest in What Matters Most

Not all research requires the same level of depth. Efficient researchers prioritize their time by categorizing projects based on urgency and impact:

  • High-stakes decisions (e.g., launching a new product) require deep research.
  • Routine optimizations (e.g., tweaking UI elements) can rely on quick testing methods.
  • Low-impact changes may not need research at all.

By allocating time wisely, researchers can avoid spending weeks on minor issues while ensuring critical decisions are well-informed.
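
To make the triage concrete, here’s a minimal sketch in Python (my own illustration, not a formula from the webinar) mapping a project’s stakes to the depth of research it warrants. The categories and suggested methods are hypothetical:

    def research_depth(business_impact: str, urgency: str) -> str:
        """Suggest a research depth for a project; categories are illustrative."""
        if business_impact == "high":
            return "deep research (interviews, diary studies, triangulation)"
        if business_impact == "medium" or urgency == "high":
            return "quick testing (unmoderated usability test, short survey)"
        return "no formal research needed"

    print(research_depth("high", "low"))  # deep research (...)
    print(research_depth("low", "low"))   # no formal research needed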

2. Assistance of AI: Let Technology Handle the Heavy Lifting

AI is transforming UX research, enabling faster and more scalable insights. Weidan suggests using AI to:

  • Automate data analysis – AI can quickly analyze survey responses, transcripts, and usability test results.
  • Generate research summaries – Tools like ChatGPT can help synthesize findings into digestible insights.
  • Speed up recruitment – AI-powered platforms can help find and screen participants efficiently.

While AI can’t replace human judgment, it can free up researchers to focus on higher-value tasks like interpreting results and influencing strategy.
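
As one concrete example of the summarization point, here’s a minimal sketch assuming the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in your environment; the model name and prompt are my own assumptions, not tools endorsed in the webinar:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_transcript(transcript: str) -> str:
        """Ask an LLM to pull themes and pain points out of an interview transcript."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute your own
            messages=[
                {"role": "system", "content": (
                    "You are a UX research assistant. Summarize the key themes, "
                    "pain points, and representative quotes from this transcript.")},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    # summary = summarize_transcript(open("session_01.txt").read())

As with any AI-generated synthesis, treat the output as a first draft and check it against the raw data before sharing it with stakeholders.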

3. Collaboration: Make Research a Team Sport

Research has a greater impact when it’s embedded into the product development process. Weidan emphasizes:

  • Co-creating research plans with designers, PMs, and engineers to align on priorities.
  • Involving stakeholders in synthesis sessions so insights don’t sit in a report.
  • Encouraging non-researchers to run lightweight studies, such as A/B tests or quick usability checks.

When research is shared and collaborative, it leads to faster adoption of insights and stronger decision-making.

4. Prioritization: Focus on the Right Questions

With limited resources, researchers must choose their battles wisely. Weidan recommends using a prioritization framework to assess:

  • Business impact – Will this research influence a high-stakes decision?
  • User impact – Does it address a major pain point?
  • Feasibility – Can we conduct this research quickly and effectively?

By filtering out low-priority projects, researchers can avoid research for research’s sake and focus on what truly drives change.
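
One lightweight way to operationalize this filter is a weighted score per research request; the 1-5 ratings and the 2/2/1 weighting below are my own illustration, not part of Weidan’s framework:

    def priority_score(business_impact: int, user_impact: int, feasibility: int) -> int:
        """Each criterion rated 1-5; the 2/2/1 weighting is an assumption."""
        return 2 * business_impact + 2 * user_impact + feasibility

    requests = {
        "checkout redesign": (5, 4, 3),
        "button colour tweak": (1, 1, 5),
        "onboarding drop-off": (4, 5, 4),
    }

    for name, scores in sorted(requests.items(),
                               key=lambda kv: priority_score(*kv[1]), reverse=True):
        print(f"{priority_score(*scores):>2}  {name}")

Anything that falls below a threshold you set goes on the “no research needed” pile.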

5. Depth of Understanding: Go Beyond Surface-Level Insights

Speed is important, but efficient research isn’t about cutting corners. Weidan stresses that even quick studies should provide a deep understanding of users by:

  • Asking why, not just what – Observing behavior is useful, but uncovering motivations is key.
  • Using triangulation – Combining methods (e.g., usability tests + surveys) to validate findings.
  • Revisiting past research – Leveraging existing insights instead of starting from scratch.

Balancing speed with depth ensures research is not just fast, but meaningful.

6. Anticipation: Stay Ahead of Research Needs

Proactive researchers don’t wait for stakeholders to request studies—they anticipate needs and set up research ahead of time. This means:

  • Building a research roadmap that aligns with upcoming product decisions.
  • Running continuous discovery research so teams have a backlog of insights to pull from.
  • Creating self-serve research repositories where teams can find relevant past studies.

By anticipating research needs, UX teams can reduce last-minute requests and deliver insights exactly when they’re needed.

7. Justification of Methodology: Explain Why Your Approach Works

Stakeholders may question research methods, especially when they seem time-consuming or expensive. Weidan highlights the importance of educating teams on why specific methods are used:

  • Clearly explain why qualitative research is needed when stakeholders push for just numbers.
  • Show real-world examples of how past research has led to business success.
  • Provide a trade-off analysis (e.g., “This method is faster but provides less depth”) to help teams make informed choices.

A well-justified approach ensures research is respected and acted upon.

8. Individual Engagement: Tailor Research Communication to Your Audience

Not all stakeholders consume research the same way. Weidan recommends adapting insights to fit different audiences:

  • Executives – Focus on high-level impact and key takeaways.
  • Product teams – Provide actionable recommendations tied to specific features.
  • Designers & Engineers – Share usability findings with video clips or screenshots.

By delivering insights in the right format, researchers increase the likelihood of stakeholder buy-in and action.

9. Business Actions: Ensure Research Leads to Real Change

The ultimate goal of research is not just understanding users, but driving business decisions. To ensure research leads to action:

  • Follow up on implementation – Track whether teams apply the insights.
  • Tie findings to key metrics – Show how research affects conversion rates, retention, or engagement.
  • Advocate for iterative research – Encourage teams to re-test and refine based on new data.

Research is most valuable when it translates into real business outcomes.

Final Thoughts: Research That Moves the Needle

Efficient research is not just about doing more, faster—it’s about balancing speed, quality, and impact to maximize its influence. Weidan’s nine tactics help UX researchers work smarter by:


✔️  Prioritizing high-impact work
✔️  Leveraging AI and collaboration
✔️  Communicating research in a way that drives action

By adopting these strategies, UX teams can ensure their research is not just insightful, but transformational.

Watch the full webinar here


Related articles


Tips for recruiting quality research participants

If there’s one universal truth in user research, it’s that at some point you’re going to need to find people to actually take part in your studies, whether that’s a large number of participants for quantitative research or a select few for in-depth, in-person user interviews. Finding the right people (and the right number of them) can be a hurdle.

With the right strategy, you can source exactly the right participants for your next research project.

We share a practical step-by-step guide on how to find participants for user experience research.

The challenges of user research recruiting 🏋️

It has to be acknowledged that there are challenges when recruiting research participants. You may recognize some of these:

  • There are many channels and methods you can use to find participants, and different channels will work better for different projects.
  • Repeatedly using the same channels and methods will result in diminishing returns (i.e. burning out participants).
  • It’s a lengthy and complex process, and some projects don’t have the luxury of time.
  • Offering the right incentives and distributing them is time-consuming.
  • It’s hard to manage participants during long-term or recurring studies, such as customer research projects.

We’ll simplify the process, talk about who the right participants are, and unpack some of the best ways to find them. Removing these blocks can be the easiest way to move forward.

Who are the right participants for different types of research? 🤔

1. The first step to a successful participant recruitment strategy is clarifying the goals of your user research and which methods you intend to use. Ask yourself:

  • What is the purpose of our research?
  • How do we plan to understand that?

2. Define who your ideal research participant is. Who is going to have the answers to your questions?

3. Work out your research recruitment strategy. That starts by understanding the differences between recruiting for qualitative and quantitative research.

Recruiting for qualitative vs. quantitative research 🙋🏻

Quantitative research recruiting is a numbers game. For your data analysis to be meaningful and statistically significant, you need a lot of data. This means you need to do a lot of research with a lot of people. When recruiting for quantitative research, you first have to define the population (the entire group you want to study). From there, you choose a sampling method that allows you to create a sample—a randomly selected subset of the population who will participate in your study.
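
In code terms, simple random sampling (one common sampling method) is nearly a one-liner; here’s a minimal sketch using only Python’s standard library, with a made-up population:

    import random

    population = [f"user_{i}" for i in range(10_000)]  # the entire group you want to study
    sample = random.sample(population, k=200)          # random subset who will participate
    print(len(sample), sample[:3])

Real populations rarely live in a neat list, so in practice you’d sample IDs from your CRM or analytics tool, but the principle is the same.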

Qualitative recruiting involves far fewer participants, but you do need to find a selection of ‘perfect’ participants: those who fit neatly into the specific demographic, geographic, psychographic, and behavioral criteria relevant to your study. Recruiting quality participants for qualitative studies involves non-random sampling, screening, and plenty of communication.

How many participants do you need? 👱🏻👩👩🏻👧🏽👧🏾

How many participants to include in a qualitative research study is one of the most heavily discussed topics in user research circles. The short answer: in most cases, you can get away with 5 people. With 5 people, you’ll uncover most of the main issues with the thing you’re testing. Depending on your project, you might need as many as 50 participants, but each additional person adds cost in both money and time.

Quantitative research is obviously quite different. With studies like card sorts and tree tests, you need higher participant numbers to get statistically meaningful results - anywhere from 20 to 500 participants, depending again on the purpose of your test and your research budget. These studies are usually easier and quicker to run, so the cost of each additional participant is lower.
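
If you want a back-of-the-envelope check on where your study falls in that 20-500 range, the standard margin-of-error formula for a proportion is one option (my addition; the article doesn’t prescribe a formula):

    import math

    def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
        """n = z^2 * p(1-p) / e^2, rounded up; p = 0.5 is the most conservative case."""
        return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

    print(sample_size(0.10))  # 97 participants for +/-10% at 95% confidence
    print(sample_size(0.05))  # 385 participants for +/-5% at 95% confidence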

User research recruitment - step by step 👟

Let’s get into your research recruitment strategy to find the best participants for your research project. There are 5 clear steps to get you through to the research stage:

1. Identify your ideal participants

Who are they? What do they do? How old are they? Do they already use your product? Where do they live? These are all great questions to get you thinking about exactly who you need to answer your research questions. The demographic and geographic details of your participants are important to the quality of your research results.

2. Screen participants

Screening participants will weed out those who may not be suitable for your specific project. This can be as simple as asking whether participants have used a product similar to yours, or coming back to your key demographic requirements and removing anyone who doesn’t fit those criteria.
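
If your prospects arrive as a sign-up form export or spreadsheet, the screening step can be automated; here’s a minimal sketch with hypothetical fields and criteria:

    prospects = [
        {"name": "Ana",  "age": 34, "country": "NZ", "uses_similar_product": True},
        {"name": "Ben",  "age": 19, "country": "AU", "uses_similar_product": False},
        {"name": "Caro", "age": 41, "country": "NZ", "uses_similar_product": True},
    ]

    def passes_screener(p: dict) -> bool:
        # Criteria mirror step 1: demographics, geography, and product familiarity.
        return 25 <= p["age"] <= 55 and p["country"] == "NZ" and p["uses_similar_product"]

    qualified = [p for p in prospects if passes_screener(p)]
    print([p["name"] for p in qualified])  # ['Ana', 'Caro']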

3. Find prospective participants

This step is important and can be time-consuming. For qualitative research projects, you can look within your organization or ask for willing participants over social media. If you’re short on time, consider a participant recruitment service, which takes your requirements and draws on a catalog of available participants. There’s a cost involved, but the time saved can offset it. For qualitative surveys, a great option is a live intercept on your website or app that interrupts users and asks them to complete a short questionnaire.

4. Research incentives

In some cases you will need to provide incentives. This could be a prize or discount for those who complete online qualitative surveys, or a fixed sum for those who take part in longer-format quantitative studies.

5. Scheduling with participants

Once you have waded through the emails, options, and communication from your inquiries, make a list of appropriate participants. Schedule time to do the research, either in person or remotely, and be clear about expectations, how long it will take, and what the incentive to take part is.

Tips to avoid participant burnout 📛

You’ve got your participants sorted and have a great pool of people to call on. But if you keep hitting the same group of people time and time again, you will experience the law of diminishing returns. Constantly returning to the same pool of participants eventually leads to fatigue, and this will impact the quality of your research, because it’s based on interviewing the same people with the same views.

There are 2 ways to avoid this problem:

  1. Use a huge database of potential participant targets.
  2. Use a mixture of different recruitment strategies and channels.

Of course, it might be unavoidable to hit the same audience repeatedly when you’re testing your product development among your customer base.
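
One simple way to spread the load across a pool is a cooldown rule: skip anyone you’ve contacted recently. Here’s a minimal sketch, with the 90-day window an arbitrary choice:

    from datetime import date, timedelta

    last_contacted = {
        "p001": date(2025, 1, 10),
        "p002": date(2024, 9, 2),
        "p003": date(2025, 2, 28),
    }

    def eligible(pool: dict, today: date, cooldown_days: int = 90) -> list:
        """Return participant IDs whose last contact is outside the cooldown window."""
        cutoff = today - timedelta(days=cooldown_days)
        return [pid for pid, last in pool.items() if last <= cutoff]

    print(eligible(last_contacted, date(2025, 3, 12)))  # ['p002']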

Wrap up 🌯

Understanding your UX research recruitment strategy is crucial to recruiting quality participants. A clear idea of your purpose, who your ideal participants are, and how to find them takes time and experience. 

And to make life easier, you can always leave your participant recruitment to us. With a huge catalog of quality participants at your fingertips in our app, we can recruit the right people quickly.

Check out more here.


User research and agile squadification at Trade Me

Hi, I’m Martin. I work as a UX researcher at Trade Me, having left Optimal Experience (Optimal Workshop’s sister company) last year. For those of you who don’t know, Trade Me is New Zealand’s largest online auction site, which also lists real estate to buy and rent, cars to buy, job listings, travel accommodation and quite a few other things besides. Over three quarters of the population are members, and about three quarters of the Internet traffic to New Zealand sites goes to the sites we run.

Leaving a medium-sized consultancy and joining Trade Me has been a big change in many ways, but in others not so much: I hadn’t expected it, but I’ve found myself operating in what is effectively a small team of in-house consultants. The approach the team is taking is proving to be pretty effective, so I thought I’d share some of the details of the way we work with the readers of Optimal Workshop’s blog. Let me explain what I mean…

What agile at Trade Me looks like

Over the last year or so, Trade Me has moved all of its development teams over to Agile, following a model pioneered by Spotify. All of the software engineering parts of the business have been ‘squadified’. These people produce the websites and apps, or provide and support the infrastructure that makes everything possible. Across squads, there are common job roles in ‘Chapters’ (like designers or testers), and because people are not easy to force into boxes (and why should they be?), there are interest groups called ‘Guilds’. The squads are self-organizing, running their own processes and procedures to get where they need to go. In practice, this means they use as many or as few of the Kanban, Scrum, and Rapid tools as they find useful. Over time, we’ve seen that squads tend to follow similar practices as they learn from each other.

How our UX team fits in

Our UX team of three sits outside the squads, but we work with them and with the product owners across the business. How does this work? It might seem counter-intuitive to have UX outside of the tightly-integrated, highly-focused squads, sometimes working with product owners on things that might have little to do with what’s currently being developed in the squads. It comes down to the way Trade Me divides UX responsibilities within the organization. Within each squad there is a designer. He or she is responsible for how that feature or app looks and, more importantly, how it acts — interaction design as well as visual design. So what do we do, if we are the UX team?

We represent the voice of Trade Me’s users

By conducting research with Trade Me’s users we can validate the squads’ day-to-day decisions, and help frame decisions on future plans. We do this by wearing two hats. Wearing the pointy hats of structured, detailed researchers, we look into long-term trends: the detailed behaviours and goals of our different audiences. We’ve conducted lots of one-on-one interviews with hundreds of people, including top sellers, motor parts buyers, and job seekers, as well as running surveys, focus groups and user testing sessions of future-looking prototypes. For example, we recently spent time with a number of buyers and sellers, seeking to understand their motivations and getting under their skin to find out how they perceive Trade Me.

This kind of research enables Trade Me to anticipate and respond to changes in user perception and satisfaction. Swapping hats to an agile beanie (and stretching the metaphor to breaking point), we react to the medium-term, short-term and very short-term needs of the squads: testing their ideas, near-finished work and finished work with users, as well as sometimes simply answering questions and providing opinions based on our research. Sometimes this means we’re testing something in the afternoon having only heard we were needed that morning. This might sound impossible to accommodate, but the pace of change at Trade Me is such that something is deployed pretty much every day, and much of it affects our users directly. It’s our job to support our colleagues in doing the very best we can for our users.

How our ‘drop everything’ approach works in practice

[Image: the new Trade Me iPhone app in testing]

We recently conducted five or six rounds (no one can quite remember, we did it so quickly) of testing of our new iPhone application (pictured above) — sometimes testing more than one version at a time. The development team would receive our feedback face-to-face, make changes and we’d be testing the next version of the app the same or the next day. It’s only by doing this that we can ensure that Trade Me members will see positive changes happening daily rather than monthly.

How we prioritize what needs to get done

To help us decide what we should be doing at any one time, we have some simple rules to prioritise:

  • Core product over other business elements
  • Finish something over start something new
  • Committed work over non-committed work
  • Strategic priorities over non-strategic priorities
  • Responsive support over less time-critical work
  • Where our input is crucial over where our input is a bonus

Applying these rules to any situation makes the decision whether to jump in and help pretty easy. At any one time, each of us in the UX team will have one or more long-term projects, some medium-term projects, and either some short-term projects or the capacity for them (usually achieved by putting aside a long-term project for a moment).
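
For the technically minded, an ordered rule list like this behaves as a lexicographic sort, where an earlier rule always outranks the ones below it. A minimal sketch in Python (my own illustration, not Trade Me’s actual tooling):

    def rank_key(task: dict) -> tuple:
        # not(...) so that True (rule satisfied) sorts first; order mirrors the list above.
        return (not task["core_product"], not task["finishes_existing"],
                not task["committed"], not task["strategic"],
                not task["time_critical"], not task["input_crucial"])

    tasks = [
        dict(name="polish blog widget", core_product=False, finishes_existing=False,
             committed=False, strategic=False, time_critical=False, input_crucial=False),
        dict(name="test checkout flow", core_product=True, finishes_existing=True,
             committed=True, strategic=True, time_critical=True, input_crucial=True),
    ]

    for task in sorted(tasks, key=rank_key):
        print(task["name"])  # highest priority first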

We manage our time and projects on Trello, where we can see at a glance what’s happening this week and next, and what we’ve caught a sniff of in the wind that might be coming up, or definitely is coming up. On the whole, both we and the squads favour fast-response, bulleted-list email ‘reports’ for any short-term requests for user testing. We get a report out within four hours of testing (usually well within that). After all, the squads are working in short sprints, and our involvement is often at the sharp end, where delays are not welcome. Most people aren’t going to read past the management summary anyway, so why not just write that, unless you have to?

How we share our knowledge with the organization

Even though we mainly keep our reporting brief, we want the knowledge we’ve gained from working with each squad or on each product to be available to everyone. So we maintain a wiki that contains summaries of what we did for each piece of work, why we did it and what we found. Detailed reports, if there are any, are attached. We also send all reports out to staff who’ve subscribed to the UX interest email group.

Finally, we send out a monthly email, which looks across a bunch of research we’ve conducted, both short and long-term, and draws conclusions from which our colleagues can learn. All of these activities contribute to one of our key objectives: making Trade Me an even more user-centred organization than it already is. I’ve been with Trade Me for about six months and we’re constantly refining our UX practices, but so far it seems to be working very well. Right, I’d better go – I’ve just been told I’m user testing something pretty big tomorrow and I need to write a test script!


Collating your user testing notes

It’s been a long day. Scratch that - it’s been a long week! Admit it. You loved every second of it.

Twelve-hour days, the mad scramble to get the prototype ready in time, stakeholders poking their heads in occasionally, dealing with no-show participants, and the excitement of speaking to real, live human beings about product or service XYZ. Your mind is exhausted, but you’re buzzing with ideas and processing what you just saw. You find yourself sitting in your war room with several pages of handwritten notes, and with your fellow observers you start popping open individually wrapped lollies left over from the day’s sessions. Someone starts a conversation about their favourite flavour, and then the real fun begins. Sound familiar? Welcome to the post-user-testing debrief meeting.

How do you turn those scribbled notes and everything rushing through your mind into a meaningful picture of the user experience you just witnessed? And once you have that picture, what do you do next? Pull up a bean bag, grab another handful of those lollies we feed our participants, and get comfy, because I’m going to share my idiot-proof, step-by-step guide for turning your user testing notes into something useful.

Let’s talk

Get the ball rolling by holding a post-session debrief meeting while it’s all still fresh in your collective minds. This can be done as one meeting at the end of the day’s testing, or you could have multiple quick debriefs in between testing sessions. Choose whichever option works best for you, but keep in mind this needs to happen at least once, before everyone goes home and forgets everything. Get all observers and facilitators together in any meeting space that has a wall-like surface you can stick post-its to - you can even use a window! And make sure you use real post-its - the fake ones fall off!

Mark your findings (Tagging)

Before you put sharpie to post-it, it’s essential to agree as a group on how you will tag your observations. Tagging the observations now will make the analysis much easier and help you spot patterns and themes. Colour coding the post-its is by far the simplest and most effective option, and how you assign the colours is entirely up to you. You could have a different colour for each participant or testing session, different colours to denote participant attributes relevant to your study (e.g. senior staff and junior staff), or different colours to denote the specific testing scenarios that were used. There are many ways you could carve this up, and there’s no right or wrong way. Just choose the option that suits you and your team best, because you’re the ones who have to look at it and understand it. If you only have one colour of post-it (e.g. yellow), you could colour code the pens you use to write the notes, or include some kind of symbol to help you track them.

Processing the paper (Collating)

That pile of paper is not going to process itself! Your next job as a group is to work through transposing your observations onto post-it notes. For now, just stick them to the wall in any old way that suits you. If you’re the organising type, you could group them by screen or testing scenario. The positioning will all change further down the process, so at this stage it’s important to keep it simple. For issues that occur repeatedly across sessions, write each occurrence on its own post-it - the doubles will be useful to see further down the track. In addition to holding debrief meetings, you also need to round up everything that was used to capture the testing sessions. And I mean EVERYTHING.

Handwritten notes, typed notes, video footage and any audio recordings need to be reviewed just in case something was missed. Any handwritten notes should be typed up to assist with the completion of the report. Don’t feel that you have to wait until testing is completed before you start typing up your notes, because you’ll find they pile up very quickly, and if your handwriting is anything like mine… well, let’s just say my short-term memory is often required to pick up the slack, and even that has its limits. Type them up in between sessions where possible, and save each session as its own document. I’ll often use the testing questions or scenario-based tasks to structure my typed notes, and I find that makes them really easy to refer back to. Now that you’ve processed all the observations, it’s time to start sorting them to surface behavioural patterns and make sense of it all.

Spotting patterns and themes through affinity diagramming

Affinity diagramming is a fantastic tool for making sense of user testing observations. In fact, it’s just about my favourite way to make sense of any large mass of information. It’s an engaging and visual process that grows and evolves like a living creature, taking on a life of its own. It also builds on the work you’ve just done, which is a real plus! By now, testing is over and all of your observations should be stuck to a wall somewhere. Get everyone together again as a group, step back, and take it all in. Just let it sit with you for a moment before you dive in. Let it breathe. Have you done that? OK, now, as individuals working at the same time, start by grouping things that you think belong together. It’s important to focus on the content of the labels and ignore the colour-coded tagging at this stage - so if session one was blue post-its, don’t group all the blue ones together just because they’re all blue! If you get stuck, try grouping by topic, or create two groups (e.g. issues and wins) and then chunk the information up from there.

You will find that the groups change several times over the course of the process, and that’s OK, because that’s what they need to do. While you do this, everyone else will be doing the same thing - grouping things that make sense to them. Trust me, it’s nowhere near as chaotic as it sounds! You may start working as individuals, but it won’t be long before curiosity kicks in and the room is buzzing with naturally occurring conversation. Make sure you take a step back regularly and observe what everyone else is doing, and don’t be afraid to ask questions and move other people’s post-its around - no one owns it! No matter how silly something may seem, just put it there, because it can be moved again. Have a look at where your tagged observations have ended up. Are there clusters of colour? Or is it more spread out? What that means will depend largely on how you decided to tag your findings. For example, if you assigned each testing session its own colour and you have groups with lots of different colours in them, you’ll know the same issue was experienced by multiple people. Next, start looking at each group and see if you can break it down into smaller groups, and at the same time consider the overall picture for the bigger groups - can the wall be split into, say, three high-level groups? Remember, you can still change your groups at any time.

Thinning the herd (Merging)

Once you and your team are happy with the groups, it’s time to start condensing the size of this beast. Look for doubled-up findings and stack those post-its on top of each other to cut the groups down - just make sure you can still see how many there were. The point of merging is to condense without losing anything, so don’t remove something just because it only happened once. That one issue could be incredibly serious. Continue to evaluate and discuss as a group until you are happy. By now, clear and distinct groups of observations should have emerged, and at a glance you should be able to identify the key findings from your study.

A catastrophe or a cosmetic flaw? (Scoring)

Scoring relates to how serious the issues are and how bad the consequences of not fixing them would be. There are arguments for and against the use of scoring, and it’s important to recognise that it is just one way to communicate your findings. I personally rarely use scoring systems. It’s not really something I think about when I’m analysing the observations, and I rarely rank one problem or finding over another. Why? Because all data is good data, and it all adds to the overall picture. I’ve always been a huge advocate for presenting the whole story, and I will never diminish the significance of one finding by boosting another. That said, I do understand the perspective of those who place metrics around their findings. Other designers have told me they feel it allows them to quantify the seriousness of each issue and help their client, designer, or boss make decisions about what to do next. We’ve all got our own ways of doing things, so I’ll leave it up to you to choose whether or not to score the issues. If you decide to score your findings, there are a number of scoring systems you can use; if I had to choose one, I quite like Jakob Nielsen’s methodology for the simple way it takes multiple factors into consideration. Ultimately, you should choose the one that suits your working style best.

Let’s say you did decide to score the issues. Start by writing each key finding on its own post-it and move to a clean wall or window, leaving your affinity diagram where it is. Divide the new wall in half: one side for wins (findings that indicate things that tested well) and the other for issues. You don’t need to score the wins, but you do need to acknowledge what went well, because knowing what you’re doing well is just as important as knowing where you need to improve. As a group (wow, you must be getting sick of each other by now! Make sure you go out for air from time to time!), score the issues based on your chosen methodology. Once you have completed this entire process, you will have everything you need to write a kick-ass report.
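
If you do go with Nielsen’s scale, a tiny helper can keep ratings consistent across the group. The 0-4 labels follow Nielsen’s severity ratings and his factors (frequency, impact, persistence); averaging them into a single number is my simplification, not his formula:

    SEVERITY_LABELS = {0: "not a usability problem", 1: "cosmetic",
                       2: "minor", 3: "major", 4: "catastrophe"}

    def severity(frequency: int, impact: int, persistence: int) -> int:
        """Each factor rated 0-4; returns the rounded average as the overall rating."""
        return round((frequency + impact + persistence) / 3)

    issues = {"help button hidden": (3, 4, 3), "label typo on signup": (1, 1, 1)}
    for name, factors in issues.items():
        s = severity(*factors)
        print(f"{name}: {s} ({SEVERITY_LABELS[s]})")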

What could possibly go wrong? (and how to deal with it)

No process is perfect and there are a few potential dramas to be aware of:

People jumping into solution mode too early

In the middle of the debrief meeting, someone has an epiphany. Shouts of “We should move the help button!” or “We should make the yellow button smaller!” ring out, and the meeting goes off the rails. I’m not going to point fingers and blame any particular role, because we’ve all done it, but it’s important to recognise that’s not why we’re sitting here. The debrief meeting is about digesting and sharing what you and the other observers just saw. Observing and facilitating user testing is a privilege. It’s a precious thing that deserves respect, and if you jump into solution mode too soon, you may miss something. Keep the conversation on track by appointing a team member to facilitate the debrief meeting.

Storage problems

Handwritten notes taken by multiple observers over several days of testing add up to an enormous pile of paper. Not only is it a ridiculous waste of paper, but the notes have to be securely stored for three months following the release of the report. It’s not pretty. Typing them up can solve that issue, but it comes with its own set of storage-related hurdles. Just like the handwritten notes, typed notes need to be stored securely. They don’t belong on SharePoint, the shared drive, or any other shared storage environment that can be accessed by people outside your observer group. User testing notes are confidential and are not light reading for anyone and everyone, no matter how much they complain. Store any typed notes in a limited-access storage solution that only the observers can reach, and if anyone who shouldn’t be reading them asks, explain that they are confidential and that the integrity of the research must be preserved and respected.

Time issues

Before the storage dramas begin, you have to actually pick through the mountain of paper - not to mention the video footage and the audio - and you have to chase up that sneaky observer who disappeared when the clock struck five. All of this takes a lot of time. Another time-related issue comes in the form of too much time passing between testing sessions and debrief meetings. The best way to deal with both of these issues is to be super organised and hold multiple smaller debriefs in between sessions where possible. As a group, work out your time commitments before testing begins and have a clear plan in place for when you will meet. This will prevent everything piling up and overwhelming you at the end.

Disagreements over scoring

At the end of that long day or week we’re all tired, and discussions around scoring the issues can get a little heated. One person’s showstopper may be another person’s mild issue. Many ranking systems use words as well as numbers to measure the level of severity, and it’s easy to get caught up in the meaning of the words and get sidetracked from the task at hand. Be proactive: as a group, set ground rules upfront for all discussions. Determine how long you’ll spend discussing an issue, and what you will do if agreement cannot be reached. People want to feel heard, and they want to feel that their contributions are valued. Given that this is an iterative process, sometimes it’s best just to write everything down to keep people happy, then merge and cull the list in the next iteration. By then, they’ve likely had time to re-evaluate their own thinking.

And finally...

We all have our own ways of making sense of our user testing observations and there really is no right or wrong way to go about it. The one thing I would like to reiterate is the importance of collaboration and teamwork. You cannot do this alone, so please don’t try. If you’re a UX team of one, you probably already have a trusted person that you bounce ideas off. They would be a fantastic person to do this with. How do you approach this process? What sort of challenges have you faced? Let me know in the comments below.
