
At Optimal, we know the reality of user research: you've just wrapped up a fantastic interview session, your head is buzzing with insights, and then... you're staring at hours of video footage that somehow needs to become actionable recommendations for your team.
User interviews and usability sessions are treasure troves of insight, but the reality is that reviewing hours of raw footage is time-consuming and tedious, and it's easy to overlook important details. Too often, valuable user stories never make it past the recording stage.
That's why we’re excited to announce the launch of Interviews, a brand-new tool that saves you time with AI and automation, turns real user moments into actionable recommendations, and provides the evidence you need to shape decisions, bring stakeholders on board, and inspire action.
Interviews, Reimagined
We surveyed more than 100 researchers, designers, and product managers, conducted discovery interviews, tested prototypes, and ran feedback sessions to help guide the discovery and development of Optimal Interviews.
The result? What once took hours of video review now takes minutes. With Interviews, you get:
- Instant clarity: Upload your interviews and let AI automatically surface themes, pain points, opportunities, and other key insights.
- Deeper exploration: Ask follow-up questions (or anything else) with AI chat. Every insight comes with supporting video evidence, so you can back up recommendations with real user feedback.
- Automatic highlight reels: Generate clips and compilations that spotlight the takeaways that matter.
- Real user voices: Turn insight into impact with user feedback clips and videos. Share insights and download clips to drive product and stakeholder decisions.
Groundbreaking AI at Your Service
This tool is powered by AI designed for researchers, product owners, and designers. This isn't just transcription or summarization; it's intelligence tailored to surface the insights that matter most. It's like having a personal AI research assistant, accelerating analysis and automating your workflow without compromising quality. No more endless footage scrolling.
The AI behind Interviews, like all AI across Optimal, is backed by Amazon Bedrock on AWS, ensuring that your AI insights are supported by industry-leading protection and compliance.
Evolving Optimal Interviews
A big thank you to our early access users! Your feedback helped us focus on making Optimal Interviews even better. Here's what's new:
- Speed and easy access to insights: More video clips, instant download, and bookmark options to make sharing findings faster than ever.
- Privacy: Disable video playback while still extracting insights from transcripts, and get PII redaction for English audio across transcripts and insights.
- Trust: Our enhanced, best-in-class AI chat experience lets teams explore patterns and themes confidently.
- Expanded study capability: You can now upload up to 20 videos per Interviews study.
What’s Next: The Future of Moderated Interviews in Optimal
This new tool is just the beginning. Our vision is to help you manage the entire moderated interview process inside Optimal, from recruitment to scheduling to analysis and sharing.
Here’s what’s coming:
- View your scheduled sessions directly within Optimal, linked to your own calendar.
- Connect seamlessly with Zoom, Google Meet, or Teams.
Imagine running your full end-to-end interview workflow, all in one platform. That’s where we’re heading, and Interviews is our first step.
Ready to Explore?
Interviews is available now for our latest Optimal plans with study limits. Start transforming hours of footage into minutes of clarity and bring your users' voices to the center of every decision. We can't wait to see what you uncover.
Gregg Bernstein on leading research at Vox Media
Welcome to our first UX New Zealand 2019 speaker interview. In the lead up to the conference (which is just around the corner!), we’re catching up with the people who’ll be sharing their stories with you in October.
Today, we chat to Gregg Bernstein, the Senior Director of User Research at Vox Media.
I appreciate you taking the time to chat with us today Gregg. First of all, I just want to say I’m a huge fan of The Verge and the whole Vox Media network.
Gregg: Yeah, I'm a big fan too. It's a treat to get to work with them.
Let’s start off at the beginning. What got you into user research in the first place?
Gregg: So what got me into user research is that I was actually a designer for a number of years. And, after a while, I got pretty tired of design. I used to do a lot of album covers and posters for punk rock bands and independent bands and things like that. And I just felt like I was doing the same thing over and over.
I decided to go to graduate school because, after teaching design at a university for a couple of years, I wanted to teach design full time, instead of doing design work. And it was in grad school that I realized that I liked understanding the information that informs the design in the first place, right? I was fascinated by exploring what the opportunities were and who would consume the final product.
And then I realized what I was really interested in was actually UX research, a term which I didn't even know existed at the time. And then once I realized that this was an entire area of study, it made it clear to me that that's where I wanted to go with my career. So I ended up turning my master's degree in graphic design into a more encompassing study of user experience and UX research. And fortunately ended up getting to do that work at MailChimp just a year after graduating with my MFA.
That actually leads into my next question. I hear you got the original user research practice at MailChimp off the ground?
Gregg: Not exactly. I was given the opportunity to scale up the team and scale up the research practices.
When I first started, all of our work was in service of the UX team. So it was a lot of interviews and usability tests and competitive analyses that were solely to make the MailChimp app better. But over time, as my team shared our work in presentations and in internal newsletters, the rest of the company started asking us questions and it wasn't coming from our traditional UX partners. It wasn't coming from engineering, it was coming from the accounting team or the marketing team and all of this demand for research was evidence that we needed to hire more people and become more of a consultancy to the entire organization.
So I was able to scale up what we were doing in that sense, to serve not just our product and our application, but the entire organization. And really think about what are the questions that are going to help us as a business and help us make smarter decisions.
That must've been quite gratifying to see that payoff though, to see the requests for research data from throughout the organization?
Gregg: I think in hindsight it's more gratifying. When you're in the thick of it, it's, "wow, there's so much demand, how are we going to satisfy everyone?" It becomes a prioritization challenge to try to figure out which work we take on now versus what's nice to know but isn't going to help us with building the right product, marketing in the right way, or increasing revenue.
So I was gratified to be put in a position to hire people and try to answer more questions. But when you're in the thick of it, it's also just a whole lot of, "Oh gosh, how do I do this?"
How do you find leading the research practice at Vox Media versus the practice at MailChimp?
Gregg: It's a lot different at Vox. There is a product team and that's where I live and that's where my team lives. We work within our product organization. But media is so different because you don't (at least in our case) require anybody to sign up or pay for the product. Anybody can read The Verge, anybody can listen to a Vox.com podcast. Anybody can keep up with Polygon wherever they keep up with Polygon. So there's not a true exchange of money for products, so the whole idea of there being a product changes.
One of my roles at Vox is really to help us understand how we can make it easier for journalists to write their stories. We have a content management system we call Chorus; all of our different networks, whether it's Vox or The Verge or Eater, use Chorus to write their stories. And then that sends their stories to our websites, but also to Apple News, to Google News, newsletters and Facebook and Twitter. Wherever the stories need to go.
There's the research into, how do we make that experience of writing the news better? How do we make the experience of consuming the news better? What would make a podcast listener have a better experience and find more podcasts? How does somebody who watches us only on YouTube discover other YouTube channels that we create content on?
So it's a very different type of research. I try to help all of our teams make better decisions, whether it's the podcast team with how to market the podcast, or our product team with how to make it easier to write a story. And now I’m working on a new line of business which is how do we sell our content management system to other newsrooms? So, I don't know if you're familiar with Funny Or Die or The Ringer, those are other media companies, but they’re running on our CMS. And so there's research into how do we make our products usable for other organizations that we don't work with day to day.
Is research centralized at Vox or do each of the websites/sub-brands have their own teams and do their own research?
Gregg: They don't have their own research teams. I mean they are all journalists, they all know how to conduct their own investigations. But when it comes to user experience research, I was the first hire in the company with that skillset and I still help all of our different sub-brands when they have questions. Let's say we're interested in starting up a new newsletter focused on a very specific topic. What they might come to me to understand is the context around that topic. So how do people currently satisfy their need to get information on that topic? Where do they go? Do they pay for it? At what time of day do they read it, watch it or consume it? Those are the types of studies where I will partner with The Verge or Vox or Curbed or whoever it is, and help them get that information.
My primary research audience is our product teams. There are always questions around how can we make the editorial or audience experience better. That's always going to be my first responsibility, but that's 70% of the work. The other 30% is how do I help our other colleagues around the company that are in these sub-brands get answers to their questions too.
Would you say you prefer this type of work that you do at Vox to what you were doing at MailChimp?
Gregg: I prefer any type of job where I'm helping people make better decisions. I think that's really the job of the researcher: to help people make better decisions. So whether it's helping people understand what the YouTube audience for vox.com looks like, or how to make MailChimp easier to use for a small business owner, it doesn't really matter as long as I feel like I'm giving people better information to make better decisions.
That ties nicely into the topic of your UX New Zealand talk, which is research being everyone's job. Do you feel like this is starting to gain traction? Does it feel like this is the case at Vox?
Gregg: It does, because there are only 4 researchers at Vox right now, soon to be 3 because one is returning to graduate school. So there are few researchers, but there's no shortage of questions, which means part of the job of research is to help everyone understand where they can get information to make better decisions. If you look at LinkedIn right now, you'll see that there's something like 30,000 UX engineer positions open, but only 4,000 UX research positions open.
There's a shortage of researchers. There's not a lot of demand for the role, but there is a demand for information. So you kind of have to give people the skills or a playbook to understand, there's information out there, here's where you can find it. But not only that, you have to give them the means to get that information in a way where it's not going to disrupt their normal deadlines. So research can't be some giant thing that you're asking people to adopt. You have to give people the skills to become their own researchers.
At Vox we've put together a website that has examples of the work we've done, resources on how to do it and how somebody can do it themselves, and a form people can fill out if they need help with a project.
So we're really trying to be as transparent as possible and saying, "these are things that you could do. Here are examples of things that we've done. Here are people you can talk to." There's also Slack channels that we host where anybody can ask us questions. So if I can't do the work myself or if my team can't do it, people will still know that there are options available to them.
What would your advice be for researchers who need to foster a research culture if they're in a very small team or even if they’re by themselves?
Gregg: The first thing you can do is go on a listening tour and just understand how people make decisions now. What information they use to make those decisions and what the opportunities are. Just get that context.
That's step 1, step 2 is to pick one small tightly scoped project that is going to be easy to accomplish but also is going to be meaningful to a lot of people. So what's the one thing that everybody's confused about in your product? Quickly do that research to help illuminate the context of that problem space and offer some scenarios.
And the reason you pick one tightly scoped project is then you can point to it and say, this is what user research can do. This didn't take long, it didn't cost a lot, but we've learned a ton. So I think the starting point is just creating that evidence that people can point to and say, "Hey, look what we did. We could be doing this every day." So you just have to make the case that research is achievable and prove that it's not impossible to put into place.
Do you see this culture taking hold at Vox?
Gregg: I think I'm making progress within Vox. I think people are seeing that research is not hard to incorporate, that it should be a consideration for any project.
I think once people see that they can do their own research, that's step one of a longer process. Like you want everyone to be aware of research and starting to do their own research, but that's a stopgap. Ideally, you want it to get to the point where everyone is saying we need more research, and then you can hire dedicated experts who can do the research all the time. And that's where we got to at Vox a year or a year and a half ago: I was able to hire more people because there was a demand for it, and I couldn't be in every meeting and I couldn't take on every project. But the projects were important and we were going to make big decisions based on research. We needed to have more people who were experts doing this work.
So I think everyone being a researcher is the first of a long process to get to having a dedicated research staff. But you have to start with something small, which is everyone could do their own research.
Last question. What are you looking forward to about the conference and/or New Zealand?
Gregg: The thing I'm most looking forward to about the conference itself is I get so much out of meeting attendees and hearing what challenges they're facing. Whether they're a designer or developer or just somebody who works in user experience in any capacity. I want to hear what work looks like for them and how their teams are growing or how their organizations are growing.
In addition to the speakers, that's what I want to hear, is the audience. And then Wellington, I've never been there. I'm super excited to spend a day just walking around and seeing everything and eating some food and having a good time. It doesn't take much to satisfy me so just being there is going to be enough.
Thanks for your time Gregg, and see you at UX New Zealand!
UX New Zealand is just around the corner. Whether you're new to UX or a seasoned professional, you'll gain valuable insights and inspiration - and have fun along the way! Learn more on the UX New Zealand website.
3 ways you can combine OptimalSort and Chalkmark in your design process
As UX professionals we know the value of card sorting when building an IA or making sense of our content, and we know that first clicks and first impressions of our designs matter. Tools like OptimalSort and Chalkmark are two of our wonderful design partners in crime, but did you also know that they work really well with each other? They have a lot in common and they also complement each other through their different strengths and abilities. Here are 3 ways that you can make the most of this wonderful team-up in your design process.
1. Test the viability of your concepts and find out which one your users prefer most
Imagine you’re at a point in your design process where you’ve done some research and you’ve fed all those juicy insights into your design process and have come up with a bunch of initial visual design concepts that you’d love to test.
You might approach this by following this 3 step process:
- Test the viability of your concepts in Chalkmark before investing in interaction design work
- Iterate your design based on your findings in Step 1
- Finish by running a preference test with a closed, image-based card sort in OptimalSort to find out which of your concepts your users prefer most
There are two ways you could run this approach: remotely or in person. The remote option is great for when you're short on time and budget, or for when your users are all over the world or otherwise challenging to reach quickly and cheaply. If you're running it remotely, you would start by popping images of your concepts, at whatever level of fidelity they've reached, into Chalkmark and coming up with some scenario-based tasks for your participants to complete against those flat designs. Chalkmark is super nifty in the way that it gets people to just click on an image to indicate where they would start out when completing a task. That image can be a rough sketch or a screenshot of a high fidelity prototype or live product — it could be anything! Chalkmark studies are quick and painless for participants and great for designers because the results will show if your design is setting your users up for success from the word go. Just choose the most common tasks a user would need to complete on your website or app and send it out.
Next, you would review your Chalkmark results and make any changes or iterations to your designs based on your findings. Choose a maximum of 3 designs to move forward with for the last part of this study. The point of this is to narrow your options down and figure out, through research, which design concept you should focus on. Create images of your chosen 3 designs and build a closed card sort in OptimalSort with image-based cards by selecting the checkbox for 'Add card images' in the tool.

The reason why you want a closed card sort is because that’s how your participants will indicate their preference for or against each concept to you. When creating the study in OptimalSort, name your categories something along the lines of ‘Most preferred’, ‘Least preferred’ and ‘Neutral’. Totally up to you what you call them — if you’re able to, I’d encourage you to have some fun with it and make your study as engaging as possible for your participants!

Limit the number of cards that can be sorted into each category to 1 and uncheck the box labelled 'Randomize category order' so that you know exactly how the categories appear to participants. It's best if the negative option doesn't appear first, because you're mostly trying to figure out what people do prefer, and the only way to control the order is to switch the randomization off. You could put the neutral option at the end or in the middle to balance it out — totally up to you.
It's also really important that you include a post-study questionnaire to dig into why participants made the choices they did. It's one thing to know what people do and don't prefer, but it's just as important to capture the reasoning behind their thinking. It could be something as simple as "Why did you choose that particular option as your most preferred?" and given how important this context is, I would set that question to 'required'. You may still end up with not-so-helpful responses like 'Because I like the colors' but it's still better than nothing — especially if your users are on the other side of the world or you're being squeezed by some other constraint! It's something to be mindful of; remember that studies like these contribute to the larger body of research that goes on throughout a project and are not the only piece of research you'll be running. You're not pinning all your design's hopes and dreams on this one study! You're just trying to quickly find out what people prefer at this point in time, and as your process continues, your design will evolve and grow.
You might also ask the same context gathering question for the least preferred option and consider also including an optional question that allows them to share any other thoughts they might have on the activity they just completed — you never know what you might uncover!
If you were running this in person, you could use it to form the basis for a moderated codesign session. You would start your session by running the Chalkmark study to gauge their first impressions and find out where those first clicks are landing and also have a conversation about what your participants are thinking and feeling while they’re completing those tasks with your concepts. Next, you could work with your participants to iterate and refine your concepts together. You could do it digitally or you could just draw them out on paper — it doesn't have to be perfect! Lastly, you could complete your codesign session by running that closed card sort preference test as a moderated study using barcodes printed from OptimalSort (found under the ‘Cards’ tab during the build process) giving you the best of both worlds — conversations with your participants plus analysis made easy! The moderated approach will also allow you to dig deeper into the reasoning behind their preferences.
2. Test your IA through two different lenses: non visual and visual
Your information architecture (IA) is the skeleton structure of your website or app, and it can be really valuable to evaluate it from two different angles: non-visual and visual. The non-visual elements of an IA (language, content, categories and labelling) provide a clear and clean starting point: there are no visual distractions, and getting that content right is rightly a high priority. The visual elements come along later, build upon that picture, and help provide context and bring your design to life. It's a good idea to test your IA through both lenses throughout your design process to ensure that nothing is getting lost or muddied as your design evolves and grows.
Let’s say you’ve already run an open card sort to find out how your users expect your content to be organised and you’ve created your draft IA. You may have also tested and iterated that IA in reverse through a tree test in Treejack and are now starting to sketch up some concepts for the beginnings of the interaction design stages of your work.
At this point in the process, you might run a closed card sort with OptimalSort on your growing IA to ensure that those top level category labels are aligning to user expectations while also running a Chalkmark study on your early visual designs to see how the results from both approaches compare.
When building your closed card sort study, you would set your predetermined categories to match your IA’s top level labels and would then have your participants sort the content that lies beneath into those groups. For your Chalkmark study, think about the most common tasks your users will need to complete using your website or app when it eventually gets released out into the world and base your testing tasks around those. Keep it simple and don’t stress if you think this may change in the future — just go with what you know today.
Once you’ve completed your studies, have a look at your results and ask yourself questions like: Are both your non-visual and visual IA lenses telling the same story? Is the extra context of visual elements supporting your IA or is it distracting and/or unhelpful? Are people sorting your content into the same places that they’re going looking for it during first-click testing? Are they on the same page as you when it’s just words on an actual page but are getting lost in the visual design by not correctly identifying their first click? Has your Chalkmark study unearthed any issues with your IA? Have a look at the Results matrix and the Popular placements matrix in OptimalSort and see how they stack up against your clickmaps in Chalkmark.


3. Find out if your labels and their matching icons make sense to users
A great way to find out if your top level labels and their matching icons are communicating coherently and consistently is to test them using both OptimalSort and Chalkmark. Icons aren't much help if they don't make sense to your users — especially in cases where label names drop off and your website or app homepage relies solely on the image to communicate what content lives below each one, e.g., sticky menus, mobile sites and more.
This approach could be useful when you’re at a point in your design process where you have already defined your IA and are now moving into bringing it to life through interaction design. To do this, you might start by running a closed card sort in OptimalSort as a final check to see if the top level labels that you intend to make icons for are making sense to users. When building the study in OptimalSort, do exactly what we talked about earlier in our non-visual vs visual lens study and set your predetermined categories in the tool to match your level 1 labels. Ask your participants to sort the content that lies beneath into those groups — it’s the next part that’s different for this approach.
Once you've reviewed your findings and are confident your labels are resonating with people, you can then develop their accompanying icons for concept testing. You might pop these icons into some wireframes or a prototype of your current design to provide context for your participants, or you might just test the icons on their own as they would appear on your future design (e.g., in a row, as a block or something else!) but without any of the other page elements. It's totally up to you and depends entirely upon what stage you're at in your project and the thing you're actually designing — there might be cases where you want to zero in on just the icons and maybe the website header, e.g., a sticky menu that sits above a long scrolling, dynamic social feed. In an example taken from a study we recently ran on Airbnb and TripAdvisor's mobile apps, you might test the full navigation screen but without the icon labels, or the smaller sticky-menu version of it that appears on scroll.

The main thing here is to test the icons without their accompanying text labels to see if they align with user expectations. Choose the visual presentation approach that you think is best but lose the labels!
When crafting your Chalkmark tasks, it's also a good idea to avoid using the label language in the task itself. Even though the labels aren't appearing in the study, just using that language still has the potential to lead your participants. Treat it the same way you would a Treejack task — explain what participants have to do without giving the game away, e.g., instead of the word 'flights', try 'airfares' or 'plane tickets'.
Choose one scenario-based task question for each level 1 label that has an icon, and consider including post-study questions to gather further context from your participants — e.g., did they have any comments about the activity they completed? Was anything confusing or unclear, and if so, what and why?
Once you’ve completed your Chalkmark study and have analysed the results, have a look at how well your icons tested. Did your participants get it right? If not, where did they go instead? Are any of your icons really similar to each other and is it possible this similarity may have led people down the wrong path?
Alternatively, if you’ve already done extensive work on your IA and are feeling pretty confident in it, you might instead test your icons by running an image card sort in OptimalSort. You could use an open card sort and limit the cards per category to just one — effectively asking participants to name each card rather than a group of cards. An open card sort will allow you to learn more about the language they use while also uncovering what they associate with each one without leading them. You’d need to tweak the default instructions slightly to make this work but it’s super easy to do! You might try something like:
Part 1:
Step 1
- Take a quick look at the images to the left.
- We'd like you to tell us what you associate with each image.
- There is no right or wrong answer.
Step 2
- Drag an image from the left into this area to give it a name.
Part 2:
Step 3
- Click the title to give the image a name that you feel best describes what you associate that image with.
Step 4
- Repeat step 3 for all the images by dropping them in unused spaces.
- When you're done, click "Finished" at the top right. Have fun!
Test out your new instructions in preview mode on a colleague from outside of your design team just to be sure it makes sense!
So there are 3 ideas for ways you might use OptimalSort and Chalkmark together in your design process. Optimal Workshop's suite of tools is flexible, scalable and works really well together — the possibilities are huge!
Further reading
When to compromise on depth of research and when to push back
Time, money, people, access, reach. The resources we have at our disposal can also become constraints. In the real world, research projects don't always follow a perfect plan. There are times when we have to be pragmatic and work with what we have, but there are limits. Knowing where those limits are and when to push back can be really challenging. If we don't push back when we should, our research results and design decisions could be compromised, and if we push back in the wrong way, we may invite a whole host of new resourcing constraints that just make life harder for us.
Let’s take a look at some research approach compromises that you should push back on, some examples of useful workarounds that will still allow you to gain the insights you need for your project and some constructive ways to lead those push back conversations.
4 research depth compromises that you should push back on
When you’re asked (or told) to talk to experts and frontline staff instead of users or customers
We know we're not our users, and this is definitely one of those moments where we have a responsibility to speak up and try to find a better way. Experts and frontline staff who interact with users or customers all day long certainly have a lot of value to contribute to the design process; however, you really do need to gather insights from the people you're designing for. Failing to include users or customers in your research has a high likelihood of coming back to bite you in the form of poorly designed products, services and experiences that will need to be redesigned, costing you more time and money. If you do happen to get away with it and produce something that is fit for purpose, it's because you were lucky. Don't base your design decisions on luck, and don't let your stakeholders and team do it either.
When you’re told to just run a focus group (and nothing else)
Focus groups are a pain for a number of reasons, but one of the biggest issues is that the information you'll gather through them more often than not lacks depth, context and sometimes even authenticity. When you bring a group of people together into a room, instead of useful and useable insights, you're more likely to end up with a pile of not-so-helpful opinions, and you open your research up to a delightful thing called groupthink, where your participants may say they agree with something when they actually don't. Also, the things that people say they do in a focus group might not align with what they actually do in reality. It's not their fault – they most likely think they're being helpful, but they're really just giving you a bunch of data you can't be sure of.
When you’re told to just run a survey (and nothing else)
There's a time and a place where a survey might be appropriate, but a standalone research study isn't it. A survey on its own isn't enough to gain an appropriate level of depth to inform complex design decisions – it's more of a starting point, or a study to complement a round of user interviews. Surveys don't allow you to dig deeper into participant responses – you can't ask follow-up questions in the moment and keep asking until you get the insight you need. You also don't know what participants are doing or where they are when they complete your survey. You have no context or control over their environment – they might not complete the whole thing in one sitting and may leave it open on their device while they go off and complete other non-survey related tasks.
Surveys function best when they're brief and don't take up too much of your participants' time, because if they're too long or require in-depth detail to be shared, people might just start providing quick or less-than-helpful responses just to get through it and finish. If there's an incentive on offer, you also run the risk of participants providing nonsense responses just to complete the study and obtain the reward, or they might just tell you what they think you want to hear so they don't miss out.
When you’re told to skip discovery research
Skipping this very important early step in the design process in the hopes of saving time or money can end up being quite costly. If you launch into the design stage of a new product or a major redesign of an existing product without conducting UX research upfront, you’ll likely end up designing something that isn’t needed, wanted or fit for purpose. When this happens, all that time and money you apparently ‘saved’ – and then some – will get spent anyway trying to clean up the mess like I mentioned earlier. Start your design journey out on the right foot and work with your team and stakeholders to find a way to not skip this critical piece of research.
4 research depth compromises that won’t kill your project
Talking to a smaller group of users when the only other alternative is doing no research at all
If you have to choose between talking to 5 users or customers or no one at all, always pick the former. Talking to a smaller group is far better than talking to absolutely no one and essentially designing off your and your team's opinion and not much else. Research is scalable. You don't have to run 20+ user interviews to gather useful and deep insights – in many cases patterns tend to appear around the 5-10 participant mark. You can run your research in smaller bites and more often to save on time and keep your project moving along. If you're short on time or money, or your customers are hard to reach location-wise, run your user interviews over the phone!
Guerrilla research
I've met people who aren't a fan of the term 'guerrilla research'. I've been told it's a negative term that can imply you're doing something you don't have permission to be doing. Well guess what? Sometimes you are! We're not all in privileged positions where UX research is an accepted and willingly supported practice. UX maturity comes in many shapes and sizes, and some of us still need to prove the value of UX research to our stakeholders and organisations.
Hitting the streets or a customer-facing environment (e.g., a store) with a mobile device for a few hours one afternoon is a good way to gather research insights quickly. While you will have to limit your interactions with participants to around 3 to 5 minutes, it can be a good way to get a lot of responses to a handful of big burning questions that you might be tackling during your discovery research.
As always, research begets research, and this approach might give you the insights you need to secure buy-in for a much larger piece of research. You might also use this technique to gather quantitative data or run a quick usability test on a new feature. First-click testing tools like Chalkmark, for example, are great for this because all the participant has to do is click on an image on a screen. It takes seconds for them to complete, and you can include post-study questions in the tool for them to answer, or you can just have a conversation with them then and there.
Remote research
When it comes to remote research there are a lot of different methods and techniques covering the entire design process from start to finish. It's super flexible and scalable, and the level of depth you can achieve in a short space of time can be significant. The depth compromise here is not being in the same room as your participants. For example, if you're running a remote card sort with OptimalSort, you won't get to hear a conversation about why certain cards were placed where they were; however, you will gather a decent amount of solid quantitative data quickly, and most of the analysis work is done for you, saving even more time. You can also fill in any qualitative research gaps by including pre- and post-study questions, and you could use your findings to help prove the need for resources to conduct face-to-face research to complement your remote study.
Live A/B testing
Also called split testing, live A/B testing on a website or app is a quick way to test out a new feature or an idea for a new feature. Much like with remote research, you won't get to ask why your research participants did what they did, but you will obtain quantitative evidence of what they did in real time while attempting to complete a real task. It's a quick and dirt-cheap way to find out what does and doesn't work. You could always ask your website visitors to complete a quick exit survey when they leave your website or app, or you could consider positioning a quick poll that appears in the moment they're completing the task, e.g., during checkout. You can test anything from a whole page to the language used in a Call to Action (CTA), and while the results are largely quantitative, you'll always learn something new that you can use to inform your next iterative design decision.
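If you want a quick sanity check on whether an A/B result reflects a real difference or just noise, a standard two-proportion z-test does the job. Here's a minimal Python sketch of that general statistical idea; the conversion counts are hypothetical and it isn't tied to any particular testing tool.

```python
# Minimal sketch of a two-proportion z-test for A/B test results.
# The conversion counts below are hypothetical; substitute your own.
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (conv_b / n_b - conv_a / n_a) / se                  # test statistic
    return 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided p-value

# Hypothetical example: the new CTA (B) converted 260/2000 visitors vs 200/2000.
print(f"p-value: {ab_test_p_value(200, 2000, 260, 2000):.4f}")
# A small p-value (conventionally below 0.05) suggests a real difference.
```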
How to constructively push back
When approaching push back conversations it can be helpful to try to understand where these requests or constraints are coming from and why. Why are you being told to just run a focus group? Why isn’t there any funding for participant recruitment or a reasonable amount of time for you to complete the research? Why has it been suggested that you skip talking to actual users or customers? And so on. Talk to your stakeholders. Consider framing it as you trying to understand their needs and goals better so that you can help them achieve them – after all, that is exactly what you’re trying to do.
Talk to your team and colleagues as well. If you can find out what is driving the need for the research depth compromise, you might just be able to meet any constraints halfway. For example, maybe you could pitch the option of running a mixed methods research approach to bridge any resourcing gaps. You might run a survey and 5 x 20 minute user interviews over the phone or a video call if you’re short on time for example. It’s also possible that there might be a knowledge gap or a misunderstanding around how long research takes and how much it costs. A little education can go a very long way in convincing others of the importance of UX research. Take your stakeholders along for the journey and do research together where possible. Build those relationships and increased UX maturity may follow.
Pushing back might feel intimidating or impossible, but it's something that every UX researcher has had to do in their career. User and research advocacy is a big part of the job. Have confidence in your abilities and view these conversations as an opportunity to grow. It can take some practice to get it right, but we have a responsibility to our users, customers, team, stakeholders and clients to do everything we can to ensure that design decisions are supported by solid evidence. They're counting on us to gather and provide the insights that deliver amazing experiences, and it's not unreasonable to have a conversation about how we can all work better together to achieve awesome things. It's not about ensuring your research follows a pitch-perfect plan. Compromise and pragmatism are completely normal parts of the process, and these conversations are all about finding the right way to do that for your project.

How many participants do I need for qualitative research?
For those new to the qualitative research space, there’s one question that’s usually pretty tough to figure out, and that’s the question of how many participants to include in a study. Regardless of whether it’s research as part of the discovery phase for a new product, or perhaps an in-depth canvas of the users of an existing service, researchers can often find it difficult to agree on the numbers. So is there an easy answer? Let’s find out.
Here, we’ll look into the right number of participants for qualitative research studies. If you want to know about participants for quantitative research, read Nielsen Norman Group’s article.
Getting the numbers right
So you need to run a series of user interviews or usability tests and aren’t sure exactly how many people you should reach out to. It can be a tricky situation – especially for those without much experience. Do you test a small selection of 1 or 2 people to make the recruitment process easier? Or, do you go big and test with a series of 10 people over the course of a month? The answer lies somewhere in between.
It’s often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you’re new to user research or you’re working in a new area.
What you’re actually looking for here is what’s known as saturation.
Understanding saturation
Whether it’s qualitative research as part of a master’s thesis or as research for a new online dating app, saturation is the best metric you can use to identify when you’ve hit the right number of participants.
In a nutshell, saturation is when you’ve reached the point where adding further participants doesn’t give you any further insights. It’s true that you may still pick up on the occasional interesting detail, but all of your big revelations and learnings have come and gone. A good measure is to sit down after each session with a participant and analyze the number of new insights you’ve noted down.
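To make that post-session check concrete, here's a minimal sketch of one way to track saturation, assuming you tally the genuinely new insights after each session. The counts and the two-session cut-off are hypothetical; in practice this remains a judgment call rather than a hard rule.

```python
# Minimal sketch: tally new insights per session and flag saturation.
# "Saturation" is simplified here to N consecutive sessions with nothing new.
def is_saturated(new_insights_per_session, window=2):
    """True once the last `window` sessions each produced zero new insights."""
    tail = new_insights_per_session[-window:]
    return len(tail) == window and all(count == 0 for count in tail)

# Hypothetical tallies from eight interview sessions:
insights_per_session = [9, 6, 4, 3, 1, 1, 0, 0]
print(is_saturated(insights_per_session))  # True: the last two sessions added nothing new
```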
Interestingly, in a paper titled How Many Interviews Are Enough?, authors Greg Guest, Arwen Bunce and Laura Johnson noted that saturation usually occurs with around 12 participants in homogeneous groups (meaning people in the same role at an organization, for example). However, carrying out ethnographic research on a larger domain with a diverse set of participants will almost certainly require a larger sample.
Ensuring you’ve hit the right number of participants
How do you know when you’ve reached saturation point? You have to keep conducting interviews or usability tests until you’re no longer uncovering new insights or concepts.
While this may seem to run counter to the idea of just gathering as much data from as many people as possible, there's a strong case for focusing on a smaller group of participants. In The logic of small samples in interview-based qualitative research, authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data. Why? With a smaller group, it's easier for you (the researcher) to build close relationships with your participants, which in turn leads to more natural conversations and better data.
There's also a school of thought that you should interview 5 or so people per persona. For example, if you're working in a company that has well-defined personas, you might want to use those as a basis for your study, and then interview 5 people based on each persona. This may be worth considering, and is particularly important, when you have a product with very distinct user groups (e.g. students and staff, or teachers and parents).
How your domain affects sample size
The scope of the topic you’re researching will change the amount of information you’ll need to gather before you’ve hit the saturation point. Your topic is also commonly referred to as the domain.
If you’re working in quite a confined domain, for example, a single screen of a mobile app or a very specific scenario, you’ll likely find interviews with 5 participants to be perfectly fine. Moving into more complicated domains, like the entire checkout process for an online shopping app, will push up your sample size.
As Mitchel Seaman notes: “Exploring a big issue like young peoples’ opinions about healthcare coverage, a broad emotional issue like postmarital sexuality, or a poorly-understood domain for your team like mobile device use in another country can drastically increase the number of interviews you’ll want to conduct.”
In-person or remote
Does the location of your participants change the number you need for qualitative user research? Well, not really – but there are other factors to consider.
- Budget: If you choose to conduct remote interviews/usability tests, you’ll likely find you’ve got lower costs as you won’t need to travel to your participants or have them travel to you. This also affects…
- Participant access: Remote qualitative research can be a lifesaver when it comes to participant access. No longer are you confined to the people you have physical access to; instead, you can reach out to anyone you'd like.
- Quality: On the other hand, remote research does have its downsides. For one, you’ll likely find you’re not able to build the same kinds of relationships over the internet or phone as those in person, which in turn means you never quite get the same level of insights.
Is there value in outsourcing recruitment?
Recruitment is understandably an intensive logistical exercise with many moving parts. If you’ve ever had to recruit people for a study before, you’ll understand the need for long lead times (to ensure you have enough participants for the project) and the countless long email chains as you discuss suitable times.
Outsourcing your participant recruitment is just one way to lighten the logistical load during your research. Instead of having to go out and look for participants, you have them essentially delivered to you in the right number and with the right attributes.
We’ve got one such service at Optimal, which means it’s the perfect accompaniment if you’re also using our platform of UX tools. Read more about that here.
Wrap-up
So that’s really most of what there is to know about participant recruitment in a qualitative research context. As we said at the start, while it can appear quite tricky to figure out exactly how many people you need to recruit, it’s actually not all that difficult in reality.
Overall, the number of participants you need for your qualitative research can depend on your project among other factors. It’s important to keep saturation in mind, as well as the locale of participants. You also need to get the most you can out of what’s available to you. Remember: Some research is better than none!

6 tips for making the most of Reframer
Summary: The notetaking side of qualitative research is often one of the most off-putting parts of the process. We developed Reframer to make this easier, so here are 6 tips to help you get the most out of this powerful tool.
In 2018, a small team at Optimal Workshop set out to entirely revamp our approach to providing learning resources to our users and community. We wanted to practice what we preached, and build a new blog website from the ground up with a focus on usability and accessibility. As you can probably imagine, this process involved a fair amount of user research.
While we certainly ran our fair share of quantitative research, our primary focus was on speaking to our users directly, which meant carrying out a series of user interviews – and (of course) using Reframer.
There’s really no overselling the value of qualitative user research. Sure, it can be off-putting for new researchers due to its perceived effort and cost, but the insights you’ll gain about your users can’t be found anywhere else.
We knew of the inherent value in qualitative research, but we were also aware that things like interviews and usability testing can be put off due to the time required to carry out the tests and the hours spent in workshops trying to pull insights out of the data.
So, with that in mind, here are 6 tips to help you get the most out of Reframer, our tool recently released from beta!
1. How to create good observations
Observations are a core piece of the puzzle when it comes to effectively using Reframer. Observations are basically anything you see or hear during the course of your interview, usability test or study. It could be something like the fact that a participant struggled with the search bar or that they didn’t like the colors on the homepage.
Once you’ve collected a number of observations you can dive into the behaviors of your users and draw out patterns and themes – more on this further on in the article.
As for creating good observations using Reframer, here are a few tips:
- Record your sessions (audio or video): If you can, record the audio and video from your session. You’ll be able to listen or watch the session after the fact and pick up on anything you may have missed. Plus, recordings make for a good point of reference if you need to clarify anything.
- Note down timestamps during the session: Make a note of the time whenever something interesting happens. This will help you to jump back into the recording later and listen or watch the part again.
- Write your observations during the session: If you can’t, try and write everything down as soon as the session finishes. It’s a good idea to ask around and see if you can get someone else to act as a notetaker.
- Make a note of everything – even if it doesn’t seem to matter: Sometimes even the smallest things can have a significant impact on how a participant performs in a usability test. Note down if they’re having trouble with the keyboard, for example.
2. Tips for using tags correctly
The ability to tag different observations is one of the most powerful aspects of Reframer, and can significantly speed up the analysis side of research. You can think of tags as variables that you can use to filter your data later. For example, if you have a tag labeled “frustrated”, you can apply it to all of the relevant observations and then quickly view every instance when a participant was feeling frustrated after you’ve concluded your test.
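To illustrate that "tags as variables" mental model outside the tool, here's a minimal Python sketch; the observations and tag names are hypothetical, and Reframer does this filtering for you in its interface.

```python
# Minimal sketch of the "tags as filters" idea: each observation carries a set
# of tags, and analysis starts by filtering on them. All data here is made up.
observations = [
    {"note": "Struggled to find the search bar", "tags": {"frustrated", "navigation"}},
    {"note": "Liked the homepage colors", "tags": {"positive", "visual-design"}},
    {"note": "Gave up halfway through checkout", "tags": {"frustrated", "checkout"}},
]

def with_tag(observations, tag):
    """Return every observation that carries the given tag."""
    return [obs for obs in observations if tag in obs["tags"]]

for obs in with_tag(observations, "frustrated"):
    print(obs["note"])  # every moment a participant was frustrated
```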
When it comes to user interviews and usability tests, however, there are a couple of things to keep in mind when tagging.
For user interviews, it's best not to apply tags until after you've finished the session. If you preload a number of tags, you'll likely (if unintentionally) introduce bias.
For usability tests, on the other hand, it’s best to set up your tags prior to going into a session. As just one example, you might have a number of tags relating to sentiment or to the tasks participants will perform. Setting up these types of tags upfront can speed up analysis later on.
If there’s one universal rule to keep in mind when it comes to Reframer tags, it’s that less is more. You can use Reframer’s merge feature to consolidate your tags, which is especially useful if you’ve got multiple people adding observations to your study. You can also set up groups to help manage large groups of tags.

3. After a session, take the time to review your data
Yes, it’s tempting to just shut your laptop and focus on something else for a while after finishing your session – but here’s an argument for spending just a little bit of time tidying up your data.
The time straight after a session has finished is also the best time to take a quick pass over your observations. This is the time when everything about the interview or usability test is still fresh in your mind, and you’ll be able to more easily make corrections to observations.
In Reframer, head over to the ‘Review’ tab and you’ll be presented with a list of your observations. If you haven’t already, or you think you’ve missed some important ones, now is also a good time to add tags.
You can also filter your observations to make the process of reviewing data a little easier. You can filter by the study member who wrote the observation as well as any starred observations that study members have created. If you know what you’re looking for, the keyword search is another useful tool.
Taking the time to make corrections to tags and observations now will mean you’ll be able to pull much more useful insights later on.
4. Create themes using the theme builder
With all of your observations tidied up and tags correctly applied, it’s time to turn our attention to the theme builder. This is one of the most powerful parts of Reframer. It allows you to see all of the different relationships between your tagged observations and then create themes based on the relationships.
The really neat thing with the theme builder is that as you continue to work on your study by feeding in new observations, the top 5 themes will display on the Results Overview page. This means you can constantly refer back to this page throughout your project. Note that while the theme builder will update automatically, it’s best to tag as many observations as possible to get the most useful data.
You can read a detailed guide of how to actually create themes using the theme builder in our Knowledge Base article.
5. Take advantage of Reframer’s built-in visualization functionality
So, whether your experience with Reframer starts with this article or you've already had a play around with the tool, it should be clear that tags are important. This functionality is really what enables you to get meaningful analysis and insights out of your data in Reframer.
But you can actually take this a step further and transform the data from your tagging into visualizations – perfect for demonstrating your results to your team or to stakeholders. There are 2 visualization options in Reframer.
First of all, there's the chord diagram. It allows you to explore the relationships between different tagged observations, which in turn helps you to spot themes. The different colored chord lines connect different tag nodes, with thicker lines indicating that 2 tags appear together on the same observation more often. Again, the more data you have (observations and tags), the richer and more in-depth the visualization.

The bubble chart is a little different. This visualization simply shows the frequency of your tags as ‘bubbles’ or circles. The larger the bubble, the more frequently that tag appears in your observations.
6. Import all of your qualitative research notes
Reframer works best when it’s used as the one repository for all of your qualitative research. After all, we designed the tool to replace the clutter and mess that’s typically associated with qualitative research.
You can easily import all of your existing observations from either spreadsheets or text documents using the ‘Import’ function. It’s also possible to just enter your observations directly into Reframer at any point.
Use Reframer in this way and there’s little chance of losing track of research data over time. One of the biggest issues for research and UX teams is information loss when someone leaves the organization. Keep everything in Reframer, and that institutional knowledge stays put.
Wrap-up
While quantitative research is often considered the easier of the two to wrap your head around, qualitative research is well worth adding to your workflow to ensure you're seeing the whole picture as you make important design decisions. This article is really just a surface-level overview of some of the neat things you can do with Reframer. We’ve got other articles on our blog about how to get the most out of the tool, but the best place to really dig into the detail is the Optimal Workshop Knowledge Base.
How to make the case for bigger research projects
Summary: You’ve run some user interviews and a couple of card sorts; now it’s time to learn how to make the case for larger research projects.
In many ways, the work you do as a researcher is the fuel that product and design teams use to build great products. Or, as writer Gene Luen Yang once put it: “Creativity requires input, and that's what research is. You're gathering material with which to build”.
One of the toughest challenges for a user researcher is making the case for a bigger project. That is, one potentially involving more participants, a larger number of research methods, more researchers and even new tools. Sure, you’ve done small studies, but now you’ve identified a need for some bigger (and likely more expensive) research. So how exactly do you make a case for a larger project? How do you broach the subject of budget and possibly even travel, if it’s required? And, perhaps most importantly, who do you make the case to?
By understanding how to pitch your case, you can run the research project that needs to be run – not whatever you’re able to scrape together.
What’s your research question?
You know how important the research question is. After all, this is what you center your research around. It’s the clear, concise, focused and yet also complex heart of any research project. The importance of a good research question holds true from the smallest of studies right through to the massive research projects that require large teams of researchers and involve usability tests in real-world locations.
We’ve written about user research questions before, but needless to say, keep your question top-of-mind as you think about scaling your research. While most other aspects of your research will grow with the size of your project (think budget, number of participants and possibly even locations), your research question should remain concise and easy to understand. ‘Say it in a sentence’ is a good rule to keep in mind as you start to meet with stakeholders and other interested parties. Always have the detail ready if people want it, but your question is essentially an elevator pitch.
Your research question will also form an important part of your pitch document – something we’ll come back to later on in this article.
Why do you need to scale up your research?
With your research question in hand (or more likely in a Google Drive document), you have to start asking yourself the tough questions. It’s time to map out the why of this entire process. Why exactly do you need to run a larger research project? Nailing all of this detail is going to be critical to getting the support you need to actually scale up your research.
To help get you thinking about this, we’ve put together some of the most common reasons to scale a user research project. Keep in mind that there’s a lot of crossover in the sections below, simply because methods/tools and participants are essentially 2 sides of the same coin.
You need more participants
Recruiting participants can be expensive, and it’s often a limiting factor for many researchers. In many cases, the prospect of remunerating even 10 participants can blow out a small research budget. While certain types of testing have ideal maximums for the number of participants you should use, scaling up the number of people feeding into your research can be quite fruitful. This could mean either running more tests with fewer participants in each, or running a few tests with more.
By bringing in more people, you’re able to run more tests over the course of your project. For example, you could run 5 card sorts with different groups. Or, you could carry out a larger series of usability tests and user interviews with groups of different demographics and in different locations.
It’s easy to see how useful a larger or even unrestricted recruitment budget can be. You can go from needing to scrounge up whoever you can find to recruiting the ideal participants for your particular project.
You want to investigate new tools and methods
User research has been around in one form or another for decades (and possibly even longer if you loosen your definition of ‘user research’). With this in mind, the types of tools available to researchers have come a long way since the days of paper prototypes – and even paper card sorts. Now, a cursory Google search will throw up dozens of UX tools tailored for different research methods and parts of the research process. There are tools for validating prototypes, getting fast feedback on UX copy and running complex, branching surveys. As just one example, we (at Optimal Workshop) offer a number of tools as part of our platform, focusing on areas including:
- Tree testing: Tree testing is a usability technique that can help you evaluate the findability of topics on a website.
- Card sorting: A research technique that can show you how people understand and categorize information.
You can read more about the Optimal Workshop platform on our features page.
Integrating more tools is a strong reason to scale a research project, as tools can both lighten your workload and open up entirely new avenues of research. Plus, you’ll often find tools will automate difficult parts of the research process, like recruitment and analysis. Take Reframer as an example: you can take notes during a user interview and tag them, and Reframer will help you pull out themes and insights to review.
A bigger project usually means a bigger budget, allowing you to spend time investigating possible new methods, tools and research techniques. Always wanted to investigate tree testing but could never quite find the time to sit down and assess the method and the various tools available? Now could be the perfect time.
You want to do on-location research
Remote testing has fast become one of the most practical options for user researchers who are short on time and budget. Instead of needing to go out and physically sit down with participants, researchers can recruit, run tests and analyze results with nothing more than a decent internet connection. It makes sense: on-location research is expensive. But in-person research still has some major advantages.
It is possible to conduct user interviews and usability tests remotely, but the very nature of these types of research means you’ll get a lot more out of doing the work in person. Take a user interview as just one example. By sitting down face to face with someone, you can read their facial expressions, better pick up on their tone and establish a stronger rapport from the outset.
Being able to go to wherever your users are means you’re not constrained by technology. If you need to study people living in rural villages in China, for example, you’re unlikely to find many of these people online. Furthermore, in this example, you could bring along a translator and actually get a real feel for your audience. The same applies to countless other demographics all over the world. Scaling up your research project means you can start to look at traveling to the people who can give you the best information.
Who has a stake in this project?
One of the most important (and potentially difficult) parts of scaling a research project is getting buy-in from your stakeholders. As we’ve mentioned previously, your pitch document is an essential tool in getting this buy-in, but you also need to identify all of your stakeholders. Knowing who all of your stakeholders are will mean you get input from every relevant area in your organization, and it also means you’ll likely have a larger support base when making your pitch to scale your research project.
Start by considering the wider organization and then get granular. Who is your research project likely to impact? Consider more than just product and design teams – how is your larger project likely to impact the budget? Again, capturing all of this detail in the pitch document will help you to build a much stronger case when it comes to convincing the people who have the final say.
Note: Building strong relationships with C-level and other people with influence is always going to be useful – especially in a field like UX where many people are still unsure of the value. Investing time into both educating these people (where appropriate) and creating a strong line of communication will likely pay dividends in future.
Create your pitch document
If you’re from the world of academia, the idea of pitching your research is likely second nature. But, to a user researcher, the ‘pitch’ often takes many forms. In ideal circumstances, there’s a good enough relationship between researchers and UX teams that research is just something that’s done as part of the design process. In organizations that haven’t fully embraced research, the process is often much more stilted and difficult.
When you’re trying to create a strong case for scaling your research project, it can help to consolidate all of the detail we’ve covered above (research question, high-level reasons for running a larger project, etc.) and assemble this information in the form of a pitch document. But what exactly should this document look like? Well, borrowing again from the world of institutional research (but slightly tweaked), the main purpose of a research proposal is to explain that:
- The research question is of significance
- The planned methods are appropriate
- The results will make a useful contribution to the organization
With this in mind, it’s time to take a look at what you should include in your user research pitch document:
- Your research question: The core of any research project, your research question should query a problem that needs to be solved.
- The key stakeholders: Here’s where you list out every team, agency and individual with a stake in your research, as well as what this involvement entails.
- Data: What information do you currently have on this subject? Pull in any relevant details you can find by talking to other teams in your organization. If you’re researching your customers, for example, have a chat to sales and customer support staff.
- Tools/methods: How will you execute your research? List all of the tools and research methods you plan to use. If you can see that you’re going to need new tools, list these here. This leads on nicely to...
- Budget: What are the financial implications of your larger research project? List the new tools you’ll need and your estimates for recruitment costs.
You don’t necessarily need to take your pitch document and present it to stakeholders, but as a tool for ensuring you’ve covered all your bases, it’s invaluable. Doing this exercise means you’ll have all of the relevant information you require in one place.
Happy researching!
Read more elsewhere
- Asking the right questions during user research, interviews and testing – This is a great guide for crafting research questions from UX Collective. It’s full of helpful tips and considerations.
- How to create use cases – Use cases are one of the best tools to map out particular tasks your users perform and find problems they might encounter along their journey.
- How to become a more valuable UX professional: 3 factors to increase your worth – One way to have more influence and better effect change is to increase your value as a UX professional.
- Do I need a degree to become a good UX Designer? – You can have a wonderful career in the field of user experience without a degree in a related area of study.