

What gear do I need for qualitative user testing?

Summary: The equipment and tools you use to run your user testing sessions can make your life a lot easier. Here’s a quick guide.

It’s that time again. You’ve done the initial scoping, development and internal testing, and now you need to take the prototype of your new design and get some qualitative data on how it works and what needs to be improved before release. It’s time for the user testing to begin.

But the prospect of user testing raises an important question, and it’s one that many new user researchers often deliberate over: What gear or equipment should I take with me? Well, never fear. We’re going to break down everything you need to consider in terms of equipment, from video recording through to qualitative note-taking.

Recording: Audio, screens and video

The ability to easily record usability tests and user interviews means that even if you miss something important during a session, you can go back later and see what you’ve missed. There are 3 types of recording to keep in mind when it comes to user research: audio, video and screen recording. Below, we’ve put together a list of how you can capture each. You shouldn’t have to buy any expensive gear – free alternatives and software you can run on your phone and laptop should suffice.

  • Audio – Forget dedicated sound recorders; recording apps for smartphones (iOS and Android) allow you to record user interviews and usability tests with ease and upload the recordings to Google Drive or your computer. Good options include Sony’s recording app for Android and the built-in Apple recording app on iOS.
  • Transcription – Once you’ve created a recording, you’ll no doubt want a text copy to work with. For this, you’ll need transcription software to take the audio and turn it into text. There are companies that will make transcriptions for you, but software like Transcribe means you can carry out the process yourself.
  • Screen recording – Very useful during remote usability tests, screen recording software can show you exactly how participants react to the tasks you set out for them, even if you’re not in the room. OBS Studio is a good option for both Mac and Windows users. You can also use QuickTime (free) if you’re running the test in person.
  • Video – Recording your participants as they make their way through the various tasks in a usability test can provide useful reference material at the end of your testing sessions. You can refer back to specific points in a video to capture any detail you may have missed, and you can share video with stakeholders to demonstrate a point. If you don’t have access to a dedicated camera, consider mounting your smartphone on a tripod and recording that way.

Taking (and making use of) notes

Notetaking and qualitative user testing go hand in hand. For most user researchers, notetaking during a research session means busting out the Post-it notes and Sharpie pens, rushing to take down every observation and insight and then having to arduously transcribe these notes after the session – or spend hours in workshops trying to identify themes and patterns. This approach still has merit, as it’s often one of the best ways to get people who aren’t too familiar with user research involved in the process. With physical notes, you can gather people around a whiteboard and discuss what you’re looking at. What’s more, you can get them to engage with the material directly.

But there are digital alternatives. Qualitative notetaking software (like our very own Reframer) means you can bring a laptop into a user interview and take down observations directly in a secure environment. Even better, you can ask someone else to sit in as your notetaker, freeing you up to focus on running the session. Then, once you’ve run your tests, you can use the software for theme and pattern analysis, instead of having to schedule yet another full day workshop.

Scheduling your user tests

Ah, participant scheduling. Perhaps one of the most time-consuming parts of the user testing process. Thankfully, software can drastically reduce the logistical burden.

Here are some useful pieces of software:

Dedicated scheduling tool Calendly is one of the most popular options for participant scheduling in the UX community. It’s really hands-off, in that you basically let the tool know when you’re available, share the Calendly link with your prospective participants, and then they select a time (from your available slots) that works for them. There are also a host of other useful features that make it a popular option for researchers, like integrations and smart timezones.

If you’re already using the Optimal Workshop platform, you can use our survey tool Questions as a fairly robust scheduling tool. Simply set up a study and add in prospective time slots. You can then use the multi-choice field option to have people select when they’re available to attend. You can also capture other data and avoid the usual email back and forth.

Storing your findings

One of the biggest challenges for user researchers is effectively storing and cataloging all of the research data that they start to build up. Whether it’s video recordings of usability tests, audio recordings or even transcripts of user interviews, you need to ensure that your data is A) easily accessible after the fact, and B) stored securely to ensure you’re protecting your participants.

Here are some things to ask yourself when you store any piece of customer or user data:

  • Who will have access to this data?
  • How long do I plan to keep this data?
  • Will this data be anonymized?
  • If I’m keeping physical data on hand, where will it be stored?

Don’t make the mistake of thinking user data is ‘secure enough’, whether that’s on a company server that anyone can access or in an unlocked filing cabinet beneath your desk. Data privacy and security should always be at the top of your list of considerations. We won’t dive into best practices for participant data protection in this article; the key point is that you need to be vigilant. Wherever you end up storing information, make sure you understand who has access.

Wrap up

Hopefully, this guide has given you an overview of some of the tools and software you can use before you start your next user test. We’ve also got a number of other interesting articles that you can read right here on our blog.


Getting started with usability testing

Summary: Usability testing is crucial to product success, and there’s really no excuse not to do it. Here’s a quick guide to prepare you for your first usability test.

Back in 1994, an online shopping website called Amazon launched to little fanfare. With just a handful of products, it seemed like yet another internet company destined to fade into obscurity, like so many others of the dot-com era. But Amazon had a very different path to follow. The website soon grew rapidly in scale and value, and within 5 years was a leader in online shopping. As any internet user now knows, Amazon does a whole lot more than just sell books.

It wasn’t clever advertising that led to the massive success of Amazon (although that certainly helped), but a drive to place usability at the core of the company’s focus. According to Forbes, Amazon CEO Jeff Bezos invested 100 times more in the customer experience than in advertising during Amazon’s first year of operation, covering everything from how customers make their way through the website to how they get support from Amazon staff. Reinforcing the value of the customer experience, Nielsen Norman Group found that organizations that spend just 10% of their budgets on usability see, on average, a 135% increase in their desired metrics.

You can’t afford to skip over usability testing. It’s essential to give products and services the best chance of succeeding – and here’s how you get started.

Unlike some other more exploratory user research methods, usability testing is very much focused on testing a prototype. It’s true that you can test a finished product, but it’s best practice to test prototypes as you develop them, iterate and improve, and then test again. This regular cadence of test/develop/feedback means the users’ point of view is constantly fed back to the people who need to understand it.

We won't dive into tools and how to actually design a prototype, but the key thing is that the prototype should be informed by an understanding of the customer and how they're going to be using the product/service (e.g., generative research). Again, the best way to kick off the testing loop is to design something relatively rough and conceptual, and then test and improve before developing detailed designs.

This leads nicely into our next point: working out what you want to learn.

What is it that you want to find out about how people use your prototype? Setting clear goals for your testing will, in turn, help you to define some clear metrics. While sometimes it’s immediately obvious what you need to test, in other cases (especially if you’re new to the idea of usability testing) it can be a bit more of a struggle. This is where it’s best to engage with those people who have a stake in the outcome of your research.

You can also turn to your existing users. Look at how they currently use your product and use this to decide on key tasks for them to perform. Keep in mind that you want to test how a person completes a particular task – you’re not testing a particular feature. Your focus should be on how the feature supports the thing that a person is trying to do.

Now for the part that often stumps new user researchers: figuring out metrics. It’s an understandably confusing process at first. There are a number of metrics that you can use to measure usability.

  • Time on task: The amount of time participants spend on a task before successfully completing it.
  • Successful task completion: The percentage of tasks that participants successfully complete.
  • Critical errors: Errors that prevent participants from completing tasks.
  • Non-critical errors: Errors that do not affect a participant’s ability to complete a task.
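As a rough illustration of how these metrics come together, here’s a minimal sketch in Python. The session records and field names are invented for the example, not taken from any particular tool:

```python
# Hypothetical records from one task across three usability sessions.
sessions = [
    {"completed": True,  "seconds": 95,  "critical": 0, "noncritical": 1},
    {"completed": True,  "seconds": 140, "critical": 0, "noncritical": 0},
    {"completed": False, "seconds": 300, "critical": 1, "noncritical": 2},
]

successes = [s for s in sessions if s["completed"]]

# Successful task completion: share of sessions where the task was finished.
completion_rate = len(successes) / len(sessions)

# Time on task: average time among successful completions only.
avg_time_on_task = sum(s["seconds"] for s in successes) / len(successes)

# Error totals across all sessions.
critical_errors = sum(s["critical"] for s in sessions)
noncritical_errors = sum(s["noncritical"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")     # 67%
print(f"Avg time on task: {avg_time_on_task:.1f}s")  # 117.5s
```

Even a tiny summary like this makes it easier to compare successive rounds of testing on the same prototype.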

Here are some examples of goals (and metrics) for usability testing:

  • Goal: Does our online shopping website’s checkout process make it easy for customers to add additional products?
  • Metric: Successful task completion. What percentage of users were able to complete this process?
  • Goal: Is it easy for customers to contact support staff through our mobile app?
  • Metric: Time on task. How long did it take participants to complete this task successfully?

It’s best to nail down your goals and metrics before you start working on the plan for how you’ll run your usability tests. You can also use usability scales to help with this. This article from UX Collective provides a great rundown on some of the most popular metrics you can use in your testing.
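One widely used example of such a scale is the System Usability Scale (SUS): ten alternating positively and negatively worded statements, each answered on a 1-5 scale, converted into a 0-100 score. A minimal scoring sketch (the example responses are made up):

```python
def sus_score(responses):
    """Score a System Usability Scale questionnaire.

    responses: ten answers in questionnaire order, each from 1 to 5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten answers between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 are positively worded: contribution is r - 1.
        # Items 2, 4, 6, 8, 10 are negatively worded: contribution is 5 - r.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```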

With an understanding of what you want to learn, it’s time to start putting together your testing plan. This is basically a list of the tasks you’ll give to participants in order to evaluate your prototype, as well as the script that you’ll follow to ensure you’re covering everything you need to during each testing session. Yes, even experienced user researchers bring along a script!

For your tasks, take a look at your prototype with a specific eye towards the areas that you’d like to test. If you want to test a sign-up flow, for example, figure out the ideal path that someone should take to successfully complete this process and break it down step by step. Why is this important? Well, if you have an idea of the steps needed to complete a task, you can evaluate the prototype by seeing how users make their way through them.

Here’s an example of what that sign-up flow could look like for a website, step by step:

  • Click ‘Register’ on the homepage, taken to the sign-up page
  • Add personal details
  • Click ‘Next’ at the bottom of page, taken to email opt-in page
  • Click to sign up for different emails
  • Click ‘Next’ at the bottom of the page, taken to account page

Chances are, there will be a number of ways to complete your task, and it’s important to account for each of them. Don’t assume that just because you’ve designed a fancy navigation, users will head there first when tasked with finding a specific page – they may head straight down to the footer. This means your list should account for these alternative steps.
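One lightweight way to keep track of this during analysis is to encode the ideal path and its known alternatives as data, then classify each observed participant path against them. The step names below are purely illustrative:

```python
# Ideal sign-up route, as a sequence of step identifiers (invented names).
IDEAL_PATH = [
    "homepage:register",
    "signup:personal_details",
    "signup:next",
    "email_optin:choose",
    "email_optin:next",
]

# Known alternative routes, e.g. reaching sign-up via the footer link.
ALTERNATIVE_PATHS = [
    ["footer:register"] + IDEAL_PATH[1:],
]

def classify(observed):
    """Label an observed path as ideal, a known alternative, or off-path."""
    if observed == IDEAL_PATH:
        return "ideal"
    if observed in ALTERNATIVE_PATHS:
        return "alternative"
    return "off-path"

print(classify(["footer:register"] + IDEAL_PATH[1:]))  # alternative
```

Paths that come back as "off-path" are often the most interesting ones to review in your recordings.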

It’s always a good idea to run a pre-test with someone to make sure the tasks make sense and the script flows, and to time the session so it fits what you planned.

Once you’ve come up with your tasks, it’s time to move onto the script. As we mentioned above, even experienced user researchers like to have a script on hand during usability tests. Here’s an example script that you can tweak and refine for your own purposes. Over time, you’ll probably find that you develop a template that works really well for you.

Hey [Name]. First of all, I just want to thank you for coming along to this study. Before we get started, I’d like to give you a quick overview of how everything is going to work – please let me know if you have any questions!
The format for this session will be quite simple. I’ve got a series of tasks I’d like you to work through, and I’ll ask a few questions as we go. Before each task, I’ll give you some context around why you’d even be doing this and your goal.
I just want you to know that we’re not testing you – we’re testing the [website/app]. There are no wrong answers. Everything you do is valuable learning for us.
Lastly, I’d like you to think aloud as much as possible as you work through the tasks. For example, if you’re looking for something, say “I’m looking for X thing”. This will help me to better evaluate how well the [website/app] is working.
Ready to get started? Any other questions? Let’s go!

Note: You may also want to ask for recording consent at this point if you plan to record the session.

If you’d like an alternative template, Steve Krug has one on his website here.

Take some time to get your script to a point where you feel comfortable going through it, and then turn your attention to the tasks you’ll be running during your session.

To start off, give a high-level explanation/story that your participant can use to get into the right frame of mind, and then add any detail about why they might be trying to complete that particular task.

Lastly, there’s the final wrap-up/debrief. Allocate some time at the end of your session to ask for any final comments and to thank the participant for coming along.

Having a solid script and a list of tasks is really just one part of the usability testing equation. It’s equally important to take good notes during sessions and then document these for future reference – and analysis.

Unfortunately, it’s often a time-consuming process, which makes using the right tool important. A qualitative notetaking tool like Reframer can help you capture your insights all in one place and makes it easy to analyze your data and spot themes. Tagging observations also lets you filter them by tag later in analysis, helping you identify pain points and moments of delight.
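If you’re working from plain text notes rather than a dedicated tool, the same tag-and-filter idea can be sketched in a few lines of Python (the observations and tag names here are invented):

```python
from collections import defaultdict

# Hypothetical observations captured during sessions, each with free-form tags.
observations = [
    {"note": "Hesitated on the pricing page",        "tags": {"pain-point", "navigation"}},
    {"note": "Smiled at the confirmation animation", "tags": {"delight"}},
    {"note": "Couldn't find the search bar",         "tags": {"pain-point", "search"}},
]

def filter_by_tag(obs, tag):
    """Return only the observations carrying the given tag."""
    return [o for o in obs if tag in o["tags"]]

# Tally tag frequencies to surface candidate themes.
tag_counts = defaultdict(int)
for o in observations:
    for t in o["tags"]:
        tag_counts[t] += 1

print([o["note"] for o in filter_by_tag(observations, "pain-point")])
```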

You can learn more about Reframer and the other tools on the Optimal Workshop platform here, or try them out for yourself.


Once you pull out the themes from your research, you can feed this back into the design and iterate from there. Ideally, this is a continuous loop – and it’s one of the best ways to ensure you’ve got a regular stream of customer data feeding into your product development.

There’s no need to be afraid of the usability testing process. It’s a key link in the chain of developing usable, successful products. Sure, those first few sessions can be quite nerve wracking, but there’s a good chance your participants are going to be more anxious than you are. Plus, you’ll be bringing a script, so everything you need to say is mapped out for you.

If you ever need a reminder of just how valuable usability testing can be, you only need to take one look at Amazon.


8 tips for running user interviews over the phone

Like all qualitative research methods in UX, user interviews conducted over the phone have their pros, cons and challenges. Sometimes they are your best or only option. We’ve all been in situations where we’ve had to do the best we can with what we have because it sure beats doing nothing at all. It might be that your participants are remotely located and maybe you don’t all have access to the same technology — access to technology is a privilege! Or maybe you do and that technology just isn’t playing the game and you have to improvise. Maybe you work in a secure environment that puts all kinds of limitations on your research or maybe it’s something else entirely.

One of the biggest challenges of running a user interview over the phone is that unless you’re doing it via video call, the communication is entirely verbal. There’s no body language for you to communicate or observe, you can’t talk with your hands or draw something out on a page and you can’t control the environment in which your participant dials in from. That said, it absolutely can be done and it is possible to gather useful and usable insights for your project by running a user interview over the phone. Here are 8 tips to help make it happen.

It’s very important that you dial into a phone-based user interview from a location that is free from distractions and excessive background noise. You don’t want to have to keep asking your participant to speak up or repeat themselves because you can’t hear them and they shouldn’t have to listen to your colleagues sharing Game of Thrones spoilers in the background. In my experience, quiet spaces aren’t always available at short notice and it pays to plan ahead. If you can, book a meeting room in advance, arrive early and make sure you book extra time either side of your interview time slot so that any meetings booked before yours will have ample time to clear out.

It can also help to wear headphones, and to find a meeting room that is quiet and not right next to another one with glass walls because, as I’ve learned the hard way, sound carries and can be just as disruptive as dialing in from an open-plan office! If a quiet space isn’t available in your workspace, consider making the call from home if you can, or from some other quiet space outside the office. This might seem obvious, but I see mistakes around call location happen all the time. Participant experience matters, and the environment you dial in from can have a big impact on it.

Just like you need to plan your call location ahead of time, it’s best if your participant does too — and you’re the best person to help them do that. I once had a participant dial in from a moving car with all the windows rolled down and another from a busy call center environment — both scenarios could have been avoided. A great way to help your participants plan ahead is to politely ask them to dial in from somewhere quiet when you first book the session. You might include this information in the calendar invite along with all the other helpful details you’ll be providing, e.g., contact details, session details and instructions for any technology you might be using to run the call. Keep it light and friendly, and maybe list it as the ‘where’ for the meeting, e.g., Where: Please dial in from a quiet location so we can chat.

Sometimes, despite your best efforts, a participant may still join the session from a less than desirable location. They might not be aware of how loud the background noise in their office is, or they may have forgotten to go somewhere quiet. Maybe they’ve unexpectedly needed to work from home and their dog won’t stop barking. When this happens, it’s best to see whether they can move somewhere quieter, away from the noise and distractions. Toughing it out rarely works and can derail the entire session, wasting both your time and the participant’s. Be patient and empathetic, and consider rescheduling if the problem can’t be resolved then and there.

Participant responses will be entirely verbal and you’ll want to factor in extra time to ask clarifying and further probing questions. If you plan to go in with a pre-prepared list of questions, keep it brief. It’s totally fine to have a larger overall list of questions you’d like to ask — especially for cases when participants are super articulate and rip through them quickly — but limit the number of ‘must-ask’ questions as much as you can, otherwise you might not get through them all.

And on that note, tip #3 does not mean you should hold an extra long session to compensate — nobody wants to sit on the phone for an hour! It can be exhausting and you don’t know if your participant is having to hold the phone up to their ear the entire time — ouch! Keep the session length to under 30 minutes in total. If you feel that’s not enough time to get the answers you need, consider diversifying your approach by running your research in multiple parts.

You might follow up with a short survey after your session or you might include a few additional questions in your screener before the interview to gather more context from your participants. Keep these brief, don’t ask anything you don’t actually need to know and be mindful of taking up too much of your participants’ time — they are giving you a gift, so don’t overdo it.

Some people are quite comfortable talking to complete strangers over the phone and some people aren’t. You’re going to come across all types of people and you may even feel a little nervous before each session — we’ve all been there! I think it’s important to recognize upfront that awkward interruptions and silences in the conversation are going to happen and to embrace it with confidence and humor. It’s not a big deal and your participant will likely be just as nervous as you are. Tackle it together. Keep the conversation light and humorous — make a joke, and if you interrupt the participant, apologize and keep going. Smiling when you talk can also help make you both feel more comfortable — you’ll feel better and they’ll hear the warmth in your voice that will put them at ease too.

With all those juicy insights being delivered entirely verbally down the phone, it’s a good idea to record the conversation if you can. You might miss something and recording the session will allow you to go back to it. I’d also recommend avoiding taking detailed notes during the call. Just immerse yourself in the conversation and type your notes up later from the recording.

Everyone has their own unique approach and style to running research. I’m someone who finds it hard to focus when talking on the phone and I’ve noticed that it’s much easier if I don’t have to do anything else! Do what works for you, but definitely consider recording those sessions where possible to help ensure research traceability and make life easier when sharing with your team. Just note that you’ll need to ask your participants if it’s OK that you record the call.

A silver lining to the challenges of running user interviews over the phone is that your participant can’t see you. You don’t have to worry about how you’re dressed or how you’re sitting or keeping your facial expressions in check. You can have your planning notes and questions lists spread out in front of you or up on your screen without having to worry about them being distracting or potentially leading. You can put yourself on mute if you have to and you get to dial in from a really comfortable place. Make the most of it! Run the session from an environment where you feel relaxed and confident. No matter how many times I’ve done this, I always feel a twinge of the jitters right before I make the call and being in a comfortable and safe space can make all the difference.

Sending a quick thank you note after a user interview conducted over the phone is a nice way to add a human touch to close out the participant experience. It’s also a good time to deliver or confirm details around how any incentives will be granted to the participant for their time.

So there are 8 tips to help you ace interviewing users over the phone! What are your top tips?


A beginner’s guide to qualitative and quantitative research

In the field of user research, every method is either qualitative, quantitative – or both. Understandably, there’s some confusion around these 2 approaches and where the different methods are applicable. This article provides a handy breakdown of the different terms and where and why you’d want to use qualitative or quantitative research methods.

Qualitative research

Let’s start with qualitative research, an approach that’s all about the ‘why’. It’s exploratory and not about numbers, instead focusing on reasons, motivations, behaviors and opinions – it’s best at helping you gain insight and delve deep into a particular problem. This type of data typically comes from conversations, interviews and responses to open questions.

The real value of qualitative research is in its ability to give you a human perspective on a research question. Unlike quantitative research, this approach will help you understand some of the more intangible factors – things like behaviors, habits and past experiences – whose effects may not always be readily apparent when you’re conducting quantitative research.

A qualitative research question could be investigating why people switch between different banks, for example.

When to use qualitative research

Qualitative research is best suited to identifying how people think about problems, how they interact with products and services, and what encourages them to behave a certain way. For example, you could run a study to better understand how people feel about a product they use, or why people have trouble filling out your sign-up form. Qualitative research can be very exploratory (e.g., user interviews) as well as more closely tied to evaluating designs (e.g., usability testing).

Good qualitative research questions to ask include:

  • Why do customers never add items to their wishlist on our website?
  • How do new customers find out about our services?
  • What are the main reasons people don’t sign up for our newsletter?

How to gather qualitative data

There’s no shortage of methods to gather qualitative data, which commonly takes the form of interview transcripts, notes and audio and video recordings.

Here are some of the most widely used qualitative research methods:

  • Usability test – Test a product with people by observing them as they attempt to complete various tasks.
  • User interview – Sit down with a user to learn more about their background, motivations and pain points.
  • Contextual inquiry – Learn more about your users in their own environment by asking them questions before moving onto an observation activity.
  • Focus group – Gather 6 to 10 people for a forum-like session to get feedback on a product.

How many participants will you need?

You don’t often need large numbers of participants for qualitative research; the usual range is somewhere between 5 and 10 people. You’ll likely need more if you’re focusing your work on specific personas, in which case you may need to study 5-10 people for each persona.

While this may seem quite low, consider the research methods you’ll be using. Carrying out large numbers of in-person research sessions requires a significant time investment in terms of planning, actually hosting the sessions and analyzing your findings.

Quantitative research

On the other side of the coin you’ve got quantitative research. This type of research is focused on numbers and measurement: gathering data and transforming it into statistics.

Given that quantitative research is all about generating data that can be expressed in numbers, there are multiple ways you can make use of it. Statistical analysis means you can pull useful facts from your quantitative data, such as trends, demographic information and differences between groups. It’s an excellent way to get a snapshot of your users.

A quantitative research question could involve investigating the number of people that upgrade from a free plan to a paid plan.

When to use quantitative research

Quantitative research is ideal for understanding behaviors and usage. In many cases it’s a lot less resource-heavy than qualitative research (you don’t need to pay incentives or spend time scheduling sessions, for example). With that in mind, you might do some quantitative research early on to better understand the problem space, for example by running a survey on your users.

Here are some examples of good quantitative research questions to ask:

  • How many customers view our pricing page before making a purchase decision?
  • How many customers search versus navigate to find products on our website?
  • How often do visitors on our website change their password?

How to gather quantitative data

Commonly, quantitative data takes the form of numbers and statistics.

Here are some of the most popular quantitative research methods:

  • Card sorts – Find out how people categorize and sort information on your website.
  • First-click tests – See where people click first when tasked with completing an action.
  • A/B tests – Compare 2 versions of a design in order to work out which is more effective.
  • Clickstream analysis – Analyze aggregate data about website visits.
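To make the A/B comparison concrete, here’s a rough sketch of a two-proportion z-test (normal approximation) on task success rates between two design variants. The counts are invented for illustration:

```python
import math

# Variant A: 180 of 400 participants succeeded; variant B: 220 of 400.
success_a, n_a = 180, 400
success_b, n_b = 220, 400

p_a, p_b = success_a / n_a, success_b / n_b

# Pooled success rate under the assumption both variants perform equally.
p_pool = (success_a + success_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}")  # A: 45%  B: 55%  z = 2.83
# As a rule of thumb, |z| > 1.96 corresponds to significance at the 5% level.
```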

How many participants will you need?

While you only need a small number of participants for qualitative research, you need significantly more for quantitative research. Quantitative research is all about quantity: with more participants, you can generate more reliable data to analyze, and in turn you’ll have a clearer understanding of your research problem.

This means that quantitative research can often involve gathering data from thousands of participants through an A/B test, or from 30 through a card sort. Read more about the right number of participants to gather for your research.

Mixed methods research

While there are certainly times when you’d only want to focus on qualitative or quantitative data to get answers, there’s significant value in utilizing both approaches on the same research project.

Interestingly, a number of research methods will generate both quantitative and qualitative data. Take surveys as an example. A survey could include questions that require written answers from participants as well as questions that require participants to select from multiple choices.

Looking back at the earlier example of how people move from a free plan to a paid plan, applying both research approaches to the question will yield a more robust, holistic answer: you’ll know why people upgrade to the paid plan in addition to how many. You can read more about mixed methods research in the next article.

Where to from here?

With an understanding of qualitative and quantitative user research, the next best step would be to start learning more about the various methods that fall under each of these research approaches and how to actually conduct research effectively.

Here are some of the best articles to read next:


What is mixed methods research?

Whether it’s Fortune 500 companies or tiny startups, people are recognizing the value of building products with a user-first methodology. But it’s not enough to merely say “we’re doing research”; it has to be the right UX research: research that combines the richness of different people’s experiences and behavioral insights with tangible numbers and metrics. Key to this is an approach called mixed methods research.

Here, we’ll dive into the what and why of mixed methods research and cover a few examples of the approach.

What is mixed methods research? 🔬

Mixed methods isn’t some overly complicated practice that’ll take years to master — it simply refers to answering research questions through a combination of qualitative and quantitative data. This might mean running both interviews and surveys as part of a research project, or complementing diary study data with analytics looking at the usage of a particular feature.

A basic mixed methods question could be: “What are the key tasks people perform on my website?” To answer this, you’d look at analytics to understand how people navigate through the page and conduct user interviews to better understand why they use the page in the first place. We’ve got more examples below.

It makes sense: using both qualitative and quantitative methods to answer a single research question will mean you’re able to build a more complete understanding of the topic you’re investigating. Quantitative data will tell you what is happening and help you understand magnitude, while qualitative data can tell you why something is happening. Each type of data has its shortcomings, and by using a mixed methods approach you’re able to generate a clearer overall picture.

When should you use mixed methods? 🧐

There’s no single right time to do mixed methods research. Ideally, for every research question you have, evaluate which qualitative and quantitative methods are most likely to give you the best data; more often than not, you’ll benefit from using both approaches.

We’ve put together a few examples of mixed methods research to help you generate your own UX research questions.

Examples of mixed methods research 👨🏫

Imagine this. You’re on the user research team at BananaBank, a fictional bank. You and your team want to investigate how the bank’s customers currently use their digital banking services so your design team can make some user-focused improvements.

We’ve put together a few research questions based on this goal that would best be served by a mixed methods approach.

Question 1: How does people’s usage of online banking differ between desktop and the app?

  • The value of quantitative methods: The team can view usage analytics (how many people use the desktop site versus the mobile app) and look at feature usage statistics.
  • The value of qualitative methods: Interviews with users can answer all manner of questions. For example, the research team might want to find out how customers make their way through certain parts of the interface. Usability testing is an opportunity to watch users as they attempt various tasks (for example, making a transaction).

Question 2: How might you better support people to reach their savings goals?

  • The value of quantitative methods: The team can review current saving behavior patterns, when people stop saving, the longevity of term deposits and other savings-related actions.
  • The value of qualitative methods: Like the first question, the team can carry out user interviews, or conduct a diary study to better understand how people set and manage savings goals in real life and what usually gets in the way.

Question 3: What are the problem areas in our online signup form?

  • The value of quantitative methods: The team can investigate where people get stuck on the current form, how frequently people run into error messages and the form fields that people struggle to fill out or leave blank.
  • The value of qualitative methods: The team can observe people as they make their way through the signup form.

Mixed methods = holistic understanding 🤔

As we touched on at the beginning of this article, mixed methods research isn’t a technique or methodology; it’s a practice you should develop to gain a more holistic understanding of the topic you’re investigating. What’s more, using both types of methods will often mean you’re able to validate the output of one method with another.

When you plan your next research activity, consider complementing it with additional data to generate a more comprehensive picture of your research problem.

Further reading 📚🎧☕


Arts, crafts and user feedback: How to engage your team through creative play

Doing research is one difficult task — sharing the results with your team is another. Reports can be skim-read, forgotten and filed away. People can drift off into a daydream during slideshow presentations, and others may not understand what you’re trying to communicate.

This is a problem that many research teams encounter, and it made me think a lot about how to get the wider team to really engage with user feedback. While we at Optimal Workshop have a bunch of documents and great tools like Intercom, Evernote and Reframer to capture all our feedback, I wanted to figure out how I could make it fun and engaging to get people to read what our users tell us.

How can we as designers and researchers better translate our findings into compelling insights and anecdotes for others to embrace and enjoy? After some thought and a trip to the craft store, I came up with this workshop activity that was a hit with the team.

Crafting feedback into art

Each fortnight we’ve been taking turns at running a full company activity instead of doing a full company standup (check-in). Some of these activities have included things like pairing up and going for a walk along the waterfront to talk about a challenge we’re currently facing, or a goal we each have. During my turn I came up with the idea of an arts and crafts session to get the team more engaged in reading some of our user feedback.

Before the meeting, I asked every team member to bring one piece of user feedback that they found in Intercom, Evernote or Reframer. This feedback could be positive, such as “Your customer support team is awesome”; a suggestion, such as “It would be great to be able to hover over tags and see a tooltip with the description”; or negative (an opportunity), such as “I’m annoyed and confused with how recruitment works”.

This meant that everyone in the team had to dig through the systems and tools we use and look for insights (nuggets) as their first task. It also helped the team gain an appreciation for how much data and feedback our user researchers had been gathering.

A photo of the feedback art hung up on the walls of the office

After we all had one piece of feedback each, I told everyone they’d get to spend the next half hour doing arts and crafts. They could use whatever they could find to create a poster, postcard or visual interpretation of the insight they had. I provided colored card, emoji stickers, stencils, printed-out memes, glitter and glue.

During the next 30 minutes I stood back and watched everybody grinning and talking about their posters. The best thing was they were actually sharing their pieces of feedback with one another! We had everyone from dev, marketing, design, operations and finance participating, which meant that people from all kinds of departments had a chance to read feedback from our customers.

More feedback art on the walls

At the end of the meeting we created a gallery in the office and we all spent time reading each poster because it was so fun to see what everyone came up with. We also hung up a few of these in spots around the office that get a lot of foot traffic, so that we can all have a reminder of some of the things our customers told us. I hope that each person took something away with them, and in the future, when working on a task they’ll remember back to a poster and think about how to tackle some of these requests!

Steve and Ania making a mess and crafting their posters

How to run a creative play feedback workshop

Time needed: 30 minutes

  • Insights: Print off a pile of customer insights or encourage the team to find and bring in their own. Have backups, as some might be hard to turn into posters.
  • Tools: Scissors, glue sticks, blue tack for creating the gallery.
  • Crafts: Paper, pens, stickers, stencils, googly eyes (anything goes!)

Another poster decorating the walls of Optimal HQ

Interested in other creative ways to tell stories? Our user researcher Ania shares 8 creative ways to share your user research. If you do something similar in your team, we’d love to hear about it in the comments below!
