
Qualitative Research


13 time-saving tips and tools for conducting great user interviews

User interviews are a great research method for gathering qualitative data about your users and understanding what they think and feel. But they can be quite time consuming, which can sometimes put people off doing them altogether.

They can also be a bit of a logistical nightmare to organize. You need to recruit participants, nail down a time and place, bring your gear, and come up with a Plan B for when people don’t show up. All of this can mean a fair bit of back and forth between your research team and other people, and it’s a real headache when you have a deadline to work to.

So, how can you reap the great rewards and insights that user interviews provide, while spending less time planning and organizing them? Here are 13 tips and tools to help get you started.

Preparation

1) Come up with a checklist

Checklists can be lifesavers, especially when your brain is running 100 miles an hour and you’re wondering if you’ve forgotten to even introduce yourself to your participant. Whether you’re doing your research remotely or in person, it always helps to have a list of all the tasks you need to do so you can check them off one by one.

A great checklist should include:

  • the items you need to bring to your sessions (notebooks, laptop, pens, water, and do NOT forget your laptop charger!)
  • any links you need to send to your interviewee if speaking to them remotely (Google Hangouts, Webex, etc.)
  • a reminder to get consent to record your interview session
  • a reminder to hit the record button

Scripts are also useful for cutting down time. Instead of “umm-ing” and “ahh-ing” your way through your interview, you can have a general idea of what you’ll talk about. Scripts will likely change between each project, but having a loose template that you can chop and change pretty easily will help you save time in the future.

Some basic things you’ll want to include in your script:

  • an introduction of yourself, and some ice-breaker questions to build a rapport with your participant
  • your research goals and objectives — what/who you’re doing this research for and why
  • how your research will be used
  • the questions you’re going to ask
  • tying up loose ends — answering questions from your participant and thanking them very much for their time.

2) Build up a network of participants to choose from

This is another tip that requires a bit of legwork at the start, but saves lots of hassle later on. If you build up a great network of people willing to take part in your research, recruiting becomes much easier.

Perhaps you can set up a research panel that people can opt into through your website (something we’ve done here at Optimal Workshop that has been a huge help). If you’re working internally and need to interview users at your own company, you can do a similar thing. Reach out to managers or team leaders to get employees on board, get creative with incentives, and reward people with public thanks or cake — there are loads of ideas.

3) Do your interviews remotely

Remote user research is great. It allows you to talk to all types of people anywhere in the world, without having to spend time and money on travel to get to them. There are many different tools you can use to conduct your user interview remotely. Some easy-to-use and free ones are Google Hangouts and Skype. As a bonus, it’s likely your participants will already have one of these installed, saving them time and hassle — just don’t forget to record your session.

Here are a few recording tools you can use:

  • QuickTime
  • iShowU HD
  • Pamela for Skype

4) Rehearse, rehearse, rehearse

Make sure you’re not wasting any precious research time: rehearse your interview with a colleague or friend. This will help you figure out anything you’ve missed and spot anything that could go wrong and cause delays and headaches on the day.

  • Do your questions make sense, and are they the right kinds of questions?
  • Test your responses — are you making sure you stay neutral so you don’t lead your participants along?
  • Does your script flow naturally? Or does it sound too scripted?
  • Are there any areas where technology could become a hindrance, and how can you make sure you avoid this?

5) Use scheduling tools to book sessions for you

Setting up meetings with colleagues can be difficult, but when you’re reaching out to participants who are volunteering their precious time, it can be a nightmare. Make it easier for everyone involved and use an easy scheduling tool to take care of most of the hard work. Simply enter a few times that you’re free to host sessions, and your participants can select which ones work for them.

Here are a few tools to get you started:

  • Calendly
  • NeedtoMeet
  • Boomerang Calendar
  • ScheduleOnce

Don’t forget to automate the reminder emails to save yourself some time. Some of the above tools can sort that out for you!
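If your tool of choice doesn’t handle reminders, even a short script can fill the gap. Here’s a minimal sketch using Python’s standard library; the addresses, SMTP server, and credentials are placeholders you’d swap for your own:

```python
import smtplib
from email.message import EmailMessage

# Placeholder details -- swap in your own mail server, account, and participants.
SMTP_HOST = "smtp.example.com"
FROM_ADDR = "researcher@example.com"
SESSIONS = [
    {"email": "participant@example.com", "name": "Alex", "time": "Tuesday 2:00pm"},
]

def send_reminder(session):
    msg = EmailMessage()
    msg["Subject"] = f"Reminder: our user interview at {session['time']}"
    msg["From"] = FROM_ADDR
    msg["To"] = session["email"]
    msg.set_content(
        f"Hi {session['name']},\n\n"
        f"Just a friendly reminder about our session at {session['time']}. "
        "Reply to this email if you need to reschedule.\n\nThanks!"
    )
    with smtplib.SMTP(SMTP_HOST, 587) as server:
        server.starttls()
        server.login(FROM_ADDR, "app-password")  # placeholder credentials
        server.send_message(msg)

for session in SESSIONS:
    send_reminder(session)
```

Run it the day before your sessions (or wire it up to a scheduler like cron) and that’s one more thing off your plate.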

In-session

6) Avoid talking about yourself — stick to your script!

When you’re trying to build a rapport with your participant, it’s easy to go overboard, get off track and waste precious research time. Avoid talking about yourself too much, and focus on asking about your participant, how they feel, and what they think. Make sure you keep your script handy so you know if you’re heading in the wrong direction.

7) Record interviews, transcribe later

In many user interview scenarios, you’ll have a notetaker to jot down key observations as your session goes on. But if you don’t have the luxury of a notetaker, you’ll likely be relying on yourself to take notes. This can be really distracting when you’re interviewing someone, and will also take up precious research time. Instead, record your interview and only note down timestamps when you come across a key observation.
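If you want those timestamps without watching the clock, a few lines of Python can log them for you. This is just a sketch (the filename and note format are arbitrary): start it when you hit record, then type a quick note and press Enter whenever something interesting happens.

```python
import time

# Start this script when you hit the record button. Each note you type is
# saved with its elapsed time, so you can jump straight to the good bits later.
start = time.time()
print("Logging started. Type a note + Enter; Ctrl-C to finish.")

with open("session_timestamps.txt", "a") as log:
    try:
        while True:
            note = input("> ")
            elapsed = int(time.time() - start)
            log.write(f"[{elapsed // 60:02d}:{elapsed % 60:02d}] {note}\n")
            log.flush()  # save immediately in case the session ends abruptly
    except KeyboardInterrupt:
        print("\nDone. Notes saved to session_timestamps.txt")
```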

8) Don’t interrupt

Ever had something to say and started to explain it to someone, only to get interrupted then lose your train of thought? This can happen to your participants if you’re not careful, which can mean delays with getting the information you need. Stay quiet, and give your participant a few seconds before asking what they’re thinking.

9) Don’t get interrupted

If you’re hosting your interview at your office, let your coworkers know so they don’t interrupt you. Hang a sign up on the door of your meeting room and make sure you close the door. If you’re going out of your office, pick a location that’s quiet and secluded like a meeting room at a library, or a quiet corner in a cafe.

10) Take photos of the environment

If you’re interviewing users in their own environment, there are many little details that can help you with your research. But you could spend ages taking note of all these details in your session. You can get a good idea of what your participant’s day is like by snapping some images of their workstations, tech they use, and the office as a whole. Use your phone and pop these into Evernote or Dropbox to analyze later.

Analysis

11) Use Reframer to analyze your data

Qualitative research produces very powerful data, but it also produces a lot of it. It can take you and your team hours, even days, to go through it all. Use a qualitative research tool such as Reframer to tag your observations so you can easily build themes and find patterns in your data while saving hours of analysis. Tags might be related to a particular subject you’re discussing with a participant, a really valuable quote, or even certain problems your participants have encountered — it all depends on your project.
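Reframer does the heavy lifting here, but the idea behind tagging is simple enough to sketch in plain Python. The observations and tags below are made up for illustration (this isn’t Reframer’s actual data format); it just shows how counting tag frequencies surfaces candidate themes:

```python
from collections import Counter

# Made-up observations with tags, purely for illustration.
observations = [
    {"note": "Couldn't find the search bar", "tags": ["navigation", "search"]},
    {"note": "Loved the onboarding checklist", "tags": ["onboarding", "win"]},
    {"note": "Gave up on the filter menu", "tags": ["navigation", "frustration"]},
    {"note": "Asked where search had gone", "tags": ["search", "frustration"]},
]

# Count how often each tag appears; frequent tags point to themes.
tag_counts = Counter(tag for obs in observations for tag in obs["tags"])
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")
```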

12) Make collaboration simple

Instead of spending hours writing up some of your findings on Post-it notes and sticking them up on a wall to discuss with your teammates, you can quickly and easily do this online with Trello or MURAL. This is definitely a big timesaver if you’ve got some team members who work remotely.

13) Make your findings easy to read

Presenting your findings to stakeholders can be difficult, and extremely time consuming if you need to explain it all in easy-to-understand terms. Save time and make it easier for your stakeholders by compiling your findings into an infographic, an engaging data visualization, or a slideshow presentation. Just make sure you bring all the stats you need to answer any questions from stakeholders.

For more actionable tips and tricks from UX professionals all over the world, check out our latest ebook. Download and print out templates and checklists, and become a pro for your next user interview.

Get our new ebook

Related reading


Collating your user testing notes

It’s been a long day. Scratch that - it’s been a long week! Admit it. You loved every second of it.

Twelve-hour days, the mad scramble to get the prototype ready in time, the stakeholders poking their heads in occasionally, dealing with no-show participants, and the excitement around the opportunity to speak to real-life human beings about product or service XYZ. Your mind is exhausted but you are buzzing with ideas and processing what you just saw. You find yourself sitting in your war room with several pages of handwritten notes, and with your fellow observers you start popping open individually wrapped lollies left over from the day’s sessions. Someone starts a conversation about their favourite flavour, and then the real fun begins. Sound familiar? Welcome to the post-user-testing debrief meeting.

How do you turn those scribbled notes and everything rushing through your mind into a meaningful picture of the user experience you just witnessed? And when you have that picture, what do you do next? Pull up a bean bag, grab another handful of those lollies we feed our participants, and get comfy, because I’m going to share my idiot-proof, step-by-step guide for turning your user testing notes into something useful.

Let’s talk

Get the ball rolling by holding a post-session debrief meeting while it’s all still fresh in your collective minds. This can be done as one meeting at the end of the day’s testing, or you could have multiple quick debriefs in between testing sessions. Choose whichever option works best for you, but keep in mind this needs to be done at least once, and before everyone goes home and forgets everything. Get all observers and facilitators together in any meeting space that has a wall-like surface you can stick Post-its to - you can even use a window! And make sure you use real Post-its - the fake ones fall off!

Mark your findings (Tagging)

Before you put Sharpie to Post-it, it’s essential to agree as a group on how you will tag your observations. Tagging the observations now will make the analysis work much easier and help you to spot patterns and themes. Colour coding the Post-its is by far the simplest and most effective option, and how you assign the colours is entirely up to you. You could have a different colour for each participant or testing session, you could have different colours to denote participant attributes that are relevant to your study (e.g. senior staff and junior staff), or you could use different colours to denote specific testing scenarios. There are many ways you could carve this up, and there’s no right or wrong way. Just choose the option that suits you and your team best, because you’re the ones who have to look at it and understand it. If you only have Post-its in one colour (e.g. yellow), you could colour code by pen colour instead, or include some kind of symbol to help you track them.

Processing the paper (Collating)

That pile of paper is not going to process itself! Your next job as a group is to work through the task of transposing your observations to Post-it notes. For now, just stick them to the wall in any old way that suits you. If you’re the organising type, you could group them by screen or testing scenario. The positioning will all change further down the process, so at this stage it’s important to just keep it simple. For issues that occur repeatedly across sessions, write each occurrence on its own Post-it - the duplicates will be useful to see further down the track.

In addition to holding debrief meetings, you also need to round up everything that was used to capture the testing session/s. And I mean EVERYTHING.

Handwritten notes, typed notes, video footage, and any audio recordings need to be reviewed just in case something was missed. Any handwritten notes should be typed up to assist you with the completion of the report. Don’t feel that you have to wait until the testing is completed before you start typing up your notes, because you’ll find they pile up very quickly, and if your handwriting is anything like mine… well, let’s just say my short-term memory is often required to pick up the slack, and even that has its limits. Type them up in between sessions where possible and save each session as its own document. I’ll often use the testing questions or scenario-based tasks to structure my typed notes, and I find that makes it really easy to refer back to.

Now that you’ve processed all the observations, it’s time to start sorting them to surface behavioural patterns and make sense of it all.

Spotting patterns and themes through affinity diagramming

Affinity diagramming is a fantastic tool for making sense of user testing observations. In fact, it’s just about my favourite way to make sense of any large mass of information. It’s an engaging and visual process that grows and evolves like a living creature taking on a life of its own. It also builds on the work you’ve just done, which is a real plus!

By now, testing is over and all of your observations should be stuck to a wall somewhere. Get everyone together again as a group, step back, and take it all in. Just let it sit with you for a moment before you dive in. Just let it breathe. Have you done that? OK, now, as individuals working at the same time, start by grouping things that you think belong together. It’s important to just focus on the content of the labels and try to ignore the colour-coded tagging at this stage, so if session one was blue Post-its, don’t group all the blue ones together just because they’re all blue! If you get stuck, try grouping by topic, or create two groups (e.g. issues and wins) and then chunk the information up from there.

You will find that the groups will change several times over the course of the process, and that’s OK, because that’s what it needs to do. While you do this, everyone else will be doing the same thing: grouping things that make sense to them. Trust me, it’s nowhere near as chaotic as it sounds! You may start working as individuals, but it won’t be long before curiosity kicks in and the room is buzzing with naturally occurring conversation.

Make sure you take a step back regularly to observe what everyone else is doing, and don’t be afraid to ask questions and move other people’s Post-its around - no one owns it! No matter how silly something may seem, just put it there, because it can always be moved again. Have a look at where your tagged observations have ended up. Are there clusters of colour? Or is it more spread out? What that means will depend largely on how you decided to tag your findings. For example, if you assigned each testing session its own colour and you have groups with lots of different colours in them, you’ll know that the same issue was experienced by multiple people.

Next, start looking at each group and see if you can break it down into smaller groups, and at the same time consider the overall picture for bigger groups (e.g. can the wall be split into, say, three high-level groups?). Remember, you can still change your groups at any time.

Thinning the herd (Merging)

Once you and your team are happy with the groups, it’s time to start condensing the size of this beast. Look for doubled-up findings and stack those Post-its on top of each other to cut the groups down - just make sure you can still see how many there were. The point of merging is to condense without losing anything, so don’t remove something just because it only happened once. That one issue could be incredibly serious. Continue to evaluate and discuss as a group until you are happy. By now, clear and distinct groups of observations should have emerged, and at a glance you should be able to identify the key findings from your study.

A catastrophe or a cosmetic flaw? (Scoring)

Scoring relates to how serious the issues are, and how bad the consequences of not fixing them would be. There are arguments for and against the use of scoring, and it’s important to recognise that it is just one way to communicate your findings.

I personally rarely use scoring systems. It’s not really something I think about when I’m analysing the observations, and I rarely rank one problem or finding over another. Why? Because all data is good data, and it all adds to the overall picture. I’ve always been a huge advocate for presenting the whole story, and I will never diminish the significance of a finding by boosting another. That said, I do understand the perspective of those who place metrics around their findings. Other designers have told me they feel that it allows them to quantify the seriousness of each issue and help their client/designer/boss make decisions about what to do next.

We’ve all got our own way of doing things, so I’ll leave it up to you to choose whether or not you score the issues. If you decide to score your findings, there are a number of scoring systems you can use; if I had to choose one, I quite like Jakob Nielsen’s methodology for the simple way it takes multiple factors into consideration. Ultimately, you should choose the one that suits your working style best.

Let’s say you did decide to score the issues. Start by writing down each key finding on its own Post-it and move to a clean wall or window. Leave your affinity diagram where it is. Divide the new wall in half: one side for wins (findings that indicate things that tested well) and the other for issues. You don’t need to score the wins, but you do need to acknowledge what went well, because knowing what you’re doing well is just as important as knowing where you need to improve. As a group (wow, you must be getting sick of each other! Make sure you go out for air from time to time!), score the issues based on your chosen methodology.

Once you have completed this entire process, you will have everything you need to write a kick-ass report.
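For the curious, Nielsen’s severity scale runs from 0 (“not a usability problem”) up to 4 (“usability catastrophe”). If your findings end up in a spreadsheet or script rather than on a wall, sorting by severity is trivial; here’s a tiny sketch with invented findings:

```python
# Nielsen's severity ratings: 0 = not a problem, 1 = cosmetic, 2 = minor,
# 3 = major, 4 = catastrophe. The findings below are invented for illustration.
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophe"}

findings = [
    ("Checkout button hidden below the fold", 4),
    ("Inconsistent link colours on the help page", 1),
    ("Error messages use internal jargon", 3),
]

# Lead the report with the most severe issues.
for finding, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"[{score}: {SEVERITY[score]}] {finding}")
```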

What could possibly go wrong? (and how to deal with it)

No process is perfect and there are a few potential dramas to be aware of:

People jumping into solution mode too early

In the middle of the debrief meeting, someone has an epiphany. Shouts of “We should move the help button!” or “We should make the yellow button smaller!” ring out, and the meeting goes off the rails.

I’m not going to point fingers and blame any particular role, because we’ve all done it, but it’s important to recognise that’s not why we’re sitting here. The debrief meeting is about digesting and sharing what you and the other observers just saw. Observing and facilitating user testing is a privilege. It’s a precious thing that deserves respect, and if you jump into solution mode too soon, you may miss something. Keep the conversation on track by appointing a team member to facilitate the debrief meeting.

Storage problems

Handwritten notes taken by multiple observers over several days of testing add up to an enormous pile of paper. Not only is it a ridiculous waste of paper, but the notes have to be securely stored for three months following the release of the report. It’s not pretty. Typing them up can solve that issue, but it comes with its own set of storage-related hurdles. Just like the handwritten notes, typed notes need to be stored securely. They don’t belong on SharePoint, in the share drive, or in any other shared storage environment that can be accessed by people outside your observer group. User testing notes are confidential and are not light reading for anyone and everyone, no matter how much they complain. Store any typed notes in a limited-access storage solution that only the observers can reach, and if anyone who shouldn’t be reading them asks, tell them that they are confidential and the integrity of the research must be preserved and respected.

Time issues

Before the storage dramas begin, you have to actually pick through the mountain of paper - not to mention the video footage and the audio - and you have to chase up that sneaky observer who disappeared when the clock struck five. All of this takes up a lot of time. Another time-related issue comes in the form of too much time passing between testing sessions and debrief meetings. The best way to deal with both of these issues is to be super organised and hold multiple smaller debriefs in between sessions where possible. As a group, work out your time commitments before testing begins and have a clear plan in place for when you will meet. This will prevent everything piling up and overwhelming you at the end.

Disagreements over scoring

At the end of that long day (or week), we’re all tired, and discussions around scoring the issues can get a little heated. One person’s showstopper may be another person’s mild issue. Many ranking systems use words as well as numbers to measure severity, and it’s easy to get caught up in the meaning of the words and ultimately get sidetracked from the task at hand. Be proactive: as a group, set ground rules upfront for all discussions. Determine how long you’ll spend discussing an issue and what you will do in the event that agreement cannot be reached. People want to feel heard, and they want to feel like their contributions are valued. Given that we’re talking about an iterative process, sometimes it’s best just to write everything down to keep people happy, then merge and cull the list in the next iteration. By then, they’ve likely had time to re-evaluate their own thinking.

And finally...

We all have our own ways of making sense of our user testing observations and there really is no right or wrong way to go about it. The one thing I would like to reiterate is the importance of collaboration and teamwork. You cannot do this alone, so please don’t try. If you’re a UX team of one, you probably already have a trusted person that you bounce ideas off. They would be a fantastic person to do this with. How do you approach this process? What sort of challenges have you faced? Let me know in the comments below.


Optimal vs Dovetail: Why Smart Product Teams Choose Unified Research Workflows

UX, product and design teams face growing challenges with tool proliferation, relying on different options for surveys, usability testing, and participant recruitment before transferring data into analysis tools like Dovetail. This fragmented workflow creates significant data integration issues and reporting bottlenecks that slow down teams trying to conduct smart, fast UX research. The constant switching between platforms not only wastes time but also increases the risk of data loss and inconsistencies across research projects. Optimal addresses these operational challenges by unifying the entire research workflow within a single platform, enabling teams to recruit participants, run tests and studies, and perform analysis without the complexity of managing multiple tools.

Why Choose Optimal over Dovetail? 

Fragmented Workflow vs. Unified Research Operations

  • Dovetail's Tool Chain Complexity: Dovetail requires teams to coordinate multiple platforms—one for recruitment, another for surveys, a third for usability testing—then import everything for analysis, creating workflow bottlenecks and coordination overhead.
  • Optimal's Streamlined Workflow: Optimal eliminates tool chain management by providing recruitment, testing, and analysis in one platform, enabling researchers to move seamlessly from study design to actionable insights.
  • Context Switching Inefficiency: Dovetail users constantly switch between different tools with different interfaces, learning curves, and data formats, fragmenting focus and slowing research velocity.
  • Focused Research Flow: Optimal's unified interface keeps researchers in flow state, moving efficiently through research phases without context switching or tool coordination.

Data Silos vs. Integrated Intelligence

  • Fragmented Data Sources: Dovetail aggregates data from multiple external sources, but this fragmentation can create inconsistencies, data quality issues, and gaps in analysis that compromise insight reliability.
  • Consistent Data Standards: Optimal's unified platform ensures consistent data collection standards, formatting, and quality controls across all research methods, delivering reliable insights from integrated data sources.
  • Manual Data Coordination: Dovetail teams spend significant time importing, formatting, and reconciling data from different tools before analysis can begin, delaying insight delivery and increasing error risk.
  • Automated Data Integration: Optimal automatically captures and integrates data across all research activities, enabling real-time analysis and immediate insight generation without manual data management.

Limited Data Collection vs. Global Research Capabilities

  • No Native Recruitment: Dovetail's beta participant recruitment add-on lacks the scale and reliability enterprise teams need, forcing dependence on external recruitment services with additional costs and complexity.
  • Global Participant Network: Optimal's 200+ million verified participants across 150+ countries provide comprehensive recruitment capabilities with advanced targeting and quality assurance for any research requirement.
  • Analysis-Only Value: Dovetail's value depends entirely on research volume from external sources, making ROI uncertain for teams with moderate research needs or budget constraints.
  • Complete Research ROI: Optimal delivers immediate value through integrated data collection and analysis capabilities, ensuring consistent ROI regardless of external research dependencies.

Dovetail Challenges:

Dovetail may slow teams because of challenges with: 

  • Multi-tool coordination requiring significant project management overhead
  • Data fragmentation creating inconsistencies and quality control challenges
  • Context switching between platforms disrupting research flow and focus
  • Manual data import and formatting delaying insight delivery
  • Complex tool chain management requiring specialized technical knowledge

When Optimal is the Right Choice

Optimal becomes essential for:

  • Streamlined Workflows: Teams needing efficient research operations without tool coordination overhead
  • Research Velocity: Projects requiring rapid iteration from hypothesis to validated insights
  • Data Consistency: Studies where integrated data standards ensure reliable analysis and conclusions
  • Focus and Flow: Researchers who need to maintain deep focus without platform switching
  • Immediate Insights: Teams requiring real-time analysis and instant insight generation
  • Resource Efficiency: Organizations wanting to maximize researcher productivity and minimize tool management

Ready to move beyond basic feedback to strategic research intelligence? Experience how Optimal's analytical depth and comprehensive insights drive product decisions that create competitive advantage.


