August 15, 2021
5 min

Mixed methods research in 2021

User experience research is essential to developing a product that truly engages, compels and energises people. We all want a website that is easy to navigate, simple to follow and helps users complete their tasks, or an app that supports and drives engagement.

We’ve talked a lot about the various types of research tools that help improve these outcomes. 

There is a rising research trend in 2021.

Mixed methods research. What could be more compelling than quantitative user research tools? Combining them with rich qualitative research. Asking the same questions in different ways can provide deeper insight into how our users think and operate, empowering you to develop products that truly speak to your users, answer their questions and address their frustrations.

It isn’t enough to simply ‘do research’, though. As with anything, you need to approach it with strategy, focus and direction, funnelling your time, money and energy into the areas that will generate the best results.

Mixed methods UX research is the research trend of 2021

With the likes of Facebook, Amazon, Etsy, eBay, Ford and many other large organizations advertising newly created roles for mixed methods researchers, it's clear where the research trend is heading.

Gathering data, diving deeper and generating insights that tell us more about our users than ever before is no longer just nice to have; it's becoming imperative. And you don't need to be Facebook to reap the benefits. Mixed methods research can be applied across the board, from something as narrow as finding out how your homepage is performing through to analysing your entire product design in depth.

And with all of these massive organizations moving to grow their data collection and research teams, why wouldn't you?

The value of mixed methods research is profound. Imagine understanding what, where, how and why your customers want to use your service, and catering directly to them. The more we understand our customers, the deeper the relationship and the more likely we are to keep them engaged.

Diving deep into the reasons our users like (or don't like) how our products operate can also drive your organization to target and operate at a higher level: gearing your energies towards attracting and keeping the right type of customer, providing the right level of service and aftercare, and potentially reducing overheads by not over-delivering where it isn't needed.

What is mixed method research?

Mixed methods research isn’t overly complicated, and doesn’t take years to master. It's simply a term for using a combination of quantitative and qualitative data. This may mean using a research tool such as card sorting alongside interviews with users.

Quantitative research is the tangible numbers and metrics that can be gathered through user research such as card sorting or tree testing.

Qualitative research is research around users’ behaviour and experiences. This can be through usability tests, interviews or surveys.

For instance, you may be asking ‘how should I order the products on my site?’. Card sorting gives you the data insights that show how users would like to see the products sorted. Coupled with interviews, you get the why.

Understanding the thinking behind the order reveals why one user expects to find gym shorts under 'shorts' while another looks for them under 'active wear'. A deeper understanding of how and why users decide where content belongs will help you create a highly intuitive website.
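To make the pairing concrete, here's a minimal sketch of how the two strands can sit side by side. The card placements and interview quotes below are entirely hypothetical; the point is simply that the quantitative tally tells you where the card went, and the qualitative note tells you why.

```python
from collections import Counter

# Hypothetical card sort results: where each participant placed the
# "gym shorts" card.
placements = ["shorts", "activewear", "activewear", "shorts", "activewear"]

# Hypothetical interview notes explaining the "why" behind each placement.
interview_notes = {
    "shorts": ['"I shop by garment type, not by activity."'],
    "activewear": ['"Everything I wear to the gym belongs together."'],
}

# Quantitative side: how often was the card placed in each category?
counts = Counter(placements)
for category, count in counts.most_common():
    share = count / len(placements)
    print(f"{category}: {count} of {len(placements)} participants ({share:.0%})")
    # Qualitative side: pair the number with the reasoning behind it.
    for quote in interview_notes.get(category, []):
        print(f"  why: {quote}")
```

Even at this toy scale, a split result is exactly the kind of thing the numbers alone can't resolve; the interview quotes tell you which mental model to design for.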

Another great reason for mixed methods research is to back up data insights for stakeholders. With a depth and breadth of qualitative and quantitative research informing decisions, it becomes clearer why changes may need to be made, or why product designs need to be challenged.

How to do mixed methods research

Take a look at our article for more examples of the uses of mixed methods research.

Simply put, mixed methods research means coupling quantitative research, such as tree testing, card sorting or first-click testing, with qualitative research such as surveys, interviews or diary studies.

Say, for instance, the product manager has identified an issue with keeping users engaged on the homepage of your website. We would start by asking where users get stuck and when they leave.

This can be done using a first-click tool, such as Chalkmark, which will map where users head when they land on your homepage and beyond. 

This will give you the initial quantitative data. However, it may only give you part of the picture. Couple it with qualitative data, such as observing (and reporting on) body language, or interviewing users directly after their session so you can understand why they found the process confusing or misleading.
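As a rough illustration of how the two halves meet, here's a minimal sketch that computes first-click success rates from made-up results (this isn't Chalkmark's actual export format) and flags low-scoring tasks for qualitative follow-up.

```python
# Hypothetical first-click results for homepage tasks: True means the
# participant's first click was on the element that leads to task success.
first_clicks = {
    "Find the returns policy": [True, False, False, True, False, False],
    "Start a live chat": [True, True, True, False, True, True],
}

# Hypothetical post-session interview notes for the same tasks.
follow_up_notes = {
    "Find the returns policy": "Most participants expected it under 'Help', not the footer.",
}

for task, clicks in first_clicks.items():
    success_rate = sum(clicks) / len(clicks)
    print(f"{task}: {success_rate:.0%} correct first clicks")
    # A low success rate is the prompt to go looking for the qualitative "why".
    if success_rate < 0.5:
        print(f"  follow up: {follow_up_notes.get(task, 'schedule interviews')}")
```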

A fuller picture means a better understanding.

The key is to identify what your question is and home in on it through both methods. Ultimately, you're answering your question from both sides of the coin.

Upcoming research trends to watch

If you're keeping an eye on the progression of mixed methods research, these are the trends to watch:

1. Integrated Surveys

Rather than thinking of user surveys as a one-time, in-person event, we're increasingly seeing surveys implemented through social media, on websites and via email. This means that data can be gathered frequently and across the board. This longitudinal data allows organizations to continuously analyse, interpret and improve products without ever really stopping.

Rather than relying on users' memories of events and experiences, data can be gathered in the moment, at the time of purchase or interaction, increasing the reliability and quality of the data collected.

2. A return to social research

Customer research is rooted in the focus group: a collection of participants in one space, voicing their opinions and reaching insights collectively. This used to be an overwhelming task, with days or even weeks spent analysing unstructured forums and group discussions.

Now, with the advent of online research tools, this can also be a practical way to round out mixed methods research.

3. Co-creation

Using your customers' input to build better products has long been seen as a way to drive more innovative development. Until recently, though, it was cumbersome and difficult to wrangle more than a few participants. Now, a number of tools in development promise to make co-creation the buzzword of the decade.

4. Owned Panels & Community

Beyond community engagement in the social sphere, there is a massive opportunity to involve these engaged users in product development. Through a trusted forum, users are far more likely to actively and willingly participate in research, providing insights that will drive stronger product outcomes.

What does this all mean for me?

So, there is a lot to keep in mind when conducting effective user research, and there are a lot of very compelling reasons to do mixed methods research, and to do it regularly.

To remain innovative and ahead of the curve, it's important to stay engaged with your users and their needs. Using qualitative and quantitative research to inform product decisions means you're operating with a fuller picture.

One of the biggest challenges with user research can be coordination and participant recruitment. That's where we come in.

We take the pain out of the process and streamline your research. Take a look at our qualitative research tool, Reframer, for an insight into how we can make your mixed methods research easier and help you analyse your data efficiently, in a format that is easy to understand.

User research doesn't need to take weeks or months. With our participant recruitment service we can provide quality participants across the board, delivering data you can rely on.

Why not dive deeper into mixed methods research today?

Author: Optimal Workshop

Related articles


Collating your user testing notes

It’s been a long day. Scratch that - it’s been a long week! Admit it. You loved every second of it.

Twelve hour days, the mad scramble to get the prototype ready in time, the stakeholders poking their heads in occasionally, dealing with no-show participants and the excitement around the opportunity to speak to real life human beings about product or service XYZ. Your mind is exhausted but you are buzzing with ideas and processing what you just saw. You find yourself sitting in your war room with several pages of handwritten notes and with your fellow observers you start popping open individually wrapped lollies leftover from the day’s sessions. Someone starts a conversation around what their favourite flavour is and then the real fun begins. Sound familiar? Welcome to the post user testing debrief meeting.

How do you turn those scribbled notes and everything rushing through your mind into a meaningful picture of the user experience you just witnessed? And then when you have that picture, what do you do next? Pull up a bean bag, grab another handful of those lollies we feed our participants and get comfy because I’m going to share my idiot-proof, step by step guide for turning your user testing notes into something useful.

Let’s talk

Get the ball rolling by holding a post-session debrief meeting while it's all still fresh in your collective minds. This can be done as one meeting at the end of the day's testing, or you could have multiple quick debriefs in between testing sessions. Choose whichever option works best for you, but keep in mind this needs to be done at least once, and before everyone goes home and forgets everything. Get all observers and facilitators together in any meeting space that has a wall-like surface you can stick post its to - you can even use a window! And make sure you use real post its - the fake ones fall off!

Mark your findings (Tagging)

Before you put sharpie to post it, it's essential to agree as a group on how you will tag your observations. Tagging the observations now will make the analysis work much easier and help you to spot patterns and themes. Colour coding the post its is by far the simplest and most effective option, and how you assign the colours is entirely up to you. You could have a different colour for each participant or testing session, you could have different colours to denote participant attributes that are relevant to your study eg senior staff and junior staff, or you could use different colours to denote specific testing scenarios that were used. There are many ways you could carve this up and there's no right or wrong way. Just choose the option that suits you and your team best, because you're the ones who have to look at it and understand it. If you only have one colour of post it eg yellow, you could colour code the pen colours you use to write on the notes or include some kind of symbol to help you track them.
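If you later type your observations up (more on that shortly), the same tagging idea carries over directly to digital notes. As a minimal sketch, with made-up observations, a colour code simply becomes a label you can filter on:

```python
# Hypothetical typed-up observations, each tagged with the session it came
# from and the participant attribute the colour coding represented.
observations = [
    {"note": "Hesitated on the pricing page", "session": 1, "attribute": "junior staff"},
    {"note": "Couldn't find the export button", "session": 1, "attribute": "senior staff"},
    {"note": "Couldn't find the export button", "session": 2, "attribute": "junior staff"},
]

def with_tag(observations, key, value):
    """Filter observations by a single tag, e.g. session number or attribute."""
    return [o for o in observations if o[key] == value]

# The digital equivalent of scanning the wall for one colour of post it.
for obs in with_tag(observations, "attribute", "junior staff"):
    print(f"Session {obs['session']}: {obs['note']}")
```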

Processing the paper (Collating)

That pile of paper is not going to process itself! Your next job as a group is to work through the task of transposing your observations to post it notes. For now, just stick them to the wall in any old way that suits you. If you're the organising type, you could group them by screen or testing scenario. The positioning will all change further down the process, so at this stage it's important to just keep it simple. For issues that occur repeatedly across sessions, just write them down on their own post its - doubles will be useful to see further down the track. In addition to holding debrief meetings, you also need to round up everything that was used to capture the testing session/s. And I mean EVERYTHING.

Handwritten notes, typed notes, video footage and any audio recordings need to be reviewed just in case something was missed. Any handwritten notes should be typed up to assist you with the completion of the report. Don't feel that you have to wait until the testing is completed before you start typing up your notes, because you will find they pile up very quickly and if your handwriting is anything like mine… well, let's just say my short term memory is often required to pick up the slack, and even that has its limits. Type them up in between sessions where possible and save each session as its own document. I'll often use the testing questions or scenario-based tasks to structure my typed notes, and I find that makes it really easy to refer back to. Now that you've processed all the observations, it's time to start sorting them to surface behavioural patterns and make sense of it all.

Spotting patterns and themes through affinity diagramming

Affinity diagramming is a fantastic tool for making sense of user testing observations. In fact, it's just about my favourite way to make sense of any large mass of information. It's an engaging and visual process that grows and evolves like a living creature taking on a life of its own. It also builds on the work you've just done, which is a real plus! By now, testing is over and all of your observations should be stuck to a wall somewhere. Get everyone together again as a group, step back and take it all in. Just let it sit with you for a moment before you dive in. Just let it breathe. Have you done that? Ok, now, as individuals working at the same time, start by grouping things that you think belong together. It's important to just focus on the content of the labels and try to ignore the colour coded tagging at this stage, so if session one was blue post its, don't group all the blue ones together just because they're all blue! If you get stuck, try grouping by topic, or create two groups eg issues and wins and then chunk the information up from there.

You will find that the groups will change several times over the course of the process and that's ok, because that's what they need to do. While you do this, everyone else will be doing the same thing - grouping things that make sense to them. Trust me, it's nowhere near as chaotic as it sounds! You may start working as individuals, but it won't be long before curiosity kicks in and the room is buzzing with naturally occurring conversation. Make sure you take a step back regularly and observe what everyone else is doing, and don't be afraid to ask questions and move other people's post its around - no one owns it! No matter how silly something may seem, just put it there because it can be moved again. Have a look at where your tagged observations have ended up. Are there clusters of colour? Or is it more spread out? What that means will depend largely on how you decided to tag your findings. For example, if you assigned each testing session its own colour and you have groups with lots of different colours in them, you'll know that the same issue was experienced by multiple people. Next, start looking at each group and see if you can break it down into smaller groups, and at the same time consider the overall picture for bigger groups eg can the wall be split into, say, three high level groups? Remember, you can still change your groups at any time.

Thinning the herd (Merging)

Once you and your team are happy with the groups, it's time to start condensing the size of this beast. Look for doubled-up findings and stack those post its on top of each other to cut the groups down - just make sure you can still see how many there were. The point of merging is to condense without losing anything, so don't remove something just because it only happened once. That one issue could be incredibly serious. Continue to evaluate and discuss as a group until you are happy. By now, clear and distinct groups of observations should have emerged and, at a glance, you should be able to identify the key findings from your study.

A catastrophe or a cosmetic flaw? (Scoring)

Scoring relates to how serious the issues are and how bad the consequences of not fixing them would be. There are arguments for and against the use of scoring, and it's important to recognise that it is just one way to communicate your findings. I personally rarely use scoring systems. It's not really something I think about when I'm analysing the observations, and I rarely rank one problem or finding over another. Why? Because all data is good data and it all adds to the overall picture. I've always been a huge advocate for presenting the whole story and I will never diminish the significance of a finding by boosting another. That said, I do understand the perspective of those who place metrics around their findings. Other designers have told me they feel that it allows them to quantify the seriousness of each issue and help their client/designer/boss make decisions about what to do next. We've all got our own way of doing things, so I'll leave it up to you to choose whether or not you score the issues. If you decide to score your findings, there are a number of scoring systems you can use, and if I had to choose one, I quite like Jakob Nielsen's methodology for the simple way it takes multiple factors into consideration. Ultimately, you should choose the one that suits your working style best.

Let's say you did decide to score the issues. Start by writing down each key finding on its own post it and move to a clean wall/window. Leave your affinity diagram where it is. Divide the new wall in half: one side for wins eg findings that indicate things that tested well, and the other for issues. You don't need to score the wins, but you do need to acknowledge what went well, because knowing what you're doing well is just as important as knowing where you need to improve. As a group (wow, you must be getting sick of each other! Make sure you go out for air from time to time!) score the issues based on your chosen methodology. Once you have completed this entire process, you will have everything you need to write a kick ass report.
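For a rough idea of what the output of a scoring pass might look like, here's a minimal sketch using a 0-4 scale along the lines of Nielsen's severity ratings (0 = not a problem through 4 = catastrophe). The findings and scores below are made up for illustration; in practice the numbers come out of your group discussion.

```python
# A 0-4 severity scale along the lines of Nielsen's severity ratings.
SEVERITY_LABELS = {
    0: "not a problem",
    1: "cosmetic",
    2: "minor",
    3: "major",
    4: "catastrophe",
}

# Hypothetical key findings from the affinity diagram, scored as a group.
findings = [
    {"finding": "Checkout button ignored on mobile", "severity": 4},
    {"finding": "Help link label caused hesitation", "severity": 2},
    {"finding": "Search worked well for all participants", "severity": 0},  # a win
]

# Sort so the most serious issues lead the report.
for item in sorted(findings, key=lambda f: f["severity"], reverse=True):
    label = SEVERITY_LABELS[item["severity"]]
    print(f"[{item['severity']} - {label}] {item['finding']}")
```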

What could possibly go wrong? (and how to deal with it)

No process is perfect and there are a few potential dramas to be aware of:

People jumping into solution mode too early

In the middle of the debrief meeting, someone has an epiphany. Shouts of "We should move the help button!" or "We should make the yellow button smaller!" ring out and the meeting goes off the rails. I'm not going to point fingers and blame any particular role, because we've all done it, but it's important to recognise that's not why we're sitting here. The debrief meeting is about digesting and sharing what you and the other observers just saw. Observing and facilitating user testing is a privilege. It's a precious thing that deserves respect, and if you jump into solution mode too soon, you may miss something. Keep the conversation on track by appointing a team member to facilitate the debrief meeting.

Storage problems

Handwritten notes taken by multiple observers over several days of testing add up to an enormous pile of paper. Not only is it a ridiculous waste of paper, but the notes have to be securely stored for three months following the release of the report. It's not pretty. Typing them up can solve that issue, but it comes with its own set of storage-related hurdles. Just like the handwritten notes, they need to be stored securely. They don't belong on SharePoint or in the share drive or any other shared storage environment that can be accessed by people outside your observer group. User testing notes are confidential and are not light reading for anyone and everyone, no matter how much they complain. Store any typed notes in a limited-access storage solution that only the observers can reach, and if anyone who shouldn't be reading them asks, tell them that they are confidential and the integrity of the research must be preserved and respected.

Time issues

Before the storage dramas begin, you have to actually pick through the mountain of paper, not to mention the video footage and the audio, and you have to chase up that sneaky observer who disappeared when the clock struck 5. All of this takes up a lot of time. Another time-related issue comes in the form of too much time passing between testing sessions and debrief meetings. The best way to deal with both of these issues is to be super organised and hold multiple smaller debriefs in between sessions where possible. As a group, work out your time commitments before testing begins and have a clear plan in place for when you will meet. This will prevent everything piling up and overwhelming you at the end.

Disagreements over scoring

At the end of that long day/week we’re all tired and discussions around scoring the issues can get a little heated. One person’s showstopper may be another person’s mild issue. Many of the ranking systems use words as well as numbers to measure the level of severity and it’s easy to get caught up in the meaning of the words and ultimately get sidetracked from the task at hand. Be proactive and as a group set ground rules upfront for all discussions. Determine how long you’ll spend discussing an issue and what you will do in the event that agreement cannot be reached. People want to feel heard and they want to feel like their contributions are valued. Given that we are talking about an iterative process, sometimes it’s best just to write everything down to keep people happy and merge and cull the list in the next iteration. By then they’ve likely had time to reevaluate their own thinking.

And finally...

We all have our own ways of making sense of our user testing observations and there really is no right or wrong way to go about it. The one thing I would like to reiterate is the importance of collaboration and teamwork. You cannot do this alone, so please don’t try. If you’re a UX team of one, you probably already have a trusted person that you bounce ideas off. They would be a fantastic person to do this with. How do you approach this process? What sort of challenges have you faced? Let me know in the comments below.


The ultimate reading list for new user researchers

Having a library of user research books is invaluable. Whether you’re an old hand in the field of UX research or just dipping your toes in the water, being able to reference detailed information on methods, techniques and tools will make your life much easier.

There’s really no shortage of user research/UX reading lists online, so we wanted to do something a little different. We’ve broken our list up into sections to make finding the right book for a particular topic as easy as possible.

General user research guides

These books cover everything you need to know about a number of UX/user research topics. They’re great to have on your desk to refer back to – we certainly have them on the bookshelf here at Optimal Workshop.

Observing the User Experience: A Practitioner's Guide to User Research

Mike Kuniavsky


This book covers 13 UX research techniques in a reference format. There’s a lot of detail, making it a useful resource for people new to the field and those who just need more clarification around a certain topic. There’s also a lot of practical information that you’ll find applicable in the real world. For example, information about how to work around research budgets and tight time constraints.

Just Enough Research

Erika Hall


In Just Enough Research, author Erika Hall explains that user research is something everyone can and should do. She covers several research methods, as well as things like how to identify your biases and make use of your findings. Designers are also likely to find this one quite useful, as she clearly covers the relationship between research and design.

Research Methods in Human-Computer Interaction

Harry Hochheiser, Jonathan Lazar, Jinjuan Heidi Feng


Like Observing the User Experience, this is a dense guide – but it’s another essential one. Here, experts on human-computer interaction and usability explain different qualitative and quantitative research methods in an easily understandable format. There are also plenty of real examples to help frame your thinking around the usefulness of different research methods.

Information architecture

If you’re new to information architecture (IA), understanding why it’s such an important concept is a great place to start. There’s plenty of information online, but there are also several well-regarded books that make great starting points.

Information Architecture for the World Wide Web: Designing Large-Scale Web Sites

Peter Morville, Louis Rosenfeld


You’ll probably hear this book referred to as “the polar bear book”, just because the cover features a polar bear. But beyond featuring a nice illustration of a bear, this book clearly covers the process of creating large websites that are both easy to navigate and appealing to use. It’s a useful book for designers, information architects and user researchers.

How to Make Sense of Any Mess

Abby Covert


This is a great introduction to information architecture and serves as a nice counter to the polar bear book, being much shorter and more easily digestible. Author Abby Covert explains complex concepts in a way anyone can understand and also includes a set of lessons and exercises with each chapter.

User interviews

For those new to the task, the prospect of interviewing users is always daunting. That makes having a useful guide that much more of a necessity!

Interviewing Users: How to Uncover Compelling Insights

Steve Portigal


While interviewing users may seem like something that doesn’t require a guide, an understanding of different interview techniques can go a long way. This book is essentially a practical guide to the art of interviewing users. Author Steve Portigal covers how to build rapport with your participants and the art of immersing yourself in how other people see the world – both key skills for interviewers!

Usability testing

Web usability is basically the ease of use of a website. It’s a broad topic, but there are a number of useful books that explain why it’s important and outline some of the key principles.

Don't Make Me Think: A Common Sense Approach to Web Usability

Steve Krug


Don’t Make Me Think is the first introduction to the world of UX and usability for many people, and for good reason – it’s a concise introduction to the topics and is easy to digest. Steve Krug explains some of the key principles of intuitive navigation and information architecture clearly and without overly technical language. In the latest edition, he’s updated the book to include mobile usability considerations.

As a testament to just how popular this book is, it was released in 2000 and has since had 2 editions and sold 400,000 copies.

Design

The design–research relationship is an important one, even if it’s often misunderstood. Thankfully, authors like Don Norman and Vijay Kumar are here to explain everything.

The Design of Everyday Things

Don Norman


This book, by cognitive scientist and usability engineer Don Norman, explains how design is the communication between an object and its user, and how to improve this communication as a way of improving the user experience. If nothing else, this book will force you to take another look at the design of everyday objects and assess whether or not they’re truly user-friendly.

101 Design Methods: A Structured Approach for Driving Innovation in Your Organization

Vijay Kumar


A guidebook for innovation in the context of product development, this book approaches the subject in a slightly different way to many other books on the same subject. The focus here is that the practice of creating new products is actually a science – not an art. Vijay Kumar outlines practical methods and useful tools that researchers and designers can use to drive innovation, making this book useful for anyone involved in product development.

See our list on Goodreads

We've put together a list of all of the above books on Goodreads, which you can access here.

Further reading

For experienced practitioners and newcomers alike, user research can often seem like a minefield to navigate. It can be tricky to figure out which method to use when, whether to bring a stakeholder into your usability test (you should), and how much you should pay participants. Take a look at some of the other articles on our blog if you'd like to learn more.


6 things to consider when setting up a research practice

With UX research so closely tied to product success, setting up a dedicated research practice is fast becoming important for many organizations. It’s not an easy process, especially for organizations that have had little to do with research, but the end goal is worth the effort.

But where exactly are you supposed to start? This article provides 6 key things to keep in mind when setting up a research practice, and should hopefully ensure you’ve considered all of the relevant factors.

1) Work out what your organization needs

The first and most simple step is to take stock of the current user research situation within the organization. How much research is currently being done? Which teams or individuals are talking to customers on an ongoing basis? Consider if there are any major pain points with the current way research is being carried out or bottlenecks in getting research insights to the people that need them. If research isn't being practiced, identify teams or individuals that don't currently have access to the resources they need, and consider ways to make insights available to the people that need them.

2) Consolidate your insights

UX research should be communicating with nearly every part of an organization, from design teams to customer support, engineering departments and C-level management. The insights that stem from user research are valuable everywhere. Of course, the opposite is also true: insights from support and sales are useful for understanding customers and how the current product is meeting people's needs.

When setting up a research practice, identify which teams you should align with, and then reach out. Sit down with these teams and explore how you can help each other. For your part, you’ll probably need to explain the what and why of user research within the context of your organization, and possibly even explain at a basic level some of the techniques you use and the data you can obtain.

Then, get in touch with other teams with the goal of learning from them. A good research practice needs a strong connection to other parts of the business with the express purpose of learning. For example, by working with your organization’s customer support team, you’ll have a direct line to some of the issues that customers deal with on a regular basis. A good working relationship here means they’ll likely feed these insights back to you, in order to help you frame your research projects.

By working with your sales team, you'll learn about the issues prospective customers are dealing with. You can follow up on this information with research, the results of which can be fed into the development of your organization's products.

It can also be fruitful to develop an insights repository, where researchers can store any useful insights and log research activities. This means that sales, customer support and other interested parties can access the results of your research whenever they need to.
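What that repository looks like depends on your tooling, but the underlying record can be very simple. Here's a minimal sketch (the fields, entries and team names are hypothetical) of one way to log an insight alongside its source and the teams it's relevant to, so other parts of the business can pull what they need:

```python
from dataclasses import dataclass, field
from datetime import date

# A minimal sketch of what an entry in an insights repository might hold.
@dataclass
class Insight:
    summary: str
    source: str                                      # e.g. "tree test", "sales call"
    teams: list[str] = field(default_factory=list)   # who this is relevant to
    logged: date = field(default_factory=date.today)

repository: list[Insight] = [
    Insight("Users expect invoices under 'Billing', not 'Account'",
            source="tree test", teams=["design", "support"]),
    Insight("Prospects keep asking about single sign-on",
            source="sales call", teams=["product"]),
]

def insights_for(team: str) -> list[Insight]:
    """Let other parts of the business pull the insights relevant to them."""
    return [i for i in repository if team in i.teams]

for insight in insights_for("support"):
    print(f"{insight.logged}: {insight.summary} (from {insight.source})")
```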

When your research practice is tightly integrated with other key areas of the business, the organization is likely to see innumerable benefits from the insights-to-product loop.

3) Figure out which tools you will use

By now you’ve hopefully got an idea of how your research practice will fit into the wider organization – now it’s time to look at the ways in which you’ll do your research. We’re talking, of course, about research methods and testing tools.

We won’t get into every different type of method here (there are plenty of other articles and guides for that), but we will touch on the importance of qualitative and quantitative methods. If you haven’t come across these terms before, here’s a quick breakdown:

  • Qualitative research – Focused on exploration. It’s about discovering things we cannot measure with numbers, and often involves speaking with users through observation or user interviews.
  • Quantitative research – Focused on measurement. It’s all about gathering data and then turning this data into usable statistics.

All user research methods are designed to deliver either qualitative or quantitative data, and as part of your research practice, you should ensure that you always try to gather both types. By using this approach, you’re able to generate a clearer overall picture of whatever it is you’re researching.

Next comes the software. A solid stack of user research testing tools will help you to put research methods into practice, whether for the purposes of card sorting, carrying out more effective user interviews or running a tree test.

There are myriad tools available now, and it can be difficult to separate the useful software from the chaff. Here’s a list of research and productivity tools that we recommend.

Tools for research

Here’s a collection of research tools that can help you gather qualitative and quantitative data, using a number of methods.

  • Treejack – Tree testing can show you where people get lost on your website, and help you take the guesswork out of information architecture decisions. Like OptimalSort, Treejack makes it easy to sort through information and pairs this with in-depth analysis features.
  • dScout – Imagine being able to get video snippets of your users as they answer questions about your product. That’s dScout. It’s a video research platform that collects in-context “moments” from a network of global participants, who answer your questions either by video or through photos.
  • Ethnio – Like dScout, this is another tool designed to capture information directly from your users. It works by showing an intercept pop-up to people who land on your website. Then, once they agree, it runs through some form of research.
  • OptimalSort – Card sorting allows you to get perspective on whatever it is you’re sorting and understand how people organize information. OptimalSort makes it easier and faster to sort through information, and you can access powerful analysis features.
  • Reframer – Taking notes during user interviews and usability tests can be quite time-consuming, especially when it comes to analyzing the data. Reframer gives individuals and teams a single tool to store all of their notes, along with a set of powerful analysis features to make sense of their data.
  • Chalkmark – First-click testing can show you what people click on first in a user interface when they’re asked to complete a task. This is useful, as when people get their first click correct, they’re much more likely to complete their task. Chalkmark makes the process of setting up and running a first-click test easy. What’s more, you’re given comprehensive analysis tools, including a click heatmap.

Tools for productivity

These tools aren’t necessarily designed for user research, but can provide vital links in the process.

  • Whimsical – A fantastic tool for user journeys, flow charts and any other sort of diagram. It also solves one of the biggest problems with online whiteboards – finicky object placement.
  • Descript – Easily transcribe your interview and usability test audio recordings into text.
  • Google Slides – When it inevitably comes time to present your research findings to stakeholders, use Google Slides to create readable, clear presentations.

4) Figure out how you’ll track findings over time

With some idea of the research methods and testing tools you’ll be using to collect data, now it’s time to think about how you’ll manage all of this information. A carefully ordered spreadsheet and folder system can work – but only to an extent. Dedicated software is a much better choice, especially given that you can scale these systems much more easily.

A dedicated home for your research data serves a few distinct purposes. There’s the obvious benefit of being able to access all of your findings whenever you need them, which means it’s much easier to create personas if the need arises. A dedicated home also means your findings will remain accessible and useful well into the future.

When it comes to software, Reframer stands as one of the better options for creating a detailed customer insights repository as you’re able to capture your sessions directly in the tool and then apply tags afterwards. You can then easily review all of your observations and findings using the filtering options. Oh, and there’s obviously the analysis side of the tool as well.

If you’re looking for a way to store high-level findings – perhaps if you’re intending to share this data with other parts of your organization – then a tool like Confluence or Notion is a good option. These tools are basically wikis, and include capable search and navigation options too.

5) Where will you get participants from?

A pool of participants you can draw from for your user research is another important part of setting up a research practice. Whenever you need to run a study, you’ll have real people you can call on to test, ask questions and get feedback from.

This is where you'll need to partner with other teams, likely sales and customer support. They'll have direct access to your customers, so make sure to build a strong relationship with these teams. If you haven't made introductions yet, it can be helpful to put together a one-page sheet of information explaining what UX research is and the benefits of working with your team.

You may also want to consider getting in some external help. Participant recruitment services are a great way to offload the heavy lifting of sourcing quality participants – often one of the hardest parts of the research process.

6) Work out how you'll communicate your research

Perhaps one of the most important parts of being a user researcher is taking the findings you uncover and communicating them back to the wider organization. By feeding insights back to product, sales and customer support teams, you’ll form an effective link between your organization’s customers and your organization. The benefits here are obvious. Product teams can build products that actually address customer pain points, and sales and support teams will better understand the needs and expectations of customers.

Of course, it isn’t easy to communicate findings. Here are a few tips:

  • Document your research activities: With a clear record of your research, you’ll find it easier to pull out relevant findings and communicate these to the right teams.
  • Decide who needs what: You’ll probably find that certain roles (like managers) will be best served by a high-level overview of your research activities (think a one-page summary), while engineers, developers and designers will want more detailed research findings.


Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.