July 30, 2015

"So, what do we get for our money?" Quantifying the ROI of UX

Optimal Workshop
"Dear Optimal Workshop
How do I quantify the ROI [return on investment] of investing in user experience?"
— Brian

Dear Brian,

I'm going to answer your question with a resounding 'It depends'. I believe we all differ in what we're willing to invest, and what we expect to receive in return. So to start with, and if you haven’t already, it's worth grabbing your stationery tools of choice and brainstorming your way to a definition of ROI that works for you, or for the people you work for.

I personally define investment in UX as time given, money spent, and people utilized. And I define return on UX as time saved, money made, and people engaged. Oh, would you look at that — they’re the same! All three (time, money, and humans) exist on both sides of the ROI fence and are intrinsically linked. You can’t engage people if you don’t first devote time and money to utilizing your people in the best possible way! Does that make sense?

That’s just my definition — you might have a completely different way of counting those beans, and the organizations you work for may think differently again.

I'll share my thoughts on the things that are worth quantifying (that you could start measuring today if you were so inclined) and a few tips for doing so. And I'll point you towards useful resources to help with the nitty-gritty, dollars-and-cents calculations.

5 things worth quantifying for digital design projects

Here are five things I think are worthy of your attention when it comes to measuring the ROI of user experience, but there are plenty of others. And different projects will most likely call for different things.

(A quick note: There's a lot more to UX than just digital experiences, but because I don't know your specifics Brian, the ideas I share below apply mainly to digital products.)

1. What’s happening in the call centre?

A surefire way to get a feel for the lay of the land is to look at customer support — and if measuring support metrics isn't on your UX table yet, it's time to invite it to dinner. These general metrics are an important part of an ongoing, iterative design process, but getting specific about the best data to gather for each individual project will give you the most usable results.

Improving an application process on your website? Get hard numbers from the previous month on how many customers are asking for help with it, go away and do your magic, get the same numbers a month after launch, and you've got yourself compelling ROI data.

Are your support teams bombarded with calls and emails? Has the volume of requests increased or decreased since you released that new tool, product, or feature? Are there patterns within those requests — multiple people with the same issues? These are just a few questions you can get answers to.
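
If you want to make that before-and-after comparison concrete, here's a minimal sketch in Python. It assumes a hypothetical CSV export of support tickets with 'date' and 'topic' columns; your help desk tool's actual export fields will differ.

```python
import csv
from datetime import datetime

def tickets_per_month(path, topic, month):
    """Count tickets about `topic` opened in `month` (format 'YYYY-MM')."""
    count = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            opened = datetime.fromisoformat(row["date"])  # assumes ISO dates
            if opened.strftime("%Y-%m") == month and row["topic"] == topic:
                count += 1
    return count

# Hypothetical example: one month of data before the redesign, one month after.
before = tickets_per_month("tickets.csv", "application-form", "2015-05")
after = tickets_per_month("tickets.csv", "application-form", "2015-07")
if before:
    print(f"Change in application-form requests: {(after - before) / before:+.0%}")
```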

You'll find a few great resources on this topic online, including this piece by Marko Nemberg that gives you an idea of the effects a big change in your product can have on support activity.

2. Navigation vs. Search

This is a good one: check your analytics to see if your users are searching or navigating. I’ve heard plenty of users say to me upfront that they'll always just type in the search bar and that they’d never ever navigate. Funny thing is, ten minutes later I see the same users naturally navigating their way to those gorgeous red patent leather pumps. Why?

Because as Zoltán Gócza explains in UX Myth #16, people do tend to scan for trigger words to help them navigate, and resort to problem solving behaviour (like searching) when they can’t find what they need. Cue frustration, and the potential for a pretty poor user experience that might just send customers running for the hills — or to your competitors. This research is worth exploring in more depth, so check out this article by Jared Spool, and this one by Jakob Nielsen (you know you can't go wrong with those two).
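
One rough way to put a number on this is to classify each session in your analytics export by its first meaningful action. Here's a small sketch in Python; the 'search' and 'nav_click' event labels and the session structure are illustrative, not from any particular analytics package.

```python
from collections import Counter

# Each session is an ordered list of events; the labels are hypothetical.
sessions = [
    ["nav_click", "nav_click", "search"],  # navigated first
    ["search", "nav_click"],               # searched first
    ["nav_click"],
]

first_actions = Counter(events[0] for events in sessions if events)
total = sum(first_actions.values())
for action, n in first_actions.items():
    print(f"{action}: {n / total:.0%} of sessions started here")
```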

3. Are people actually completing tasks?

Task completion really is a fundamental UX metric, otherwise why are we sitting here?! We definitely need to find out if people who visit our website are able to do what they came for.

For ideas on measuring this, I've found the Government Service Design Manual by GOV.UK to be an excellent resource regardless of where you are or where you work, and in relation to task completion they say:

"When users are unable to complete a digital transaction, they can be pushed to use other channels. This leads to low levels of digital take-up and customer satisfaction, and a higher cost per transaction."

That 'higher cost per transaction' is your kicker when it comes to ROI.

So, how does GOV.UK suggest we quantify task completion? They offer a simple (ish) recommendation to measure the completion rate of the end-to-end process by going into your analytics and dividing the number of completed processes by the number of started processes.

While you're at it, check the time it takes for people to complete tasks as well. It could help you to uncover a whole host of other issues that may have gone unnoticed. To quantify this, start looking into what Kim Oslob on UXMatters calls 'Effectiveness and Efficiency ratios'. Effectiveness ratios can be determined by looking at success, error, abandonment, and timeout rates. And Efficiency ratios can be determined by looking at average clicks per task, average time taken per task, and unique page views per task.
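
To see how those numbers fall out in practice, here's a minimal Python sketch that computes GOV.UK's completion rate alongside simple effectiveness and efficiency figures of the kind Oslob describes. The task records and their field names are purely illustrative.

```python
# Hypothetical task attempts exported from a usability test or analytics.
tasks = [
    {"outcome": "success", "seconds": 64, "clicks": 5},
    {"outcome": "abandoned", "seconds": 121, "clicks": 11},
    {"outcome": "success", "seconds": 80, "clicks": 6},
    {"outcome": "error", "seconds": 95, "clicks": 9},
]

started = len(tasks)
completed = [t for t in tasks if t["outcome"] == "success"]

# Effectiveness: share of started attempts ending in each outcome.
for outcome in ("success", "error", "abandoned"):
    rate = sum(t["outcome"] == outcome for t in tasks) / started
    print(f"{outcome}: {rate:.0%}")

# GOV.UK's formula: completed processes divided by started processes.
completion_rate = len(completed) / started

# Efficiency: average effort across completed attempts.
avg_time = sum(t["seconds"] for t in completed) / len(completed)
avg_clicks = sum(t["clicks"] for t in completed) / len(completed)
print(f"Completion rate {completion_rate:.0%}, "
      f"averaging {avg_time:.0f}s and {avg_clicks:.1f} clicks per completed task")
```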

You do need to be careful not to make assumptions based on this kind of data — it can't tell you what people were intending to do. If a task is taking people too long, it may be because it’s too complicated ... or because a few people made themselves coffee in between clicks. So supplement these metrics with other research that does tell you about intentions.

4. Where are they clicking first?

A good user experience is one that gets out of bed on the right side: first clicks matter.

A 2009 study showed that in task-based user tests, people who got their first click right were around twice as likely to complete the task successfully as those who got their first click wrong. This year, researchers at Optimal Workshop followed this up by analyzing data from millions of completed Treejack tasks, and found that people who got their first click right were around three times as likely to complete the task successfully.

That's data worth paying attention to.

So, how to measure? You can use software that records mouse clicks, or pull first-click data from the analytics on your page, but it's difficult to measure a visitor's intention without asking them outright, so I'd say task-based user tests are your best bet.

For in-person research sessions, make gathering first-click data a priority, and come up with a consistent way to measure it (a column on a spreadsheet, for example). For remote research, check out Chalkmark (a tool devoted exclusively to gathering quantitative, first-click data on screenshots and wireframes of your designs) and UserTesting.com (for videos of people completing tasks on your live website).
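
If you do keep that spreadsheet column, the analysis itself is quick. A sketch, assuming a hypothetical first_clicks.csv with one row per attempt and 'yes'/'no' values in first_click_right and task_success columns (names are mine, not from any tool):

```python
import csv

def success_rate(rows, first_click_right):
    """Task success rate among attempts with the given first-click result."""
    subset = [r for r in rows if r["first_click_right"] == first_click_right]
    wins = sum(r["task_success"] == "yes" for r in subset)
    return wins / len(subset) if subset else 0.0

with open("first_clicks.csv", newline="") as f:
    rows = list(csv.DictReader(f))

right = success_rate(rows, "yes")
wrong = success_rate(rows, "no")
print(f"Success after a correct first click: {right:.0%}")
print(f"Success after a wrong first click:   {wrong:.0%}")
if wrong:
    print(f"A correct first click made success {right / wrong:.1f}x as likely")
```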

5. Resources to help you with the number crunching

Here's a great piece on uxmastery.com about calculating the ROI of UX.

Here's Jakob Nielsen in 1999 with a simple 'Assumptions for Productivity Calculation', and here's an overview of what's in the 4th edition of NN/G's Return on Investment for Usability report (worth the money for sure).

Here's a calculator from Write Limited on measuring the cost of unclear communication within organizations (which could quite easily be applied to UX).

And here's a unique take on what numbers to crunch from Harvard Business Review.

I hope you find this a helpful starting point, Brian, and please do have a think about what I said about defining ROI. I’m curious to know how everyone else defines and measures ROI — let me know!


Related articles

3 ways you can combine OptimalSort and Chalkmark in your design process

As UX professionals, we know the value of card sorting when building an IA or making sense of our content, and we know that first clicks and first impressions of our designs matter. Tools like OptimalSort and Chalkmark are two of our wonderful design partners in crime, but did you know that they also work really well with each other? They have a lot in common, and they complement each other through their different strengths and abilities. Here are 3 ways you can make the most of this wonderful team-up in your design process.

1. Test the viability of your concepts and find out which one your users prefer most

Imagine you’re at a point in your design process where you’ve done some research and you’ve fed all those juicy insights into your design process and have come up with a bunch of initial visual design concepts that you’d love to test.

You might approach this by following this three-step process:

  • Test the viability of your concepts in Chalkmark before investing in interaction design work
  • Iterate your design based on your findings in Step 1
  • Finish by running a preference test with a closed, image-based card sort in OptimalSort to find out which of your concepts your users most prefer

There are two ways you could run this approach: remotely or in person. The remote option is great for when you’re short on time and budget, or when your users are all over the world or otherwise challenging to reach quickly and cheaply. If you’re running it remotely, you would start by popping images of your concepts, at whatever state of fidelity they’ve reached, into Chalkmark, and coming up with some scenario-based tasks for your participants to complete against those flat designs. Chalkmark is super nifty in the way it gets people to just click on an image to indicate where they would start when completing a task. That image can be a rough sketch or a screenshot of a high-fidelity prototype or live product — it could be anything! Chalkmark studies are quick and painless for participants, and great for designers because the results will show if your design is setting your users up for success from the word go. Just choose the most common tasks a user would need to complete on your website or app and send it out.

Next, you would review your Chalkmark results and make any changes or iterations to your designs based on your findings. Choose a maximum of 3 designs to move forward with for the last part of this study. The point of this is to narrow your options down and figure out, through research, which design concept you should focus on. Create images of your chosen 3 designs and build a closed card sort in OptimalSort with image-based cards by selecting the checkbox for ‘Add card images’ in the tool (see below).


How to add card images
Turn your cards into image based cards in OptimalSort by selecting the ‘Add card images’ checkbox on the right hand side of the screen.


The reason you want a closed card sort is that it's how your participants will indicate their preference for or against each concept. When creating the study in OptimalSort, name your categories something along the lines of ‘Most preferred’, ‘Least preferred’ and ‘Neutral’. It's totally up to you what you call them — if you’re able to, I’d encourage you to have some fun with it and make your study as engaging as possible for your participants!

Naming your categories for preference testing
Naming your card categories for preference testing with an image based closed card sort study in OptimalSort

Limit the number of cards that can be sorted into each category to 1, and uncheck the box labelled ‘Randomize category order’ so that you know exactly how the categories appear to participants. It’s best if the negative one doesn’t appear first, because we’re mostly trying to figure out what people do prefer, and the only way to ensure that is to switch the randomization off. You could put the neutral option at the end or in the middle to balance it out — totally up to you.

It’s also really important that you include a post-study questionnaire to dig into why they made the choices they did. It’s one thing to know what people do and don’t prefer, but it’s also really important to capture the reasoning behind their thinking. It could be something as simple as “Why did you choose that particular option as your most preferred?”, and given how important this context is, I would set that question to ‘required’. You may still end up with not-so-helpful responses like ‘Because I like the colors’, but it’s still better than nothing — especially if your users are on the other side of the world or you’re being squeezed by some other constraint! Remember that studies like these contribute to the large amount of research that goes on throughout a project, and are not the only piece of research you’ll be running. You’re not pinning all your design’s hopes and dreams on this one study! You’re just trying to quickly find out what people prefer at this point in time, and as your process continues, your design will evolve and grow.

You might also ask the same context-gathering question for the least preferred option, and consider including an optional question that allows them to share any other thoughts they might have on the activity they just completed — you never know what you might uncover!

If you were running this in person, you could use it to form the basis for a moderated co-design session. You would start your session by running the Chalkmark study to gauge first impressions and find out where those first clicks are landing, while also having a conversation about what your participants are thinking and feeling as they complete those tasks with your concepts. Next, you could work with your participants to iterate and refine your concepts together. You could do it digitally, or you could just draw them out on paper — it doesn't have to be perfect! Lastly, you could complete your co-design session by running that closed card sort preference test as a moderated study using barcodes printed from OptimalSort (found under the ‘Cards’ tab during the build process), giving you the best of both worlds — conversations with your participants plus analysis made easy! The moderated approach will also allow you to dig deeper into the reasoning behind their preferences.

2. Test your IA through two different lenses: non visual and visual

Your information architecture (IA) is the skeleton of your website or app, and it can be really valuable to evaluate it from two different angles: non-visual and visual. The non-visual elements of an IA (language, content, categories, and labelling) provide a clear and clean starting point. There are no visual distractions, and getting that content right is rightly a high priority. The visual elements come along later, build upon that picture, provide context, and bring your design to life. It's a good idea to test your IA through both lenses throughout your design process to ensure that nothing gets lost or muddied as your design evolves and grows.

Let’s say you’ve already run an open card sort to find out how your users expect your content to be organised and you’ve created your draft IA. You may have also tested and iterated that IA in reverse through a tree test in Treejack and are now starting to sketch up some concepts for the beginnings of the interaction design stages of your work.

At this point in the process, you might run a closed card sort with OptimalSort on your growing IA to ensure that those top level category labels are aligning to user expectations while also running a Chalkmark study on your early visual designs to see how the results from both approaches compare.

When building your closed card sort study, you would set your predetermined categories to match your IA’s top level labels and would then have your participants sort the content that lies beneath into those groups. For your Chalkmark study, think about the most common tasks your users will need to complete using your website or app when it eventually gets released out into the world and base your testing tasks around those. Keep it simple and don’t stress if you think this may change in the future — just go with what you know today.

Once you’ve completed your studies, have a look at your results and ask yourself questions like: Are both your non-visual and visual IA lenses telling the same story? Is the extra context of visual elements supporting your IA, or is it distracting and/or unhelpful? Are people sorting your content into the same places they go looking for it during first-click testing? Are they on the same page as you when it’s just words on a page, but getting lost in the visual design and missing their first click? Has your Chalkmark study unearthed any issues with your IA? Have a look at the Results matrix and the Popular placements matrix in OptimalSort and see how they stack up against your clickmaps in Chalkmark.

Clickmaps in Chalkmark and closed card sorting results in OptimalSort — are these two saying the same thing?

3. Find out if your labels and their matching icons make sense to users

A great way to find out if your top-level labels and their matching icons are communicating coherently and consistently is to test them using both OptimalSort and Chalkmark. Icons aren’t the most helpful or useful things if they don’t make sense to your users — especially in cases where label names drop off and your website or app homepage relies solely on an image to communicate what content lives below each one (e.g., sticky menus, mobile sites, and more).

This approach could be useful when you’re at a point in your design process where you have already defined your IA and are now moving into bringing it to life through interaction design. To do this, you might start by running a closed card sort in OptimalSort as a final check to see if the top level labels that you intend to make icons for are making sense to users. When building the study in OptimalSort, do exactly what we talked about earlier in our non-visual vs visual lens study and set your predetermined categories in the tool to match your level 1 labels. Ask your participants to sort the content that lies beneath into those groups — it’s the next part that’s different for this approach.

Once you’ve reviewed your findings and are confident your labels are resonating with people, you can then develop their accompanying icons for concept testing. You might pop these icons into some wireframes or a prototype of your current design to provide context for your participants, or you might just test the icons on their own as they would appear on your future design (e.g., in a row, as a block, or something else!) but without any of the other page elements. It’s totally up to you, and depends entirely on what stage you’re at in your project and the thing you’re actually designing — there might be cases where you want to zero in on just the icons and maybe the website header, e.g., a sticky menu that sits above a long-scrolling, dynamic social feed. In an example taken from a study we recently ran on Airbnb and TripAdvisor’s mobile apps, you might use the screen below on the left but without the icon labels, or you might use the screen on the right that shows the smaller sticky-menu version that appears on scroll.


Screenshots taken from TripAdvisor’s mobile app in 2019 showing the different ways icons present.


The main thing here is to test the icons without their accompanying text labels to see if they align with user expectations. Choose the visual presentation approach that you think is best but lose the labels!

When crafting your Chalkmark tasks, it’s also a good idea to avoid using the label language in the task itself. Even though the labels aren’t appearing in the study, just using that language still has the potential to lead your participants. Treat it the same way you would a Treejack task — explain what participants have to do without giving the game away, e.g., instead of using the word ‘flights’, try ‘airfares’ or ‘plane tickets’.

Choose one scenario-based task question for each level 1 label that has an icon, and consider including post-study questions to gather further context from your participants — e.g., did they have any comments about the activity they completed? Was anything confusing or unclear, and if so, what and why?

Once you’ve completed your Chalkmark study and have analysed the results, have a look at how well your icons tested. Did your participants get it right? If not, where did they go instead? Are any of your icons really similar to each other and is it possible this similarity may have led people down the wrong path?

Alternatively, if you’ve already done extensive work on your IA and are feeling pretty confident in it, you might instead test your icons by running an image card sort in OptimalSort. You could use an open card sort and limit the cards per category to just one — effectively asking participants to name each card rather than a group of cards. An open card sort will allow you to learn more about the language they use while also uncovering what they associate with each one without leading them. You’d need to tweak the default instructions slightly to make this work but it’s super easy to do! You might try something like:

Part 1:

Step 1

  • Take a quick look at the images to the left.
  • We'd like you to tell us what you associate with each image.
  • There is no right or wrong answer.

Step 2

  • Drag an image from the left into this area to give it a name.

Part 2:

Step 3

  • Click the title to give the image a name that you feel best describes what you associate that image with.

Step 4

  • Repeat step 3 for all the images by dropping them in unused spaces.
  • When you're done, click "Finished" at the top right. Have fun!

Test out your new instructions in preview mode on a colleague from outside your design team, just to be sure they make sense!

So there are three ideas for ways you might use OptimalSort and Chalkmark together in your design process. Optimal Workshop’s suite of tools is flexible, scalable, and works really well together — the possibilities are huge!

Further reading

What do you prioritize when doing qualitative research?

Qualitative user research is about exploration, and exploration is about the journey, not only the destination (or outcome). It means gaining information and insights about your users through interviews, usability testing, contextual inquiry, observation, and diary entries, and using these methods not only to answer your direct queries, but to uncover and unravel your users’ ‘why’.

Qualitative research is how you really dig deep, get to know your users, and get inside their heads and their reasons, so you can create intuitive, engaging products that deliver the best user experience.

What is qualitative research? 🔎

The term ‘qualitative’ refers to things that cannot be measured numerically and qualitative user research is no exception. Qualitative research is primarily an exploratory research method that is typically done early in the design process and is useful for uncovering insights into people’s thoughts, opinions, and motivations. It allows us to gain a deeper understanding of problems and provides answers to questions we didn’t know we needed to ask. 

Qualitative research could be considered the ‘why’. Where quantitative user research uncovers the ‘how’ or the ‘what’ of what users want, qualitative user research will uncover why they make decisions (and possibly much more).

Priorities ⚡⚡⚡⚡

When undertaking user research, it is great to do a mix of quantitative and qualitative research, which will round out the numbers with human-driven insights.

Quantitative user research methods, such as card sorting or tree testing, will answer the ‘what’ of what your users want, and provide data to support it. These insights are number-driven and based on testing direct interaction with your product, which is super valuable to report to stakeholders. Hard data is difficult to argue with when you’re making the case for changes to how your information architecture (IA) is ordered, sorted, or designed. To find out more about the quantitative research options, take a read.

Qualitative user research, on the other hand, may uncover a deeper understanding of ‘why’ your users want the IA ordered, sorted, or designed a certain way. The devil is in the detail, after all, and great user insights are discoverable.

Priorities for your qualitative research need to be less about the numbers and more about discovering your users’ ‘why’. Observing, listening, questioning, and looking at the reasons for users’ decisions will provide valuable insights for product design and ultimately improve user experience.

Usability Testing - this research method is used to evaluate how easy and intuitive a product is to use. Observing, noting, and watching the participant complete tasks without interference or questions can uncover a lot of insights that data alone can’t give. This method can be done in a couple of ways: moderated or unmoderated. While unmoderated testing can be quicker and easier to arrange, the deep insights will come out of moderated testing.

Observational - with this qualitative research method, your insights will be uncovered by observing and noting what the participant is doing, paying particular attention to their non-verbal communication. Where do they demonstrate frustration, turn away from the task, or change their approach? Factual note taking, meaning there shouldn’t be any opinions attached to what is being observed, is important to keep the insights unbiased.

Contextual - paying attention to the context in which the interview or testing is done is important. Is it hot, loud, or cold, or is the screen of their laptop covered in post-its that make it difficult to see? Do they struggle with navigating using the laptop trackpad? All of this, noted in a factual manner without personal inference or opinion-based observations, can give a window into why the participant struggled or was frustrated at any point.

These research methods can be done as purely observational research (you don’t interview or converse with your participant), noting how they interact (with more interest in the process than the outcome of their product interaction). Or they can be coupled with an interview.

Interview - a series of questions asked around a particular task or product, with careful note taking around what the participant says as well as any observations. This method should allow a conversation to flow: while the interviewer should be prepared with a list of questions around their topic, they should remain flexible enough to dig deeper where there might be details or insights of interest. An interviewer who is comfortable getting to know their participants unpicks reservations, keeps the conversation flowing, and generates amazing insights.

With an interview, it can be useful to have a second person in the room to act as the note taker. This frees up the interviewer to engage with the participant and unpick the insights.

Using a great note-taking sidekick, like our Reframer, can take the pain out of recording all these juicy, deep insights: time-stamping, audio or video recordings, and notes all stored in one place, easily accessed by the team, reviewed, turned into reports, and kept for later.

Let’s consider 🤔

You’re creating a new app to support your gym and its website. You’re looking to generate personal training bookings, let members book classes, and deliver updates and personalized communication to your members. But before investing in final development, it needs to be tested. How do your users interact with it? Why would they want to? Does it behave in a way that improves the user experience? Or does it simply not deliver? And why?

First off, using quantitative research like Chalkmark would show how the interface is working. Where are users clicking, and where do they go after that? Is it simple to use? You now have direct data that supports your questions, or possibly suggests a change of design to support quicker task completion or further engagement.

While all of this is great data for the design, does it dig deep enough to really understand why your users are frustrated? Do they find what they need quickly, or get completely lost? Uncovering these insights and improving on them can make the most of your users’ experience.

When quantitative research is coupled with robust qualitative research that prioritizes an in-depth understanding of what your users need, ultimately the app can make the most of your users’ experience.

Using moderated usability testing for your gym app, observations can be made about how the participant interacts with the interface. Where do they struggle or get lost, and where do they complete a task quickly and simply? This type of research enhances the quantitative data and gives insight into where and why the app is or isn't performing.

Then interview participants about why they make decisions on the app, how they use it, and why they would use it. These focussed questions, with some free-flowing conversation, will round out your research, giving valuable insights that can be reviewed, analyzed, and reported to the product team and key stakeholders — focussing the outcome, and helping you design a product that delivers not just what users need, but an in-depth understanding of why.

Wrap Up 🥙

Quantitative and qualitative user research do work hand in hand, each offering one side of the same coin. Hard, number-driven data from quantitative user research will deliver the ‘what’ that needs to be addressed. With focussed qualitative research, it is possible to really get a handle on why your users interact with your product in a certain way, and how.

The Optimal Workshop platform has all the research methods and even the note-taking tools you need to get started with your user research now, not next week! See you soon.

Mixed methods research in 2021

User experience research is super important to developing a product that truly engages, compels and energises people. We all want a website that is easy to navigate, simple to follow and compels our users to finish their tasks. Or an app that supports and drives engagement.

We’ve talked a lot about the various types of research tools that help improve these outcomes. 

There is a rising research trend in 2021.

Mixed method research - what is more compelling than these quantitative user research tools? Combining them with awesome qualitative research! Asking the same questions in various ways can provide deeper insights into how our users think and operate, empowering you to develop products that truly talk to your users, answer their queries, or even address their frustrations.

It isn’t enough to simply ‘do research’, though; as with anything, you need to approach it with strategy, focus, and direction. This will funnel your time, money, and energy into areas that will generate the best results.

Mixed Method UX research is the research trend of 2021

With the likes of Facebook, Amazon, Etsy, eBay, Ford, and many more big organizations offering newly formed job openings for mixed methods researchers, it becomes very obvious where the research trend is heading.

It’s not only good to have, but now imperative, to gather data, dive deeper, and generate insights that provide more information on our users than ever before. And you don't need to be Facebook to reap the benefits. Mixed method research can be implemented across the board, and can be as narrow as finding out how your homepage is performing through to analysing the entirety of your product design in depth.

And with all of these massive organizations making the move to increase their data collection and research teams, why wouldn’t you?

The value in mixed method research is profound. Imagine understanding what, where, how, and why your customers would want to use your service, and catering directly to them. The more we understand our customers, the deeper the relationship and the more likely we are to keep them engaged.

Of course, diving deep into the reasons our users like (or don’t like) how our products operate can also drive your organization to target and operate at a higher level, gearing your energies towards attracting and keeping the right type of customer, providing the right level of service and aftercare, and potentially reducing overheads by not delivering beyond expected levels.

What is mixed method research?

Mixed methods research isn’t overly complicated, and doesn’t take years to master. It is simply a term used to refer to using a combination of quantitative and qualitative data. This may mean using a research tool such as card sorting alongside interviews with users.

Quantitative research is the tangible numbers and metrics that can be gathered through user research such as card sorting or tree testing.

Qualitative research is research around users’ behaviour and experiences. This can be through usability tests, interviews or surveys.

For instance, you may be asking ‘how should I order the products on my site?’. With card sorting, you can get the data insights that will inform how a user would like to see the products sorted. Coupled with interviews, you will get the why.

Understanding the thinking behind the order, and why one user likes to see gym shorts stored under ‘shorts’ while another would like to see them under ‘active wear’, gives you a deeper understanding of how and why users decide content should be sorted, helping you create a highly intuitive website.

Another great reason for mixed method research is backing up data insights for stakeholders. With a depth and breadth of qualitative and quantitative research informing decisions, it becomes clearer why changes may need to be made, or product designs need to be challenged.

How to do mixed method research

Take a look at our article for more examples of the uses of mixed method research. 

Simply put, mixed method research means coupling quantitative research, such as tree testing, card sorting, or first-click testing, with qualitative research such as surveys, interviews, or diary entries.

Say, for instance, the product manager has identified an issue with keeping users engaged on the homepage of your website. We would start by asking where they get stuck and when they leave.

This can be done using a first-click tool, such as Chalkmark, which will map where users head when they land on your homepage and beyond. 

This will give you the initial quantitative data. However, it may only give you some of the picture. Couple it with qualitative data, such as watching (and reporting on) body language, or conducting interviews with users directly after their experience, so you can understand why they found the process confusing or misleading.

A fuller picture means a better understanding.

The key is to identify what your question is and home in on it through both methods. Ultimately, we are answering your question from both sides of the coin.

Upcoming research trends to watch

Keeping an eye on the progression of the mixed method research trend will mean keeping an eye on these:

1. Integrated Surveys

Rather than thinking of user surveys as a one-time, in-person event, we’re seeing surveys implemented more and more often through social media, on websites, and through email. This means that data can be gathered frequently and across the board. This longitudinal data allows organizations to continuously analyse, interpret, and improve products without ever really stopping.

Rather than relying on users' memories of events and experiences, data can be gathered in the moment, at the time of purchase or interaction, increasing the reliability and quality of the data collected.

2. Return to the social research

Customer research is rooted in the focus group: a collection of participants in one space, which allows them to voice their opinions and reach insights collectively. This used to be an overwhelming task, with days or even weeks spent analysing unstructured forums and group discussions.

However, now with the advent of online research tools this can also be a way to round out mixed method research.

3. Co-creation

The ability to use your customers’ input to build better products has long been thought a way to increase innovative development. Until recently, it too has been cumbersome, and difficult to wrangle with more than a few participants. But there are a number of resources in development that will make co-creation the buzzword of the decade.

4. Owned Panels & Community

Beyond community engagement in the social sphere, there is a massive opportunity to utilise these engaged users in product development. Through a trusted forum, users are far more likely to actively and willingly participate in research, providing insights into the community that will drive stronger product outcomes.

What does this all mean for me?

So, there is a lot to keep in mind when conducting any effective user research, and there are a lot of very compelling reasons to do mixed method research, and to do it regularly.

To remain innovative and ahead of the game, it remains very important to be engaged with your users and their needs. Using quantitative and qualitative research to inform product decisions means you can operate with a fuller picture.

One of the biggest challenges with user research can be the coordination and participant recruitment. That’s where we come in.

We take the pain out of the process and streamline your research. Take a look at our qualitative research option, Reframer, for an insight into how we can help make your mixed method research easier and analyse your data efficiently, in a format that is easy to understand.

User research doesn’t need to take weeks or months. With our participant recruitment we can provide reliable and quality participants across the board that will provide data you can rely on.

Why not dig deeper with mixed method research today!
