July 30, 2015
5 min

"So, what do we get for our money?" Quantifying the ROI of UX

"Dear Optimal Workshop
How do I quantify the ROI [return on investment] of investing in user experience?"
— Brian

Dear Brian,

I'm going to answer your question with a resounding 'It depends'. I believe we all differ in what we're willing to invest, and what we expect to receive in return. So to start with, if you haven’t already, it's worth grabbing your stationery tools of choice and brainstorming your way to a definition of ROI that works for you, or for the people you work for.

I personally define investment in UX as time given, money spent, and people utilized. And I define return on UX as time saved, money made, and people engaged. Oh, would you look at that — they’re the same! All three (time, money, and humans) exist on both sides of the ROI fence and are intrinsically linked. You can’t engage people if you don’t first devote time and money to utilizing your people in the best possible way! Does that make sense?

That’s just my definition — you might have a completely different way of counting those beans, and the organizations you work for may think differently again.

I'll share my thoughts on the things that are worth quantifying (that you could start measuring today if you were so inclined) and a few tips for doing so. And I'll point you towards useful resources to help with the nitty-gritty, dollars-and-cents calculations.

5 things worth quantifying for digital design projects

Here are five things I think are worthy of your attention when it comes to measuring the ROI of user experience, but there are plenty of others. And different projects will most likely call for different things.

(A quick note: There's a lot more to UX than just digital experiences, but because I don't know your specifics Brian, the ideas I share below apply mainly to digital products.)

1. What’s happening in the call centre?

A surefire way to get a feel for the lay of the land is to look at customer support — and if measuring support metrics isn't on your UX table yet, it's time to invite it to dinner. These general metrics are an important part of an ongoing, iterative design process, but getting specific about the best data to gather for individual projects will give you the most usable data.

Improving an application process on your website? Get hard numbers from the previous month on how many customers are asking for help with it, go away and do your magic, get the same numbers a month after launch, and you've got yourself compelling ROI data.

Are your support teams bombarded with calls and emails? Has the volume of requests increased or decreased since you released that new tool, product, or feature? Are there patterns within those requests — multiple people with the same issues? These are just a few questions you can get answers to.
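The before-and-after comparison above can be sketched in a few lines. This is a minimal illustration with entirely hypothetical ticket counts, not real data:

```python
# Hypothetical support-ticket counts for the application process,
# one month before and one month after the redesign launched.
tickets_before = 420
tickets_after = 180

reduction = tickets_before - tickets_after
percent_change = reduction / tickets_before * 100

print(f"Support requests dropped by {reduction} ({percent_change:.0f}%)")
```

Pair the raw drop with the percentage: the absolute number speaks to support workload, while the percentage makes the improvement comparable across projects of different sizes.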

You'll find a few great resources on this topic online, including this piece by Marko Nemberg that gives you an idea of the effects a big change in your product can have on support activity.

2. Navigation vs. Search

This is a good one: check your analytics to see if your users are searching or navigating. I’ve heard plenty of users say to me upfront that they'll always just type in the search bar and that they’d never ever navigate. Funny thing is, ten minutes later I see the same users naturally navigating their way to those gorgeous red patent leather pumps. Why?

Because as Zoltán Gócza explains in UX Myth #16, people do tend to scan for trigger words to help them navigate, and resort to problem solving behaviour (like searching) when they can’t find what they need. Cue frustration, and the potential for a pretty poor user experience that might just send customers running for the hills — or to your competitors. This research is worth exploring in more depth, so check out this article by Jared Spool, and this one by Jakob Nielsen (you know you can't go wrong with those two).

3. Are people actually completing tasks?

Task completion really is a fundamental UX metric, otherwise why are we sitting here?! We definitely need to find out if people who visit our website are able to do what they came for.

For ideas on measuring this, I've found the Government Service Design Manual by GOV.UK to be an excellent resource regardless of where you are or where you work, and in relation to task completion they say:

"When users are unable to complete a digital transaction, they can be pushed to use other channels. This leads to low levels of digital take-up and customer satisfaction, and a higher cost per transaction."

That 'higher cost per transaction' is your kicker when it comes to ROI.

So, how does GOV.UK suggest we quantify task completion? They offer a simple (ish) recommendation to measure the completion rate of the end-to-end process by going into your analytics and dividing the number of completed processes by the number of started processes.
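That recommendation boils down to one division. A minimal sketch, with hypothetical analytics counts:

```python
# Hypothetical analytics counts for one end-to-end process.
started = 1250    # sessions that began the process
completed = 860   # sessions that reached the final confirmation step

completion_rate = completed / started
print(f"Completion rate: {completion_rate:.1%}")
```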

While you're at it, check the time it takes for people to complete tasks as well. It could help you to uncover a whole host of other issues that may have gone unnoticed. To quantify this, start looking into what Kim Oslob on UXMatters calls 'Effectiveness and Efficiency ratios'. Effectiveness ratios can be determined by looking at success, error, abandonment, and timeout rates. And Efficiency ratios can be determined by looking at average clicks per task, average time taken per task, and unique page views per task.
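To make those ratios concrete, here's a small sketch over hypothetical per-attempt records. The outcome labels and click counts are invented for illustration:

```python
# Hypothetical per-attempt records for a single task: outcome plus clicks used.
attempts = [
    {"outcome": "success", "clicks": 4},
    {"outcome": "success", "clicks": 6},
    {"outcome": "error", "clicks": 9},
    {"outcome": "abandoned", "clicks": 2},
    {"outcome": "success", "clicks": 5},
]

total = len(attempts)

# Effectiveness: how often people succeed, err, or give up.
success_rate = sum(a["outcome"] == "success" for a in attempts) / total
abandonment_rate = sum(a["outcome"] == "abandoned" for a in attempts) / total

# Efficiency: how much effort each attempt costs.
avg_clicks = sum(a["clicks"] for a in attempts) / total

print(f"Success: {success_rate:.0%}, abandoned: {abandonment_rate:.0%}, "
      f"avg clicks per task: {avg_clicks:.1f}")
```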

You do need to be careful not to make assumptions based on this kind of data — it can't tell you what people were intending to do. If a task is taking people too long, it may be because it’s too complicated ... or because a few people made themselves coffee in between clicks. So supplement these metrics with other research that does tell you about intentions.

4. Where are they clicking first?

A good user experience is one that gets out of bed on the right side: first clicks matter.

A 2009 study showed that in task-based user tests, people who got their first click right were around twice as likely to complete the task successfully as those who got it wrong. This year, researchers at Optimal Workshop followed this up by analyzing data from millions of completed Treejack tasks, and found that people who got their first click right were around three times as likely to complete the task successfully.

That's data worth paying attention to.

So, how to measure? You can use software that records mouse clicks, or pull first-click data from your page analytics, but it's difficult to measure a visitor's intention without asking them outright, so I'd say task-based user tests are your best bet.

For in-person research sessions, make gathering first-click data a priority, and come up with a consistent way to measure it (a column on a spreadsheet, for example). For remote research, check out Chalkmark (a tool devoted exclusively to gathering quantitative, first-click data on screenshots and wireframes of your designs) and UserTesting.com (for videos of people completing tasks on your live website).
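Tallying that spreadsheet column is straightforward. Here's a minimal sketch with an invented session log, comparing task success after a correct versus incorrect first click:

```python
# Hypothetical session log: (first click landed on the right link?,
#                            participant completed the task?)
sessions = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def success_rate(rows):
    # Share of sessions that ended in task success.
    return sum(done for _, done in rows) / len(rows)

right_first = [s for s in sessions if s[0]]
wrong_first = [s for s in sessions if not s[0]]

print(f"Task success after a correct first click: {success_rate(right_first):.0%}")
print(f"Task success after a wrong first click: {success_rate(wrong_first):.0%}")
```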

5. Resources to help you with the number crunching

Here's a great piece on uxmastery.com about calculating the ROI of UX.

Here's Jakob Nielsen in 1999 with a simple 'Assumptions for Productivity Calculation', and here's an overview of what's in the 4th edition of NN/G's Return on Investment for Usability report (worth the money for sure).

Here's a calculator from Write Limited on measuring the cost of unclear communication within organizations (which could quite easily be applied to UX).

And here's a unique take on what numbers to crunch from Harvard Business Review.

I hope you find this a helpful starting point, Brian, and please do have a think about what I said about defining ROI. I’m curious to know how everyone else defines and measures ROI — let me know!

Related articles

A short guide to personas

The word “persona” has many meanings. Sometimes the term refers to a part that an actor plays, other times it can mean a famous person, or even a character in a fictional play or book. But in the field of UX, persona has its own special meaning.

Before you get started with creating personas of your own, learn what they are and the process to create one. We'll even let you in on a great, little tip — how to use Chalkmark to refine and validate your personas.

What is a persona?

In the UX field, a persona is created using research and observations of your users, which are analyzed and then depicted in the form of a person’s profile. This individual is completely fictional, but is created based on the research you’ve conducted into your own users. It’s a form of segmentation, which Angus Jenkinson noted in his article “Beyond Segmentation” is a “better intellectual and practical tool for dealing with the interaction between the concept of the ‘individual’ and the concept of ‘group’”.

Typical user personas include very specific information in order to paint an in-depth and memorable picture for the people using them (e.g., designers, marketers etc).

The user personas you create don’t just represent a single individual either; they’ll actually represent a whole group. This allows you to condense your users into just a few segments, while giving you a much smaller set of groups to target.

There are many benefits of using personas. Here are just a few:

  • You can understand your clients better by seeing their pain points, what they want, and what they need
  • You can narrow your focus to a small number of groups that matter, rather than trying to design for everybody
  • They’re useful for other teams too, from product management to design and marketing
  • They can help you clarify your business or brand
  • They can help you create a language for your brand
  • You can market your products in a better, more targeted way
How do I create a persona?

There’s no right or wrong way to create a persona; the way you make them can depend on many things, such as your own internal resources, and the type of persona you want.

The average persona that you’ve probably seen before in textbooks, online or in templates isn’t always the best kind to use (picture the common and overused types like ‘Busy Barry’). In fact, the way user personas are constructed is a highly debated topic in the UX industry.

Creating good user personas

Good user personas are meaningful descriptions — not just a list of demographics and a fake name that allows researchers to simply make assumptions.

Indi Young, an independent consultant and co-founder of Adaptive Path, is an advocate of creating personas that aren’t just a list of demographics. In an article she penned on medium.com, Indi states: “To actually bring a description to life, to actually develop empathy, you need the deeper, underlying reasoning behind the preferences and statements-of-fact. You need the reasoning, reactions, and guiding principles.”

One issue that can stem from traditional types of personas is they can be based on stereotypes, or even reinforce them. Things like gender, age, ethnicity, culture, and location can all play a part in doing this.

In a study by Phil Turner and Susan Turner titled “Is stereotyping inevitable when designing with personas?” the authors noted: “Stereotyped user representations appear to constrain both design and use in many aspects of everyday life, and those who advocate universal design recognise that stereotyping is an obstacle to achieving design for all.”

So it makes sense to scrap the stereotypes and, in many instances, irrelevant demographic data. Instead, include information that accurately describes the persona’s struggles, goals, thoughts and feelings — all bits of meaningful data.

Creating user personas involves a lot of research and analysis. Here are a few tips to get you started:

1) Do your research

When you’re creating personas for UX, it’s absolutely crucial you start with research; after all, you can’t just pull this information out of thin air by making assumptions! Ensure you use a mixture of both qualitative and quantitative research here in order to cast your net wide and get results that are really valuable. A great research method that falls into the realms of both qualitative and quantitative is user interviews.

When you conduct your interviews, drill down into the types of behaviors, attitudes and goals your users have. It’s also important to mention that you can’t just examine what your users are saying to you — you need to tap into what they’re thinking and how they behave too.

2) Analyze and organize your data into segments

Once you’ve conducted your research, it’s time to analyze it. Look for trends in your results — can you see any similarities among your participants? Can you begin to group some of your participants together based on shared goals, attitudes and behaviors?

After you have sorted your participants into groups, you can create your segments. These segments will become your draft personas. Try to limit the number of personas you create. Having too many can defeat the purpose of creating them in the first place.
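The grouping step above can be sketched programmatically. This is a minimal illustration with invented participant data and made-up goal/experience tags, not a real segmentation method:

```python
from collections import defaultdict

# Hypothetical interview summaries: each participant tagged with
# their primary goal and a rough experience level.
participants = [
    {"name": "P1", "goal": "compare prices", "experience": "novice"},
    {"name": "P2", "goal": "compare prices", "experience": "novice"},
    {"name": "P3", "goal": "quick reorder", "experience": "expert"},
    {"name": "P4", "goal": "quick reorder", "experience": "expert"},
    {"name": "P5", "goal": "compare prices", "experience": "novice"},
]

# Group participants by the attributes they share; each group
# becomes a candidate draft persona.
segments = defaultdict(list)
for p in participants:
    segments[(p["goal"], p["experience"])].append(p["name"])

for (goal, experience), members in segments.items():
    print(f"Draft persona ({goal}, {experience}): {members}")
```

In practice the shared attributes come out of affinity mapping rather than neat tags, but the principle is the same: a small number of groups, each defined by shared goals, attitudes, and behaviors.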

Don’t forget the little things! Give your personas a memorable title or name and maybe even assign an image or photo — it all helps to create a “real” person that your team can focus on and remember.

3) Review and test

After you’ve finalized your personas, it’s time to review them. Take another look at the responses you received from your initial user interviews and see if they match the personas you created. It’s also important you spend some time reviewing your finalized personas to see if any of them are too similar or overlap with one another. If they do, you might want to jump back a step and segment your data again.

This is also a great time to test your personas. Conduct another set of user interviews and research to validate your personas.

User persona templates and examples

Creating your personas using data from your user interviews can be a fun task — but make sure you don’t go too crazy. Your personas need to be relevant, not overly complex and a true representation of your users.

A great way to ensure your personas don’t get too out of hand is to use a template. There are many of these available online in a number of different formats and of varying quality.

This example from UX Lady contains a number of helpful bits of information you should include, such as user experience goals, tech expertise and the types of devices used. The accompanying article also provides a fair bit of guidance on how to fill in your templates. While this template is good, skip the demographics portion and read Indi Young’s article and books for better quality persona creation.

Using Chalkmark to refine personas

Now it’s time to let you in on a little tip. Did you know Chalkmark can be used to refine and validate your personas?

One of the trickiest parts of creating personas is actually figuring out which ones are a true representation of your users — so this usually means lots of testing and refining to ensure you’re on the right track. Fortunately, Chalkmark makes the refinement and validation part pretty easy.

First, you need to have your personas finalized or at least drafted. Take the results from the persona software or template you filled in, then create a survey for each segment so you can see whether your participants’ perceptions of themselves match each of your personas.

Second, create your test. This is a pretty simple demo we made when we were testing our own personas a few years ago at Optimal Workshop. Keep in mind this was a while ago and not a true representation of our current personas — they’ve definitely changed over time! During this step, it’s also quite helpful to include some post-test questions to drill down into your participants’ profiles.

After that, send these tests out to your identified segments (e.g., if you had a retail clothing store, some of your segments might be women of a certain age, and men of a certain age. Each segment would receive its own test). Our test involved three segments: “the aware”, “the informed”, and “the experienced” — again, this has changed over time and you’ll find your personas will change too.

Finally, analyze the results. If you created separate tests for each segment, you will now have filtered data for each segment. This is the real meaty information you use to validate each persona. For example, our three persona tests all contained the questions: “What’s your experience with user research?” And “How much of your job description relates directly to user experience work?”

   Some of the questionnaire results for Persona #2

Above, you’ll see the results for Persona #2. This tells us that 34% of respondents identified that their job involves a lot of UX work (75-100%, in fact). In addition, 31% of this segment considered themselves “Confident” with remote user research, while a further 9% and 6% of this segment said they were “Experienced” and “Expert”.

   Persona #2’s results for Task 1

These results all aligned with the persona we associated with that segment: “the informed”.

When you’re running your own tests, you’ll analyze the data in a very similar way. If the results from each of your segments’ Chalkmark tests don’t match up with the personas you created, it’s likely you need to adjust your personas. However, if each segment’s results happen to match up with your personas (like our example above), consider them validated!
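That match-up check can be sketched as a simple comparison. All the numbers and the tolerance below are hypothetical, for illustration only:

```python
# Hypothetical questionnaire results for one segment (share of respondents
# picking each answer), compared against what the persona predicts.
persona_expectation = {"Confident": 0.30, "Experienced": 0.10, "Expert": 0.05}
segment_results = {"Confident": 0.31, "Experienced": 0.09, "Expert": 0.06}

# Treat the persona as validated if every answer is within a chosen tolerance.
TOLERANCE = 0.05
validated = all(
    abs(segment_results[answer] - expected) <= TOLERANCE
    for answer, expected in persona_expectation.items()
)

print("Persona validated!" if validated else "Re-segment and adjust the persona.")
```

The tolerance is a judgment call: a tight threshold sends you back to re-segment more often, while a loose one risks rubber-stamping a persona that doesn't really fit.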

For a bit more info on our very own Chalkmark persona test, check out this article.


What gear do I need for qualitative user testing?

Summary: The equipment and tools you use to run your user testing sessions can make your life a lot easier. Here’s a quick guide.

It’s that time again. You’ve done the initial scoping, development and internal testing, and now you need to take the prototype of your new design and get some qualitative data on how it works and what needs to be improved before release. It’s time for the user testing to begin.

But the prospect of user testing raises an important question, and it’s one that many new user researchers often deliberate over: What gear or equipment should I take with me? Well, never fear. We’re going to break down everything you need to consider in terms of equipment, from video recording through to qualitative note-taking.

Recording: Audio, screens and video

The ability to easily record usability tests and user interviews means that even if you miss something important during a session, you can go back later and see what you’ve missed. There are 3 types of recording to keep in mind when it comes to user research: audio, video and screen recording. Below, we’ve put together a list of how you can capture each. You shouldn’t have to buy any expensive gear – free alternatives and software you can run on your phone and laptop should suffice.

  • Audio – Forget dedicated sound recorders; recording apps for smartphones (iOS and Android) allow you to record user interviews and usability tests with ease and upload the recordings to Google Drive or your computer. Good options include Sony’s recording app for Android and the built-in Apple recording app on iOS.
  • Transcription – Once you’ve created a recording, you’ll no doubt want a text copy to work with. For this, you’ll need transcription software to take the audio and turn it into text. There are companies that will make transcriptions for you, but software like Transcribe means you can carry out the process yourself.
  • Screen recording – Very useful during remote usability tests, screen recording software can show you exactly how participants react to the tasks you set out for them, even if you’re not in the room. OBS Studio is a good option for both Mac and Windows users. You can also use Quicktime (free) if you’re running the test in person.
  • Video – Recording your participants as they make their way through the various tasks in a usability test can provide useful reference material at the end of your testing sessions. You can refer back to specific points in a video to capture any detail you may have missed, and you can share video with stakeholders to demonstrate a point. If you don’t have access to a dedicated camera, consider mounting your smartphone on a tripod and recording that way.

Taking (and making use of) notes

Notetaking and qualitative user testing go hand in hand. For most user researchers, notetaking during a research session means busting out the Post-it notes and Sharpie pens, rushing to take down every observation and insight and then having to arduously transcribe these notes after the session – or spend hours in workshops trying to identify themes and patterns. This approach still has merit, as it’s often one of the best ways to get people who aren’t too familiar with user research involved in the process. With physical notes, you can gather people around a whiteboard and discuss what you’re looking at. What’s more, you can get them to engage with the material directly.

But there are digital alternatives. Qualitative notetaking software (like our very own Reframer) means you can bring a laptop into a user interview and take down observations directly in a secure environment. Even better, you can ask someone else to sit in as your notetaker, freeing you up to focus on running the session. Then, once you’ve run your tests, you can use the software for theme and pattern analysis, instead of having to schedule yet another full day workshop.

Scheduling your user tests

Ah, participant scheduling. Perhaps one of the most time-consuming parts of the user testing process. Thankfully, software can drastically reduce the logistical burden.

Here are some useful pieces of software:

Dedicated scheduling tool Calendly is one of the most popular options for participant scheduling in the UX community. It’s really hands-off, in that you basically let the tool know when you’re available, share the Calendly link with your prospective participants, and then they select a time (from your available slots) that works for them. There are also a host of other useful features that make it a popular option for researchers, like integrations and smart timezones.

If you’re already using the Optimal Workshop platform, you can use our survey tool Questions as a fairly robust scheduling tool. Simply set up a study and add in prospective time slots. You can then use the multi-choice field option to have people select when they’re available to attend. You can also capture other data and avoid the usual email back and forth.

Storing your findings

One of the biggest challenges for user researchers is effectively storing and cataloging all of the research data that they start to build up. Whether it’s video recordings of usability tests, audio recordings or even transcripts of user interviews, you need to ensure that your data is A) easily accessible after the fact, and B) stored securely to ensure you’re protecting your participants.

Here are some things to ask yourself when you store any piece of customer or user data:

  • Who will have access to this data?
  • How long do I plan to keep this data?
  • Will this data be anonymized?
  • If I’m keeping physical data on hand, where will it be stored?

Don’t make the mistake of thinking user data is ‘secure enough’, whether that’s on a company server that anyone can access, or even in an unlocked filing cabinet beneath your desk. Data privacy and security should always be at the top of your list of considerations. We won’t dive into best practices for participant data protection in this article, but instead, just mention that you need to be vigilant. Wherever you end up storing information, make sure you understand who has access.

Wrap up

Hopefully, this guide has given you an overview of some of the tools and software you can use before you start your next user test. We’ve also got a number of other interesting articles that you can read right here on our blog.
