August 8, 2022
4 min read

Usability Testing: what, how and why?

Knowing and understanding why and how your users use your product can be invaluable for getting to the nitty-gritty of usability: where they get stuck and where they fly through. Delving deep into motivation with probing questions, or skimming over a design looking for issues, can be equally informative.

Usability testing can be done in several ways, and each has its benefits. Put super simply, usability testing is literally testing how usable your product is for your users. If your product isn't usable, users won't stick around or complete their task, let alone come back for more.

What is usability testing? 🔦

Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.

These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.

We go into the three main ways of categorizing usability testing:

  1. Moderated and unmoderated
  2. Remote or in person
  3. Explorative, assessment or comparative

1. Moderated or unmoderated usability testing 👉👩🏻💻

Moderated usability testing is done in-person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, this can be expensive to organize and run.

Top tip: Use moderated testing to investigate the reasoning behind user behavior.

Unmoderated usability testing is done without direct supervision; participants are likely in their own homes and/or using their own devices to browse the website being tested, often at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions is difficult.

Top tip: Use unmoderated testing to test a very specific question or observe and measure behavior patterns.

2. Remote or in-person usability testing 🕵

Remote usability testing is done over the internet or by phone, allowing participants the time and space to work in their own environment and at their own pace. However, this doesn't give the researcher much in the way of contextual data, because you're unable to ask questions about intention or probe deeper when the participant makes a particular decision. Remote testing doesn't go as deep into a participant's reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.

Top tip: Use remote testing when a large group of participants is needed and the questions asked can be direct and unambiguous.

In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.

Top tip: In-person testing gives researchers more time and insight into motivation for decisions.

3. Explorative, assessment or comparative testing 🔍

These three usability testing methods generate different types of information:

Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.

Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.

Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.

Top tip: Choose based on what research is being done and how much qualitative or quantitative data you want.

Which method is right for you? 🧐

Whether the testing is done in-person, remote, moderated or unmoderated will depend on your purpose, what you want out of the testing, and to some extent your budget. 

Depending on what you are testing, each of the usability testing methods we explored here can offer an answer. If you are at the development stage of a product, it can be useful to conduct a usability test on the entire product, checking the intuitive usability of your website to ensure users can make the best decisions, quickly. Adding, changing or upgrading a product can also be the moment to check on a specific question around usability. Planning and understanding your objectives are key to selecting the right usability testing option for your project.

Let's take a look at a couple of examples of usability testing.

1. Lab-based, in-person moderated testing - mid-life website

Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a bricks-and-mortar store might. You've noticed a drop in sales in certain areas. How do you find out what is going wrong, or where users are getting lost? With an in-person, moderated usability test in a lab (or other controlled environment), you can set users tasks and watch (and record) what they do.

The researcher can literally be standing or sitting next to the participant throughout, recording contextual information such as how they interacted with the mouse, laptop or even the seat. Watching for cues as to the comfort of the participant and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants, but couldn’t find the ‘yoga’ section which was listed under gym rather than a clothing section.

This means you can look at how your stock is organized, or even investigate undertaking a card sort. In-person moderated testing provides robust, fully rounded feedback on users' behaviors, expectations and experiences, producing data that can be turned directly into actionable directives when redeveloping the website.

2. Remote, unmoderated assessment testing - app product development

You are looking at launching an app that parents can use to access information and updates from their school. It's still at the development stage, and at this point you want to know how easy the app is to use. After setting some very specific tasks for participants to complete, the app can be sent to them and they can be left to complete the tasks (or not), providing feedback and comments around its usability.

The next step may be to use first-click testing to see how and where the interface is clicked, and where participants may be spending time or becoming lost. Whilst the feedback and data gathered from this testing can be light, it will speak directly to the questions asked, and will provide data to back up (or possibly contradict) the assumptions that were made.

3. Moderated, in-person, explorative testing - new product development

You're right at the start of the development process. The idea is new and fresh and the basics are being considered. What better way to get an understanding of what your users truly want than an explorative study?

Open-ended questions with participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market but none as exciting as what your team has dreamt up. By putting it (and possibly the competitors) to participants they can give direct feedback on what they like, love and loathe.

They can also help brainstorm ideas, better ways to make the app work, or improvements to the interface. All of this can be done before any money is sunk into development.

Wrap up 🌯

Key objectives will dictate which usability testing method will deliver the answers to your questions.

Whether it's in-person, remote, moderated or comparative, with a bit of planning you can gather data around your users' very real experience of your product, and identify issues, successes and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.


Related articles


5 key areas for effective ResearchOps

Simply put, ResearchOps is about making sure your research operations are robust, thought through and managed. 

Having systems and processes around your UX research and your team keeps everyone (and everything) organized, making user research projects quicker to get started and more streamlined to run. Robust sharing, socializing, and knowledge storage means that everyone across the organization can understand the research insights and findings and put them to use - and, even better, find them when they need them.

Using the same tools across the team allows researchers to learn from each other and from previous research projects, and to compare apples with apples, with everyone included. It brings the team together across tools, research and results.

We go into more detail in our ebook ResearchOps Checklist about exactly what you can do to make sure your research team is running at its best. Let's take a quick look at 5 ways to ensure you have the grounding for a successful ResearchOps team.

1. Knowledge management 📚

What do you do with all of the insights and findings of a user research project? How do you store them, how do you manage the insights, and how do you share and socialize?

Having processes in place that manage this knowledge is important to the longevity of your research. From filing to sharing across platforms, it all needs to be standardized so everyone can search, find and share.

2. Guidelines and process templates 📝

Providing a framework for how to run research projects is important. Building on the knowledge base from previous research can improve research efficiencies and cut down on groundwork and administration, making research projects quicker and more streamlined to get underway.

3. Governance 🏛

User research is all about people, real people. It is incredibly important that any research be legal, safe, and ethical. Having effective governance covered is vital.

4. Tool stack 🛠

Every research team needs a ‘toolbox’ that they can use whenever they need to run card sorts, tree tests, usability tests, user interviews, and more. But which software and tools to use?

Making sure that the team is using the same tools also helps with future research projects, learning from previous projects, and ensuring that the information is owned and run by the organization (rather than whichever individuals prefer). Reduce logins and password shares, and improve security with organization-wide tools and platforms. 

5. Recruitment 👱🏻👩👩🏻👧🏽👧🏾

Key to great UX research is the ability to recruit quality participants - fast! Having strong processes in place for screening, scheduling, sampling, incentivizing, and managing participants needs to be top of the list when organizing the team.

Wrap Up 💥

None of these ResearchOps processes is independent of the others, and neither do they flow from one to the next. They are part of a total wrap-around for the research team, creating processes, systems and tools that are built to serve the team, allowing them to focus on the job of doing great research and generating insights and findings that develop the very best user experience.

After all, we are creating user experiences that keep our users engaged and coming back. Why not look at the team's user experience and make the most of that, freeing time and space to socialize and share the findings with the organization?


User research and agile squadification at Trade Me

Hi, I'm Martin. I work as a UX researcher at Trade Me, having left Optimal Experience (Optimal Workshop's sister company) last year. For those of you who don't know, Trade Me is New Zealand's largest online auction site, which also lists real estate to buy and rent, cars to buy, job listings, travel accommodation and quite a few other things besides. Over three quarters of the population are members, and about three quarters of the internet traffic for New Zealand sites goes to the sites we run.

Leaving a medium-sized consultancy and joining Trade Me has been a big change in many ways, but in others not so much, as I hadn’t expected to find myself operating in a small team of in-house consultants. The approach the team is taking is proving to be pretty effective, so I thought I’d share some of the details of the way we work with the readers of Optimal Workshop’s blog. Let me explain what I mean…

What agile at Trade Me looks like

Over the last year or so, Trade Me has moved all of its development teams over to Agile, following a model pioneered by Spotify. All of the software engineering parts of the business have been 'squadified'. These people produce the websites and apps, or provide and support the infrastructure that makes everything possible. Across squads, there are common job roles in 'Chapters' (like designers or testers), and because people are not easy to force into boxes (and why should they be?), there are interest groups called 'Guilds'. The squads are self-organizing, running their own processes and procedures to get to where they need to be. In practice, this means they use as many or as few of the Kanban, Scrum, and Rapid tools as they find useful. Over time, we've seen that squads tend to follow similar practices as they learn from each other.

How our UX team fits in

Our UX team of three sits outside the squads, but we work with them and with the product owners across the business. How does this work? It might seem counter-intuitive to have UX outside of the tightly-integrated, highly-focused squads, sometimes working with product owners on stuff that might have little to do with what's currently being developed in the squads. This comes down to the way Trade Me divides the UX responsibilities within the organization. Within each squad there is a designer. He or she is responsible for how that feature or app looks and, more importantly, how it acts — interaction design as well as visual design. Then what do we do, if we are the UX team?

We represent the voice of Trade Me’s users

By conducting research with Trade Me’s users we can validate the squads’ day-to-day decisions, and help frame decisions on future plans. We do this by wearing two hats. Wearing the pointy hats of structured, detailed researchers, we look into long-term trends: the detailed behaviours and goals of our different audiences. We’ve conducted lots of one-on-one interviews with hundreds of people, including top sellers, motor parts buyers, and job seekers, as well as running surveys, focus groups and user testing sessions of future-looking prototypes. For example, we recently spent time with a number of buyers and sellers, seeking to understand their motivations and getting under their skin to find out how they perceive Trade Me.

This kind of research enables Trade Me to anticipate and respond to changes in user perception and satisfaction. Swapping hats to an agile beanie (and stretching the metaphor to breaking point), we react to the medium-term, short-term and very short-term needs of the squads, testing their ideas, near-finished work and finished work with users, as well as sometimes simply answering questions and providing opinion based upon our research. Sometimes this means we can be testing something in the afternoon having only heard we were needed that morning. This might sound impossible to accommodate, but the pace of change at Trade Me is such that stuff is getting deployed pretty much every day, much of which affects our users directly. It's our job to ensure that we support our colleagues to do the very best we can for our users.

How our ‘drop everything’ approach works in practice

[Screenshot: the Trade Me iPhone app]

We recently conducted five or six rounds (no one can quite remember, we did it so quickly) of testing of our new iPhone application (pictured above) — sometimes testing more than one version at a time. The development team would receive our feedback face-to-face, make changes and we’d be testing the next version of the app the same or the next day. It’s only by doing this that we can ensure that Trade Me members will see positive changes happening daily rather than monthly.

How we prioritize what needs to get done

To help us try to decide what we should be doing at any one time we have some simple rules to prioritise:

  • Core product over other business elements
  • Finish something over start something new
  • Committed work over non-committed work
  • Strategic priorities over non-strategic priorities
  • Responsive support over less time-critical work
  • Where our input is crucial over where our input is a bonus

Applying these rules to any situation makes the decision whether to jump in and help pretty easy. At any one time, each of us in the UX team will have one or more long-term projects, some medium-term projects, and either some short-term projects or the capacity for some short-term projects (usually achieved by putting aside a long-term project for a moment).

We manage our time and projects on Trello, where we can see at a glance what's happening this week and next, and what we've caught sniff of in the wind that might be coming up, or definitely is coming up. On the whole, both we and the squads favour fast-response, bulleted-list email 'reports' for any short-term requests for user testing. We get a report out within four hours of testing (usually well within that). After all, the squads are working in short sprints, and our involvement is often at the sharp end where delays are not welcome. Most people aren't going to read past the management summary anyway, so why not just write that, unless you have to?

How we share our knowledge with the organization

Even though we mainly keep our reporting brief, we want the knowledge we’ve gained from working with each squad or on each product to be available to everyone. So we maintain a wiki that contains summaries of what we did for each piece of work, why we did it and what we found. Detailed reports, if there are any, are attached. We also send all reports out to staff who’ve subscribed to the UX interest email group.

Finally, we send out a monthly email, which looks across a bunch of research we've conducted, both short and long-term, and draws conclusions from which our colleagues can learn. All of these latter activities contribute to one of our key objectives: making Trade Me an even more user-centred organization than it is. I've been with Trade Me for about six months and we're constantly refining our UX practices, but so far it seems to be working very well. Right, I'd better go – I've just been told I'm user testing something pretty big tomorrow and I need to write a test script!


Using User Engagement Metrics to Improve Your Website's User Experience

Are your users engaged in your website? The success of your website will largely depend on your answer. After all, engaged users are valuable users; they keep coming back and will recommend your site to colleagues, friends, and family. So, if you’re not sure if your users are engaged or not, consider looking into your user engagement metrics.

User engagement can be measured using a number of key metrics provided by website analytics platforms. Metrics such as bounce rate, time on page, and click-through rate all provide clues to user engagement and therefore overall website user experience.

This article will help you understand user engagement and why it’s important to measure. We’ll also discuss how to apply user engagement insights to improve website success. Combining a little bit of data with some user research is a powerful thing, so let’s get into it.

Understanding User Engagement Metrics 📐

User engagement metrics provide valuable insight for both new and existing websites. They should be checked regularly as a sort of ‘pulse check’ for website user experience and performance. So, what metrics should you be looking at? Website metrics can be overwhelming; there are hundreds if not thousands to analyze, so let’s focus on three:

Bounce rate


Measures the percentage of users that visit just one page on your site before leaving. If your bounce rate is high it suggests that users aren’t finding the content relevant, engaging, or useful. It points to a poor initial reaction to your site and means that users are arriving, making a judgment about your design or content, and then leaving.

Time on page


Calculated by the time difference between the point when a person lands on the page and when they move on to the next one. It indicates how engaging or relevant individual pages on your website are. Low time on page figures suggest that users aren’t getting what they need from a certain page, either in terms of the content, the aesthetics, or both.

Click-through rate


Click-through rate compares the number of times someone clicks on your content, to the number of impressions you get (how many times an internal link or ad was viewed). The higher the rate, the better the engagement and performance of that element. User experience design can influence click-through rates through copywriting, button contrasts, heading structure, navigation, etc.

Conversion rate


Conversion rates are perhaps the pinnacle of user engagement metrics. Conversion rate is the percentage of users that perform specific tasks you define. They are therefore dictated by your goals, which could include form submissions, transactions, etc. If your website has high conversion rates, you can be fairly confident that your website is matching your users’ needs, requirements, and expectations.
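
To make the arithmetic behind these metrics concrete, here's a minimal sketch in Python of how each could be computed from raw counts. The function names and figures are invented for illustration; analytics platforms calculate these for you, each with their own nuances (session timeouts, how the last page of a session is timed, and so on).

```python
# Minimal sketch: the arithmetic behind the four engagement metrics above.
# All function names and counts are invented example figures, not real data.

def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Percentage of sessions that viewed exactly one page before leaving."""
    return 100 * single_page_sessions / total_sessions

def avg_time_on_page(entry_exit_pairs: list[tuple[float, float]]) -> float:
    """Mean of (exit - entry) per page view, in seconds."""
    durations = [exit_t - entry_t for entry_t, exit_t in entry_exit_pairs]
    return sum(durations) / len(durations)

def click_through_rate(clicks: int, impressions: int) -> float:
    """Clicks as a percentage of how often the link or ad was viewed."""
    return 100 * clicks / impressions

def conversion_rate(conversions: int, total_sessions: int) -> float:
    """Percentage of sessions that completed a goal you define."""
    return 100 * conversions / total_sessions

print(f"Bounce rate: {bounce_rate(420, 1000):.1f}%")                     # 42.0%
print(f"Avg time on page: {avg_time_on_page([(0, 30), (0, 90)]):.0f}s")  # 60s
print(f"Click-through rate: {click_through_rate(37, 1850):.1f}%")        # 2.0%
print(f"Conversion rate: {conversion_rate(25, 1000):.1f}%")              # 2.5%
```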

But how do these metrics help? Well, they don’t give you an answer directly. The metrics point to potential issues with website user experience. They guide further research and subsequent updates that lead to website improvement. In the next section, we’ll discuss how these and others can support better website user experiences.

Identifying Areas for Improvement 💡

So, you’ve looked at your website’s user engagement metrics and discovered some good, and some bad. The good news is, there’s value in discovering both! The catch? You just need to find it. Remember, the metrics on their own don’t give you answers; they provide you direction.

The ‘clues’ that user engagement metrics provide are the starting point for further research. Remember, we want to make data-driven decisions. We want to avoid making assumptions and jumping to conclusions about why our website is reporting certain metrics. Fortunately, there are a bunch of different ways to do this.

User research data can be gathered using both qualitative and quantitative research techniques. Insights into user behavior and needs can reveal why your website might be performing in certain ways.

Qualitative research techniques

  • Usability test – Test a product with people by observing them as they attempt to complete various tasks.
  • User interview – Sit down with a user to learn more about their background, motivations and pain points.
  • Contextual inquiry – Learn more about your users in their own environment by asking them questions before moving onto an observation activity.
  • Focus group – Gather 6 to 10 people for a forum-like session to get feedback on a product.

Quantitative research techniques

  • Card sorts – Find out how people categorize and sort information on your website.
  • First-click tests – See where people click first when tasked with completing an action.
  • A/B tests – Compare 2 versions of a design in order to work out which is more effective (see the sketch after this list).
  • Clickstream analysis – Analyze aggregate data about website visits.
  • Tree testing – Test your site structure using text-only categorization and labels.
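
To illustrate the quantitative end of this list, the sketch below shows one common way to check whether an A/B test result is more than noise: a two-proportion z-test on the conversion counts of two design variants. The counts are invented for the example, and dedicated A/B testing tools handle this analysis (and its pitfalls, such as peeking at results early) for you.

```python
# Minimal sketch: a two-proportion z-test for an A/B test, using only the
# Python standard library. All counts are invented example figures.
from math import erf, sqrt

def ab_test(conversions_a: int, n_a: int, conversions_b: int, n_b: int):
    """Return (z statistic, two-tailed p-value) for variant B vs variant A."""
    rate_a, rate_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)   # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (rate_b - rate_a) / se
    # Standard normal CDF via erf, then a two-tailed p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = ab_test(conversions_a=120, n_a=2400, conversions_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference
```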

The type of research depends on what question you want to answer. Being specific about your question will help you identify which research technique(s) to deploy, and will ultimately determine the quality of your answer. If you're serious about website improvement, identify problem areas with user engagement metrics, then investigate how to fix them with user research.

Optimizing Content and Design

If you have conducted user research and found weak areas on your website, there are many things to consider. Three good places to start are navigation, content, and website layout. Combined, these have a huge impact on user experience and can be leveraged to address disappointing engagement metrics.

Navigation


Navigation is a crucial aspect of creating a good user experience since it fundamentally connects pages and content which allows users to find what they need. Navigation should be simple and easy to follow, with important information/actions at the top of menus. Observing the results of card sorting, tree testing, and user testing can be particularly useful in website optimization efforts. You may find that search bars, breadcrumb trails, and internal links can also help overcome navigation issues.

Content


Are users seeing compelling or relevant content when they arrive on your site? Is your content organized in a way that encourages further exploration? Card sorting and content audits are useful in answering these questions and can help provide you with the insights required to optimize your content. You should identify what content might be redundant, out of date, or repetitive, as well as any gaps that may need filling.

Layout


A well-designed layout can improve the overall usability of a website, making it easier for users to find what they're looking for, understand the content, and engage with it. Consider how consistent your heading structures are and be sure to use consistent styling throughout the site, such as similar font sizes and colors. Don’t be afraid to use white space; it’s great at breaking up sections and making content more readable.

An additional factor related to layout is mobile optimization. Mobile-first design is necessary for apps, but it should also factor into your website design. How responsive is your website? How easy is it to navigate on mobile? Is your font size appropriate? You might find that poor mobile experience is negatively impacting user engagement metrics.

Measuring Success 🔎

User experience design is an iterative, ongoing process, so it’s important to keep a record of your website’s user experience metrics at various points of development. Fortunately, website analytics platforms will provide you with historic user data and key metrics; but be sure to keep a separate record of what improvements you make along the way. This will help you pinpoint what changes impacted different metrics.

Define your goals and create a website optimization checklist that monitors key metrics on your site. For example, whenever you make an update, ensure bounce rates don't exceed a certain number during the days following, check that your conversion rates are performing as they should be, and check that your time on site hasn't dropped. Be sure to compare metrics between desktop and mobile too.
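
As a sketch of what such a checklist could look like in practice, here's a small routine that compares freshly pulled metrics against thresholds you define. The threshold values and metric names are invented for illustration; you would substitute your own goals and the actual figures from your analytics platform.

```python
# Minimal sketch: a post-update "pulse check" against self-defined thresholds.
# All threshold values and metric figures below are invented examples.

THRESHOLDS = {
    "bounce_rate_max": 55.0,      # percent
    "conversion_rate_min": 2.0,   # percent
    "time_on_site_min": 45.0,     # seconds
}

def pulse_check(metrics: dict[str, float]) -> list[str]:
    """Return a warning for each metric that breached its threshold."""
    warnings = []
    if metrics["bounce_rate"] > THRESHOLDS["bounce_rate_max"]:
        warnings.append(f"Bounce rate {metrics['bounce_rate']}% is too high")
    if metrics["conversion_rate"] < THRESHOLDS["conversion_rate_min"]:
        warnings.append(f"Conversion rate {metrics['conversion_rate']}% is too low")
    if metrics["time_on_site"] < THRESHOLDS["time_on_site_min"]:
        warnings.append(f"Time on site {metrics['time_on_site']}s has dropped")
    return warnings

# Example: figures pulled from your analytics platform after an update.
for warning in pulse_check(
    {"bounce_rate": 61.2, "conversion_rate": 2.4, "time_on_site": 38.0}
):
    print(warning)
```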

Users' needs and expectations change over time, so keep an eye on how new content is performing. For example, which new blog posts have attracted the most attention? Which pages or topics have had the most page views compared to the previous period? Tracking such changes can help inform what your users are currently engaged with, and will help guide your user experience improvements.

Conclusion 🤗

User engagement metrics allow you to put clear parameters around user experience. They allow you to measure where your website is performing well, and where your website might need improving. Their main strength is in how accessible they are; you can access key metrics on website analytics platforms in moments. However, user engagement metrics on their own may not reveal how and why certain website improvements should be made. In order to understand what’s going on, you often need to dig a little deeper.

Time on page, bounce rate, click-through rate, and conversion rates are all great starting points for understanding your next steps toward website improvement. Use them to define where further research may be needed. Not sure why your average pages per session is two? Try conducting first-click testing: where are users heading that seems to be a dead end? Is your bounce rate too high? Conduct a content audit to find out if your information is still relevant, or look into navigation roadblocks. Whatever the question, keep searching for the answer.

User engagement metrics will keep you on your toes, but that’s a good thing. They empower you to make ongoing website improvements and ensure that users are at the heart of your website design. 
