March 20, 2023

Using User Engagement Metrics to Improve Your Website's User Experience

Are your users engaged with your website? The success of your website will largely depend on your answer. After all, engaged users are valuable users; they keep coming back and will recommend your site to colleagues, friends, and family. So, if you’re not sure whether your users are engaged, consider looking into your user engagement metrics.

User engagement can be measured using a number of key metrics provided by website analytics platforms. Metrics such as bounce rate, time on page, and click-through rate all provide clues to user engagement and therefore overall website user experience.

This article will help you understand user engagement and why it’s important to measure. We’ll also discuss how to apply user engagement insights to improve website success. Combining a little bit of data with some user research is a powerful thing, so let’s get into it.

Understanding User Engagement Metrics 📐

User engagement metrics provide valuable insight for both new and existing websites. They should be checked regularly as a sort of ‘pulse check’ for website user experience and performance. So, what metrics should you be looking at? Website metrics can be overwhelming; there are hundreds if not thousands to analyze, so let’s focus on four:

Bounce rate


Measures the percentage of users that visit just one page on your site before leaving. If your bounce rate is high it suggests that users aren’t finding the content relevant, engaging, or useful. It points to a poor initial reaction to your site and means that users are arriving, making a judgment about your design or content, and then leaving.

Time on page


Calculated as the difference between when a person lands on a page and when they move on to the next one. It indicates how engaging or relevant individual pages on your website are. Low time on page figures suggest that users aren’t getting what they need from a certain page, either in terms of the content, the aesthetics, or both.

Click-through rate


Click-through rate compares the number of times someone clicks on your content to the number of impressions you get (how many times an internal link or ad was viewed). The higher the rate, the better the engagement and performance of that element. User experience design can influence click-through rates through copywriting, button contrasts, heading structure, navigation, etc.

Conversion rate


Conversion rates are perhaps the pinnacle of user engagement metrics. Conversion rate is the percentage of users that perform specific tasks you define. They are therefore dictated by your goals, which could include form submissions, transactions, etc. If your website has high conversion rates, you can be fairly confident that your website is matching your users’ needs, requirements, and expectations.

But how do these metrics help? Well, they don’t give you an answer directly. The metrics point to potential issues with website user experience. They guide further research and subsequent updates that lead to website improvement. In the next section, we’ll discuss how these and others can support better website user experiences.
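For the sake of illustration, the metrics above reduce to simple ratios you can compute from raw analytics counts. Here’s a minimal sketch; the function names and sample numbers are hypothetical, not from any particular analytics platform:

```python
def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Percentage of sessions that viewed only one page before leaving."""
    return single_page_sessions / total_sessions * 100

def click_through_rate(clicks: int, impressions: int) -> float:
    """Clicks divided by the number of times the element was shown."""
    return clicks / impressions * 100

def conversion_rate(conversions: int, total_visitors: int) -> float:
    """Percentage of visitors who completed a goal you define."""
    return conversions / total_visitors * 100

# Hypothetical numbers for a month of traffic
print(round(bounce_rate(420, 1000), 1))        # 42.0
print(round(click_through_rate(35, 700), 1))   # 5.0
print(round(conversion_rate(25, 1000), 1))     # 2.5
```

The value of these numbers comes from tracking them over time and comparing them against your own baselines, not from any universal “good” threshold.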

Identifying Areas for Improvement 💡

So, you’ve looked at your website’s user engagement metrics and discovered some good, and some bad. The good news is, there’s value in discovering both! The catch? You just need to find it. Remember, the metrics on their own don’t give you answers; they provide you direction.

The ‘clues’ that user engagement metrics provide are the starting point for further research. Remember, we want to make data-driven decisions. We want to avoid making assumptions and jumping to conclusions about why our website is reporting certain metrics. Fortunately, there are a bunch of different ways to do this.

User research data can be gathered using both qualitative and quantitative research techniques. Insights into user behavior and needs can reveal why your website might be performing in certain ways.

Qualitative research techniques

  • Usability test – Test a product with people by observing them as they attempt to complete various tasks.
  • User interview – Sit down with a user to learn more about their background, motivations and pain points.
  • Contextual inquiry – Learn more about your users in their own environment by asking them questions before moving onto an observation activity.
  • Focus group – Gather 6 to 10 people for a forum-like session to get feedback on a product.

Quantitative research techniques

  • Card sorts – Find out how people categorize and sort information on your website.
  • First-click tests – See where people click first when tasked with completing an action.
  • A/B tests – Compare two versions of a design to work out which is more effective.
  • Clickstream analysis – Analyze aggregate data about website visits.
  • Tree tests – Test your site structure using text-only categorization and labels.

The type of research depends on what question you want to answer. Being specific about your question will help you identify which research technique(s) to deploy and will ultimately determine the quality of your answer. If you’re serious about website improvement: identify problem areas with user engagement metrics, and investigate how to fix them with user research.
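To give one concrete example, the A/B tests listed above come down to a statistical comparison of two conversion rates. Below is a minimal sketch of a two-proportion z-test, assuming you have raw conversion counts for each variant; the numbers are invented for illustration:

```python
from math import sqrt, erf

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = ab_test(conversions_a=50, visitors_a=1000,
               conversions_b=75, visitors_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference
```

In practice, most A/B testing tools run a calculation like this for you; the point is that the verdict depends on sample size as much as on the raw difference in rates.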

Optimizing Content and Design

If you have conducted user research and found weak areas on your website, there are many things to consider. Three good places to start are navigation, content, and website layout. Combined, these have a huge impact on user experience and can be leveraged to address disappointing engagement metrics.

Navigation


Navigation is a crucial aspect of creating a good user experience since it fundamentally connects pages and content which allows users to find what they need. Navigation should be simple and easy to follow, with important information/actions at the top of menus. Observing the results of card sorting, tree testing, and user testing can be particularly useful in website optimization efforts. You may find that search bars, breadcrumb trails, and internal links can also help overcome navigation issues.

Content


Are users seeing compelling or relevant content when they arrive on your site? Is your content organized in a way that encourages further exploration? Card sorting and content audits are useful in answering these questions and can help provide you with the insights required to optimize your content. You should identify what content might be redundant, out of date, or repetitive, as well as any gaps that may need filling.

Layout


A well-designed layout can improve the overall usability of a website, making it easier for users to find what they're looking for, understand the content, and engage with it. Consider how consistent your heading structures are and be sure to use consistent styling throughout the site, such as similar font sizes and colors. Don’t be afraid to use white space; it’s great at breaking up sections and making content more readable.

An additional factor related to layout is mobile optimization. Mobile-first design is necessary for apps, but it should also factor into your website design. How responsive is your website? How easy is it to navigate on mobile? Is your font size appropriate? You might find that poor mobile experience is negatively impacting user engagement metrics.

Measuring Success 🔎

User experience design is an iterative, ongoing process, so it’s important to keep a record of your website’s user experience metrics at various points of development. Fortunately, website analytics platforms will provide you with historic user data and key metrics; but be sure to keep a separate record of what improvements you make along the way. This will help you pinpoint what changes impacted different metrics.

Define your goals and create a website optimization checklist that monitors key metrics on your site. For example, whenever you make an update, ensure bounce rates don’t exceed a certain number during the days following; check that your conversion rates are performing as they should be; and check that your time on page hasn’t dropped. Be sure to compare metrics between desktop and mobile too.
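One way to operationalize such a checklist is a small script that compares post-update metrics against thresholds you’ve defined. This is only a sketch; the threshold values and metric names are purely illustrative and should come from your own goals and baselines:

```python
# Illustrative thresholds; derive these from your own historic data.
THRESHOLDS = {
    "bounce_rate": ("max", 55.0),       # percent
    "conversion_rate": ("min", 2.0),    # percent
    "time_on_page": ("min", 45.0),      # seconds
}

def check_metrics(metrics: dict) -> list:
    """Return an alert for every metric outside its threshold."""
    alerts = []
    for name, (kind, limit) in THRESHOLDS.items():
        value = metrics[name]
        if kind == "max" and value > limit:
            alerts.append(f"{name} too high: {value} (limit {limit})")
        elif kind == "min" and value < limit:
            alerts.append(f"{name} too low: {value} (limit {limit})")
    return alerts

# Hypothetical figures from the days after an update
alerts = check_metrics({"bounce_rate": 61.2,
                        "conversion_rate": 2.4,
                        "time_on_page": 38.0})
for alert in alerts:
    print(alert)
```

Pairing alerts like these with your change log makes it much easier to pinpoint which update moved which metric.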

Users’ needs and expectations change over time, so keep an eye on how new content is performing. For example, which new blog posts have attracted the most attention? What pages or topics have had the most page views compared to the previous period? Tracking such changes can help to inform what your users are currently engaged in, and will help guide your user experience improvements.

Conclusion 🤗

User engagement metrics allow you to put clear parameters around user experience. They allow you to measure where your website is performing well, and where your website might need improving. Their main strength is in how accessible they are; you can access key metrics on website analytics platforms in moments. However, user engagement metrics on their own may not reveal how and why certain website improvements should be made. In order to understand what’s going on, you often need to dig a little deeper.

Time on page, bounce rate, click-through rate, and conversion rate are all great starting points to understand your next steps toward website improvement. Use them to define where further research may be needed. Not sure why your average pages per session is two? Try first-click testing to see where users are heading that turns out to be a dead end. Is your bounce rate too high? Conduct a content audit to find out if your information is still relevant, or look into navigation roadblocks. Whatever the question, keep searching for the answer.

User engagement metrics will keep you on your toes, but that’s a good thing. They empower you to make ongoing website improvements and ensure that users are at the heart of your website design. 

Author: Optimal Workshop

Usability Testing: what, how and why?

Knowing and understanding why and how your users use your product can be invaluable for getting to the nitty gritty of usability: where they get stuck and where they fly through. Delving deep with probing questions into motivation, or skimming over a design looking for issues, can be equally informative.

Usability testing can be done in several ways, and each has its benefits. Put simply, usability testing is literally testing how usable your product is for your users. If your product isn’t usable, users won’t stick around or complete their tasks, let alone come back for more.

What is usability testing? 🔦

Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.

These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.

Let’s go into the three main ways of categorizing usability tests:

  1. Moderated and unmoderated
  2. Remote or in person
  3. Explorative, assessment or comparative

1. Moderated or unmoderated usability testing 👉👩🏻💻

Moderated usability testing is done in-person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, this can be expensive to organize and run.

Top tip: Use moderated testing to investigate the reasoning behind user behavior.

Unmoderated usability testing is done without direct supervision; participants are likely in their own homes and/or using their own devices to browse the website being tested, often at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions can be difficult.

Top tip: Use unmoderated testing to test a very specific question or observe and measure behavior patterns.

2. Remote or in-person usability testing 🕵

Remote usability testing is done over the internet or by phone, allowing participants the time and space to work in their own environment and at their own pace. However, this doesn’t give the researcher much contextual data, because you’re unable to ask questions about intention or probe deeper when a participant makes a particular decision. Remote testing doesn’t go as deep into a participant’s reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.

Top tip: Use remote testing when a large group of participants is needed and the questions asked can be direct and unambiguous.

In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.

Top tip: In-person testing gives researchers more time and insight into motivation for decisions.

3. Explorative, assessment, or comparative testing 🔍

These three usability testing methods generate different types of information:

Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.

Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.

Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.

Top tip: Choose between these methods based on what research is being done and how much qualitative or quantitative data you want.

Which method is right for you? 🧐

Whether the testing is done in-person, remote, moderated or unmoderated will depend on your purpose, what you want out of the testing, and to some extent your budget. 

Depending on what you are testing, each of the usability testing methods we explored here can offer an answer. If you are at the development stage of a product, it can be useful to conduct a usability test on the entire product, checking the intuitive usability of your website to ensure users can make the best decisions quickly. Adding, changing, or upgrading a product can also be the moment to check on a specific question around usability. Planning and understanding your objectives are key to selecting the right usability testing option for your project.

Let's take a look at a couple of examples of usability testing.

1. Lab-based, in-person moderated testing - mid-life website

Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a bricks-and-mortar store might. You’ve noticed a drop in sales in certain areas. How do you find out what is going wrong or where users are getting lost? With an in-person, moderated usability test in a lab (or other controlled environment), you can set users tasks and watch (and record) what they do.

The researcher can literally be standing or sitting next to the participant throughout, recording contextual information such as how they interacted with the mouse, laptop or even the seat. Watching for cues as to the comfort of the participant and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants, but couldn’t find the ‘yoga’ section which was listed under gym rather than a clothing section.

This means you can look at how your stock is organized, or even investigate undertaking a card sort. In-person testing provides robust and fully rounded feedback on users’ behaviors, expectations, and experiences, producing data that can be turned directly into actionable directives when redeveloping the website.

2. Remote, moderated assessment testing - app product development

You are looking at launching an app that parents can use to access information and updates from their school. It’s still in the development stage, and at this point you want to know how easy the app is to use. By setting some very specific tasks for participants to complete, the app can be sent to them and they can be left to work through the tasks (or not), providing feedback and comments around its usability.

The next step may be to use first-click testing to see how and where the interface is clicked and where participants may be spending time or becoming lost. While the feedback and data gathered from this testing can be light, it will be very direct to the questions asked, and it will provide data to back up (or possibly contradict) the assumptions that were made.

3. Moderated, In-person, explorative testing - new product development

You’re right at the start of the development process. The idea is new and fresh and the basics are being considered. What better way to get an understanding of what your users truly want than an explorative study?

Open-ended questions with participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market but none as exciting as what your team has dreamt up. By putting it (and possibly the competitors) to participants they can give direct feedback on what they like, love and loathe.

They can also help brainstorm ideas for better ways to make the app work, or improve the interface. All of this is done before any money is sunk into development.

Wrap up 🌯

Key objectives will dictate which usability testing method will deliver the answers to your questions.

Whether it’s in-person, remote, moderated, or comparative, with a bit of planning you can gather data around your users’ very real experience of your product. Identify issues, successes, and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.


5 tips for running an effective usability test

Usability testing is one of the best ways to measure how easy and intuitive something is to use by testing it with real people. You can read about the basics of usability testing here.

Earlier this year, a small team within Optimal Workshop completely redesigned the company blog. More than anything, we wanted to create something that was user-friendly for our readers and would give them a reason to return. I was part of that team, and we ran numerous sessions interviewing regular readers as well as people unfamiliar with our blog. We also ran card sorts, tree tests and other studies to find out all we could about how people search for UX content. Unsurprisingly, one of the most valuable activities we did was usability testing – sitting down with representative users and watching them as they worked through a series of tasks we provided. We asked general questions like “Where would you go to find information about card sorting?”, and we also observed them as they searched through our website for learning content.

By stripping away any barriers between ourselves and our users and observing them as they navigated through our website and learning resources, as well as those of other companies, we were able to build a blog with these people’s behaviors and motivations in mind.

Usability testing is an invaluable research method, and every user researcher should be able to run sessions effectively. Here are 5 tips for doing so, in no particular order.

1. Clarify your goals with stakeholders

Never go into a usability test blind. Before you ever sit down with a participant, make sure you know exactly what you want to get out of the session by writing down your research goals. This will help to keep you focused, essentially giving you a guiding light that you can refer back to as you go about the various logistical tasks of your research. But you also need to take this a step further. It’s important to make sure that the people who will utilize the results of your research – your stakeholders – have an opportunity to give you their input on the goals as early as possible.

If you’re running usability tests with the aim of creating marketing personas, for example, meet with your organization’s marketing team and figure out the types of information they need to create these personas. In some cases, it’s also helpful to clarify how you plan to gather this data, which can involve explaining some of the techniques you’re going to use.

Lastly, find out how your stakeholders plan to use your findings. If there are a lot of objectives, organize your usability test so you ask the most important questions first. That way, if you end up going off track or you run out of time you’ll have already gathered the most important data for your stakeholders.

2. Be flexible with your questions

A list of pre-prepared questions will help significantly when it comes time to sit down and run your usability testing sessions. But while a list is essential, sometimes it can also pay to ‘follow your nose’ and steer the conversation in a (potentially) more fruitful direction.

How many times have you been having a conversation with a friend over a drink or dinner, only for you both to completely lose track of time and find yourselves discussing something completely unrelated? While it’s not good practice to let your usability testing sessions get off track to this extent, you can surface some very interesting insights by paying close attention to a user’s behavior and answers during a testing session and following interesting leads.

Ideally, and with enough practice, you’ll be able to answer your core (prepared) questions and ask a number of other questions that spring to mind during the session. This is a skill that takes time to master, however.

3. Write a script for your sessions

While a usability test script may sound like a fancy name for your research questions, it’s actually a document that’s much more comprehensive. If you prepare it correctly (we’ll explain how below), you’ll have a document that you can use to capture in-depth insights from your participants.

Here are some of the key things to keep in mind when putting together your script:

  • Write a friendly introduction – It may sound obvious, but taking the time to come up with a friendly, warm introduction will get your sessions off to a much better start. The bonus of writing it down is that you’re far less likely to forget it!
  • Ask to record the session – It’s important to record your session (whether through video or audio), as you’ll want to go back later and analyze any details you may have missed. This means asking for explicit permission to record participants. In addition to making them feel more comfortable, it’s just good practice to do so.
  • Allocate time for the basics – Don’t dive into the complex questions first, use the first few minutes to gather basic data. This could be things like where they work and their familiarity with your organization and/or product.
  • Encourage them to explain their thought process – “I’d like you to explain what you’re doing as you make your way through the task”. This simple request will give you an opportunity to ask follow-up questions that you otherwise may not have thought to ask.
  • Let participants know that they’re not being tested – Whenever a participant steps into the room for a test, they’re naturally going to feel like they’re being tested. Explain that you’re testing the product, not them. It’s also helpful to let them know that there are no right or wrong answers. This is an important step if you want to keep them relaxed.

It’s often easiest to have a document with your script printed out and ready to go for each usability test.

4. Take advantage of software

You’d never see a builder without a toolbox full of a useful assortment of tools. Likewise, software can make the life of a user researcher that much easier. The paper-based ways of recording information are still perfectly valid, but introducing custom tools can make both the logistics of user research and the actual sessions themselves much easier to manage.

Take a tool like Calendly, for example. This is a powerful piece of scheduling software that almost completely takes over the endless back and forth of scheduling usability tests. Calendly acts as a middle man between you and your participants, allowing you to set the times you’re free to host usability tests, and then allowing participants to choose a session that suits them from these times.

Our very own Reframer makes the task of running usability tests and analyzing insights that much easier. During your sessions, you can use Reframer to take comprehensive notes and apply tags like “positive” or “struggled” to different observations. Then, after you’ve concluded your tests, Reframer’s analysis function will help you understand wider themes that are present across your participants.

There’s another benefit to using a tool like Reframer. Keeping all of your notes in one place means you can easily pull up data from past research sessions whenever you need to.

5. Involve others

Usability tests (and user interviews, for that matter) are a great opportunity to open up research to your wider organization. Whether it’s stakeholders, other members of your immediate team or even members of entirely different departments, giving them the chance to sit down with users will show them how their products are really being used. If nothing else, these sessions will help those within your organization build empathy with the people they’re building products for.

There are quite a few ways to bring others in, such as:

  • To help you set up the research – This can be a helpful exercise for both you (the researcher) and the people you’re bringing in. Collaborate on the overarching research objectives, ask them what types of results they’d like to see and what sort of tasks they think could be used to gather these results.
  • As notetakers – Having a dedicated notetaker will make your life as a researcher significantly easier. This means you’ll have someone to record any interesting observations while you focus on running the session. Just let them know what types of notes you’d like to see.
  • To help you analyze the data – Once you’ve wrapped up your usability testing sessions, bring others in to help analyze the findings. There’s a good chance that an outside perspective will catch something you may miss. Also, if you’re bringing stakeholders into the analysis stage, they'll get a clearer picture of what it means and where the data came from.

There are myriad other tips and best practices to keep in mind when usability testing, many of which we cover in our introductory page. Important considerations include taking good quality notes, carefully managing participants during the session (not giving them too much guidance) and remaining neutral throughout when answering their questions. If you feel like we’ve missed any really important points, feel free to leave a comment!


Ella Stoner: A three-step-tool to help designers break down the barriers of technical jargon

Designing in teams with different stakeholders can be incredibly complex. Each person looks at projects through their own lens, and can potentially introduce jargon and concepts that are confusing to others. Simplicity advocate Ella Stoner knows this scenario all too well. It’s what led her to create an easy three-step tool for recognizing problems and developing solutions. By getting everyone on the same page and creating an understanding of what the simplest solution is, designers can create products with customer needs in mind.

Ella’s background

Ella Stoner is a CX Designer at Spark in New Zealand. She is a creative thought leader and a talented designer who has facilitated over 50 Human Centered Design Workshops. Ella and her team have developed a cloud product that enables businesses to connect with Public Cloud Services such as Amazon, Google and Azure in a human-centric way. She brings a simplicity-focused approach to her work that is reflected in her UX New Zealand talk. It’s about cutting out complex details to establish an agreed starting point that is easily understood by all team members.

Contact Details:

You can find Ella on LinkedIn.

Improving creative confidence 🤠

Ella is confident that she is not the only designer who has felt overwhelmed with technical and industry specific jargon in product meetings. For example, on Ella’s first day as a designer with Spark, she attended a meeting about an HSNS (High Speed Network Services) tool. Ella attempted to use context clues to try and predict what HSNS could mean. However, as the meeting went on, the technical and industry-specific jargon built on each other and Ella struggled to follow what was being said. At one point Ella asked the team to clarify this mysterious term:

“What’s an HSNS and why would the customer use it?” she asked. Much to her surprise, the room was completely silent. The team struggled to answer a basic question about a term that appeared to be common knowledge during the meeting. There’s a saying, “Why do something simply when you can make it as complicated as possible?”. This happens all too often, where people and teams struggle to communicate with each other, and this results in projects and products that customers don’t understand and can’t use. Ella’s In A Nutshell tool is designed to cut through all that. It creates a base-level starting point that’s understood by all, cuts out jargon, and puts the focus squarely on the customer. It:

  • condenses down language and jargon to its simplest form
  • translates everything into common language
  • flips it back to the people who’ll be using it.

Here’s how it works:

First, you complete this phrase as it pertains to your work: “In a nutshell, (project/topic) is (describe what the project or topic is in a few words), that (state what the project/topic does) for (indicate key customer/users and why).” In order for this method to work, each of the four categories you insert must be simple and understandable. All acronyms, complex language, and technical jargon must be avoided. In a literal sense, anyone reading the statement should be able to understand what is being said “in a nutshell.” When you’ve done this, you’ll have a statement that can act as a guide for the goals your project aims to achieve.
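If it helps, the formula can be captured as a simple fill-in-the-blanks template. A sketch, using the article’s HSNS example with invented answers purely for illustration:

```python
def in_a_nutshell(project: str, what_it_is: str,
                  what_it_does: str, who_for: str) -> str:
    """Fill the 'In a Nutshell' template with plain-language answers."""
    return (f"In a nutshell, {project} is {what_it_is}, "
            f"that {what_it_does} for {who_for}.")

# Hypothetical answers a team might vote on
statement = in_a_nutshell(
    "HSNS",
    "a high-speed network service",
    "moves data quickly between sites",
    "businesses with multiple offices",
)
print(statement)
```

The discipline is in the inputs, not the template: if any blank needs an acronym or technical jargon to fill, it isn’t simple enough yet.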

Why it matters 🤔

Applying the “In A Nutshell” tool doesn’t take long. However, it's important to write this statement as a team. Ideally, it’s best to write the statement at the start of a project, but you can also write it in the middle if you need to create a reference point, or any time you feel technical jargon creeping in.

Here’s what you’ll need to get started:

  • People with three or more role types (this accommodates varying perspectives to ensure it’s as relevant as possible)
  • A way to capture text – e.g., a whiteboard, Slack channel, or Miro board
  • An easy voting system – e.g., thumbs up in a chat

Before you start, you may need to pitch the idea to someone in a technical role. If you’re feeling lost or confused, chances are someone else will be too. Breaking down the technical concepts into easy-to-understand and digestible language is of utmost importance:

  1. Explain the formula to the team.
  2. Individually brainstorm possible answers for each gap for three minutes.
  3. Put every idea up on the board or channel and vote on the best one.

Use the most popular answers as your final “In a Nutshell” statement.

Side note: Keep all the options that come through the brainstorm. They can still be useful in the design process to help form a full picture of what you’re working on, what it should do, who it should be for etc.
