First-click Testing 101

Your guide to creating and running effective first-click tests


When people get their first click right on a website, they're two to three times as likely to find what they're looking for as they are if that first click takes them in the wrong direction. First-click testing results can tell you if your customers are able to find what they're looking for quickly and easily when they land on your webpage, and if not, where they click instead.

To create a first-click test, you'll upload screenshots, sketches, or wireframes of the webpages you want to test, and invite people to complete tasks by clicking where they think they'd find the right information.

First-click testing results enable you to make user-friendly decisions for your webpages about:

  • the content and visual elements to prioritize for your audience
  • the language you use for labels, links, and content
  • where you place things like buttons, shopping cart icons, and menus.

An example first-click test

Here's an example task from start to finish to show you what first-click testing is and what it can tell you.

For this task, we selected Yelp's homepage as our image, to find out whether most people relied on the search bar to find information, or on the menus and search filters instead.

Participants were presented with this task:

Example task start

To complete the task, participants clicked on the screenshot where they thought they'd find the right information. After one click, they then moved on to the next task:

Example task image

Overall, 38 people completed this task, taking an average of 13.8 seconds.

Chalkmark results are presented as various clickmaps that display where people clicked, with each map being customizable depending on the data that matters the most to you — you might want to see individual clicks, or large sections showing percentages only, for example.

For this task, we wanted to compare the ‘search bar’ and ‘menu’ data, so we used the Custom Selection clickmap to show just that data:

Example task selection

In answer to our question, we found that 13% (5 people) went to the search bar, and 87% (33 people) clicked on either menu items or search filters.
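Chalkmark's clickmaps do this aggregation for you, but the arithmetic behind those percentages is simple. Here's a minimal sketch of bucketing raw click coordinates into named regions and counting them — the region boundaries and click coordinates below are invented for illustration, not taken from the Yelp study:

```python
# Hypothetical region boundaries in pixels: (left, top, right, bottom).
# These are made-up values, not real Yelp page coordinates.
REGIONS = {
    "search bar": (120, 40, 680, 80),
    "menus/filters": (0, 90, 220, 600),
}

def classify(click, regions):
    """Return the name of the first region containing the click, else 'other'."""
    x, y = click
    for name, (left, top, right, bottom) in regions.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return "other"

# Made-up first clicks from four participants.
clicks = [(300, 60), (100, 200), (150, 400), (50, 120)]

counts = {}
for c in clicks:
    name = classify(c, REGIONS)
    counts[name] = counts.get(name, 0) + 1

for name, n in counts.items():
    print(f"{name}: {n} clicks ({n / len(clicks):.0%})")
```

The same idea scales to any number of regions; a click that falls outside every defined region is simply counted as ‘other'.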


How to create your first-click test

A first-click test involves two main things: your webpages, and the tasks you write to test the webpages. You could write 10 tasks to test one webpage, or write one task to test 10 webpages — how you piece the test together is up to you.

Beyond the webpages and tasks, you can refine your data by tailoring your instructions, and if you want to segment your participants for in-depth analysis, you can ask pre- and post-survey questions.

Select the webpages you want to test

When you come to create a first-click test, keep the goal in mind: you want to know if people take the right first step when they land on your page, because they're two to three times as likely to get what they came for if they do.

Select images that reflect the intent of your existing or planned website. Some ideas for things to test include:

  • screenshots of whole webpages (like the homepage, product pages, pricing pages, knowledge base pages, shopping carts, and so on)
  • screenshots of parts of webpages (like top or side menus, mega-menus, account options, any part of a webpage you want to focus on in particular)
  • wireframes created using products like Axure and InVision
  • wireframes created using the back of the nearest piece of paper to you right now
  • app designs
  • internet of things things (like remote controls, game consoles, and so on).

Inspiration from other first-click tests

The screen and remote control for a video-streaming service:

Example image gui

An About Us page from a company:

Example image screenshot

A sketched wireframe for a website design:

Example wireframe sketch

And a fancier version of the same wireframe:

Example wireframe highdef

Upload JPEGs or PNG files of any size, and preview before launching

You can upload JPEG (.jpg) or PNG files of any size or dimension. High-density retina images will be scaled down to fit a normal-sized screen, but no other images will be automatically resized to fit. If you want all your images to be consistent across the whole test, resize them before you upload.

The ‘preview’ feature is your best friend when creating a first-click test. Once you've uploaded your images and added tasks, preview the test on the devices your participants will use. For example, if you're asking participants to complete the activity on a mobile device, preview your test on a mobile device.

Establish what you want to find out

First-click testing can help you make informed decisions at any stage of a web design or market research project. The key to getting useful data is to be clear why you need it, and exactly what it can help you with.

Gather evidence for the ‘why’ from things like:

  • analytics or user testing that shows people leave before purchasing, or spend a long time on certain pages
  • requirements and briefs from clients and managers
  • data from your support call centre on particularly common things people call for (that they could instead find on the website)
  • data on low sales or disappointing engagement from users.

Every task you set will answer variations on the following questions about your webpage, so keep these in mind as you're figuring out what you want to achieve. The answers to these questions (your first-click test data) will help you make the most user-focused decisions about your designs.

  • How many people got their first click correct? How many failed?
  • Where did people click incorrectly, and why might that be?
  • Where did most people click when there's more than one correct answer?
  • How long did it take people to click on average? Is this too long?

Write relevant, effective tasks

There's a knack for getting user research tasks right, and it's about understanding how people naturally think and behave. First-click tests happen away from a live website: people receive an invitation, read the instructions, and complete the tasks knowing it's a test and not the real thing. And we know this for certain: in general, if someone is given a test, they'll want to pass.

You have two main weapons to combat this instinctive behavior: your instructions, and your tasks. The default instructions in Chalkmark make clear that the webpage is being tested, not the person. And you can get your tasks right by taking care with the language you use.

Write your tasks as action-oriented scenarios

The data you get from a first-click test is about the link between task and success.

People tend to be action-oriented when they visit a website: everything they do, from scanning the page to clicking on a menu, is in pursuit of a goal. This may seem an obvious point for large service-based websites, such as government or university websites, but even people ‘just browsing’ are on the hunt for something, and primarily scan until they find it.

You want your tasks to mimic the thought a person might have when they visit your website. So write in a natural, plain English style, and introduce a true-to-life scenario for people to bring to mind.

Instead of writing ‘Find our office location’ you could write ‘You've booked a meeting with one of our staff, and you want to find out where to go.’ Keeping the tone conversational will make the task less ‘Click the thing’ and more ‘Find the information you need to get something done.’

Use different language than your webpages

In light of what we know about people — that they're inclined to want to get it ‘right’ — they'll scan the wording of your tasks searching for clues to help them do just that. If you set a task that includes “Find out about career options at our company”, and the word “Careers” appears on the webpage, your participants will probably play a game of ‘Snap’ instead of reacting to the task more naturally.

The first solution is to craft your tasks with the webpages out of sight. Though you might find it useful to use synonyms of your webpage words — a good place to start to get ideas flowing — it's risky. Once a word or phrase gets in your head, it can be hard to shake, and this may limit the variations you could come up with. If you hide the webpage and brainstorm a bunch of ideas based on your knowledge of the website and your users, your tasks have a better shot at authenticity.

For example, let's say you want to find out if people can find a funding application form on your website. Instead of writing a task that includes the verb ‘apply’, picture a user finding the link to the form and thinking ‘This is it, I found it.’ Then brainstorm a bunch of ideas for what they might have had in mind, like:

  • What do I have to do to get funding for this project?
  • Can I send them my information online, or do I have to call them?
  • What kind of evidence do I need so I can apply for this funding?

An example

In the following example, we wanted to find out whether or not people could find the Events link, and, if they could, which link they would select (because there were two!). So we worded our task thus:

“You've heard that there's a street festival coming up in your city. You don't know much about it, and would like to see if Yelp has any information about it.”

Example task language

After 38 people had completed the task, we found out that 39% of people went to the search bar, 37% to one Events link, and 16% to the other.

Example task language selection

If we'd used ‘events’ instead of ‘street festival’, we'd have had more conclusive but quite useless data.


Recruit quality participants

Recruiting quality participants is important. You want participants who are as close to your intended user demographic as possible, and who are willing to put the time and thought into giving authentic responses.

Source participants who represent your users

If you have access to a pool of participants (like employees if you're working on an internal product, or your customer mailing list), then these people are the best first choice. In fact, building up a loyal customer base that is willing to participate in user research is the best thing you can do to ensure quality data over time. Make use of various channels when inviting these people: email, website pop-ups, social media, and so on.

You can also make use of reputable recruitment panels, which can be effective if you want trustworthy data with minimal effort. You can recruit participants from quite specific demographics, and be confident that the participants will take your survey seriously (they are getting paid, after all). You can choose to do this within Chalkmark from your account using our in-app recruitment service.

Give them a reason to participate

The more people feel they personally get from an activity, the more likely they'll be to take it seriously. Here are some insights into the psychology of survey respondents that you'll find useful (and that you can read more about on our blog):

  • Make it interesting to people
  • Offer them money or products
  • Encourage a sense of ‘I want to help’
  • Make them feel their contribution is important.

As Scott Smith wrote in his insightful piece on why people participate in surveys, we can learn a lot from theories of human behavior.

Aim for 50+ quality participants

Aiming for 50–100 completed first-click tests will give you reliable data that you can use to make informed decisions.

From what we've seen, the trends in participant responses should start to become clear with around 50 participants. The more participants you recruit from this point, the more confident you can be that the data is representative of your users.

Don't fret if timing or budget means you can't get 50+ completed first-click tests. If you can get between 30 and 50 participants, you'll still see patterns that will lead you in the right direction and kick-start important conversations with stakeholders and designers. And fewer than 30? You'll be surprised at how much you find out (plus data from a first-click test with 10 participants is way more useful than no first-click test at all).
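One rough way to see why the 50-participant guideline holds up is the margin of error on an observed click proportion, which shrinks as the sample grows. This sketch uses the standard normal approximation and assumes the worst-case proportion of 0.5 purely for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed over n participants."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case proportion (p = 0.5) at a few sample sizes.
for n in (10, 30, 50, 100):
    print(f"n={n}: ±{margin_of_error(0.5, n):.0%}")
```

At 10 participants the uncertainty is wide, and at around 50 it's narrow enough to separate a clear winner from a coin flip — which lines up with where trends start to settle in practice.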


Interpret your first-click testing results

First-click testing results are presented in clickmaps you can poke and prod and adjust for a whole bunch of different views and insights. You'll also get detailed participant information, the results of your questionnaire, and links for sharing and downloading your results.

Overall, how you approach the results will depend on your style (you could call it your data personality, perhaps). You might perform a data smash-n-grab: dive in, pull out the quantitative data that'll convince your client their website needs work, throw it up on a slide, and run. Or you might stroll through it, with clear questions in mind, but with an exploratory and creative approach — chances are you'll want to do a bit of both.

Results Overview

The results Overview tells you all the need-to-know information about your first-click test. It updates in real time, so you can see relevant dates, the number of people who've completed the test so far (and the number who've abandoned it), and the time taken.

Results overview
Participant and questionnaire results

The Participants table displays useful information about every participant who started your first-click test, and can be used to narrow and broaden the range of data you want to analyze.

You can use the participants table to:

  • include or exclude participants based on time taken or tasks completed
  • assess the validity of the responses and exclude or include particular participants (you might exclude participants who finished too quickly, or who failed to give a requested identifier, for example)
  • segment your results based on identifiers, URL tokens, or answers to pre- and post-survey questions.

You can do any of these things at any time during your analysis. If you plan to analyze data from different demographics separately, you'd begin each demographic analysis at the Participants table.
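If you export your participant data, the same include/exclude and segmentation logic is easy to mimic in a few lines of code. A minimal sketch using plain Python dicts — the field names and values here are hypothetical, not Chalkmark's actual export format:

```python
# Made-up participant rows: each dict is one participant's exported data.
participants = [
    {"id": 1, "usage": "Multiple times a week", "time_taken": 12.4},
    {"id": 2, "usage": "Rarely or never", "time_taken": 25.1},
    {"id": 3, "usage": "Multiple times a week", "time_taken": 9.8},
    {"id": 4, "usage": "Rarely or never", "time_taken": 31.0},
]

def segment(rows, **criteria):
    """Keep only rows whose fields match every given criterion."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

def exclude_fast(rows, min_seconds):
    """Drop participants who finished suspiciously quickly."""
    return [r for r in rows if r["time_taken"] >= min_seconds]

frequent = segment(participants, usage="Multiple times a week")
print([r["id"] for r in frequent])  # participants 1 and 3
```

You'd then run your usual analysis once per segment, just as you would by filtering and reloading results in the participants table.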

Segment results based on answers to questions

If you’re interested in segmenting your results in the Participants table based on answers to your pre- and post-survey questions, visiting the Questionnaire results first will help you spot potentially significant and interesting patterns.

Here's where you can see how participants responded to your pre- and post-survey questions overall. If you've asked questions with multi-choice answers, you'll see a bar graph, along with the percentage and number of participants who selected each answer.

An example

In our Yelp study, we were interested in comparing the responses of people who used Yelp multiple times a week (18.4%) with people who used it rarely or never (15.8%).

Results questionnaire

To do this, we segmented the participants and looked at the results for each group separately, starting with the people who used Yelp multiple times a week. We selected them using the filter on the participants table, and then reloaded the results. Once we'd analyzed that group, we went back and reloaded the results to show the ‘Rarely or never’ group.

Results filter

To compare like with like, we looked at the same task and used the same clickmap type, the selection view. We found there were a few small differences between the two groups, but most of them went to the same place:

Use Yelp multiple times a week:

Results filter selection 1

Use Yelp rarely or never:

Results filter selection 2
Exploring the clickmap results

First-click testing measures just that: individual clicks in response to tasks. How you measure things like success will depend on how you've set your test up and what your objectives are. You may not be interested in success rates at all, but instead just be curious about where people clicked, and thus how they perceive your content.

Bring to mind — and to hand — the questions you set out to ask at the start, like the following:

  • How many people got their first click correct? How many failed?
  • Where did people click instead, and why might that be?
  • Where will most people click when there are two potential correct answers?
  • How long did it take people to click on average?
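The first and last of those questions reduce to simple summary statistics over the raw results. As a sketch with made-up data (the per-participant records below are invented for illustration):

```python
# Hypothetical raw results: whether each participant's first click was
# correct, and how long they took to make it.
results = [
    {"correct": True,  "seconds": 8.2},
    {"correct": False, "seconds": 21.5},
    {"correct": True,  "seconds": 11.0},
    {"correct": True,  "seconds": 34.9},
]

success_rate = sum(r["correct"] for r in results) / len(results)
avg_time = sum(r["seconds"] for r in results) / len(results)

print(f"First click correct: {success_rate:.0%}")
print(f"Average time to click: {avg_time:.1f} s")
```

The ‘why might that be' questions, of course, still need a human looking at where the incorrect clicks actually landed.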

Example task: When there are many ways to get information, where do most people go?

Our main question: On the Yelp website, when people need to edit their account profile, will they go to the profile icon with the drop-down menu in the top right corner, or to the link in the profile summary on the left?

Our task: “You want to change the password for your account.”

Number of completed responses: 38

Average time taken: 20.2 seconds

Our webpage and results (three different views of the same data):

Results heatmap Results grid Results selection

Our analysis: 47% (18 people) did go to the profile icon on the top right corner, which we'd half-predicted because it's a common feature on websites these days. But the other option was pretty close behind: 37% (14 people) went to the link on the profile summary. But perhaps the biggest insight is how long it took people to click on average: 20.2 seconds! That's like weeks on the web!

Where to next?

Creating a first-click test is the best way to learn the technique and start using the data to improve your designs — so go for it! You'll also find it useful to check out our sample studies (from both the participant's and the researcher's perspective). And you'll find a bunch of useful and inspirational information on our blog.

Happy first-click testing!

Try first-click testing for yourself