Writing relevant, effective tasks

There’s a knack to getting user research tasks right, and it’s about understanding how people naturally think and behave. First-click tests happen away from a live website or app: people receive an invitation, read the instructions, and complete the tasks knowing it’s a test and not the real thing. And one thing we know about people: if they’re given a test, they’ll want to pass it.

You have two main weapons to combat this instinctive behavior: your instructions, and your tasks. The default instructions in Chalkmark make clear that the page is being tested, not the person. And you can get your tasks right by taking care with the language you use.

Write your tasks as action-oriented scenarios

The data you get from a first-click test is about the link between task and success.

People tend to be action-oriented when they visit a website: everything they do, from scanning the page to clicking on a menu, is in pursuit of a goal. This may seem an obvious point for large service-based websites, such as government or university websites, but even people ‘just browsing’ are on the hunt for something, and primarily scan until they find it. The same goes when using an app, or when reviewing the cover of a magazine: people are working out what to click on or pay close attention to in order to reach their goal.

You want your tasks to mimic the thought a person might have when they visit your website. So write in a natural, plain English style, and introduce a true-to-life scenario for people to bring to mind.

Instead of writing ‘Find our office location’ you could write ‘You’ve booked a meeting with one of our staff, and you want to find out where to go.’ Keeping the tone conversational will make the task less ‘Click the thing’ and more ‘Find the information you need to get something done.’

Use different language than your webpages

In light of what we know about people — that they’re inclined to want to get it ‘right’ — they’ll scan the wording of your tasks searching for clues to help them do just that. If you set a task that includes “Find out about career options at our company” and the word “Careers” appears on the webpage, your participants will probably play a game of ‘Snap’ instead of responding to the task naturally. Humans are very good at pattern matching, and if we’re given the opportunity to short-cut a problem with a quick bit of pattern matching, we’ll take it.

The best way to avoid pattern-matching tasks is to craft your tasks with the webpages out of sight. Using synonyms of your webpage words might seem a good place to start — a way to get ideas flowing — but it’s risky: once a word or phrase gets in your head, it can be hard to shake, and that can limit the variations you come up with. If you hide the webpage and brainstorm a bunch of ideas based on your knowledge of the website and your users, your tasks have a better shot at authenticity.

For example, let’s say you want to find out if people can find a funding application form on your website. Instead of writing a task that includes the verb ‘apply’, picture a user finding the link to the form and thinking ‘This is it, I found it.’ Then brainstorm a bunch of ideas for what they might have had in mind, like:

  • What do I have to do to get funding for this project?
  • Can I send them my information online, or do I have to call them?
  • What kind of evidence do I need so I can apply for this funding?

An example

In the following example, we wanted to find out whether people could find how to sign up for a bank account and, if they could, which link they would select (because there were three!). So we worded our task like this:

“You want to sign up to start banking with the Bank of America. Where do you start this process?”

After 38 people had completed the task, we found that 37% clicked ‘Enrol’ and another 37% clicked ‘Open an account’. 16% clicked the call to action (CTA) at the bottom of the page, ‘Open a checking account – Get started now’, and the remaining 10% were scattered across ‘Schedule an appointment’ and ‘About us’.

*please note this study data is only for example purposes

If we had used ‘Enrol’ or ‘Open an account’ in our task instead of ‘Sign up’, we’d have had more ‘correct’ answers, but the data would have been close to useless: participants would most likely have been pattern matching the words in the task with the words they saw on screen.

Randomizing the order

Chalkmark offers you the option to randomize the order of the pages shown to participants. We recommend you do this to minimize order effects in your results. If you intend to test a sequence of pages, randomizing won’t be possible, but that fixed sequence will reflect the user’s real-life experience in any case.

How many tasks

How many tasks you include depends on a few factors:

  • How hard each task is – are you annoying your respondents with lots of super-tricky tasks?
  • How ‘learnable’ the design is – results change as people learn the interface, so responses get faster and more accurate for later tasks (another good argument for randomizing the order)
  • How engaged the respondents are – the more engaged, the more likely they are to complete the study.

Generally, we feel 8–10 tasks is a reasonable limit, but it depends on the factors above.

Over familiarity

Whilst it’s possible to conduct lots of tests with a single page, we don’t recommend it. The purpose of a click test is to test the findability of things on the webpage presented. If you show participants the same image repeatedly, they become so familiar with it that you stop testing findability and start testing recall. As a rule of thumb, don’t show the same page more than three or four times in a study, and never in consecutive tasks – intersperse it with other pages.