April 15, 2024

Chris Green: Jobs To Be Done methodology and its role in driving customer choice

Innovation is at the core of revenue growth - finding new ways to create and capture value. Most innovations fail not because they don’t work (organizations are very good at building products and services with features and benefits), but because they don’t create value on the dimensions that drive customer choice. If you don’t understand the causal drivers of customer choice, you’re largely shooting in the dark and risk creating something that customers won’t choose over the alternative solutions already on the market.

Chris Green, Head of CX and Innovation at Purple Shirt, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about the Jobs to be Done (JTBD) methodology and uncovering the causal drivers of customer choice in innovation.

In his talk, Chris walks us through the JTBD methodology, how to use it, and how it changes the way you think about markets and competition.

Background on Chris Green

Chris has a long and deep background in strategy and innovation. He cut his strategy teeth in the UK before moving to New Zealand in 2000, where he led strategy teams for organisations like Vodafone, Vector, and TelstraClear. He moved to Australia in 2011, where he began developing his expertise in the emerging field of innovation. He sharpened his innovation knowledge and skills by studying under Professor Clayton Christensen (the godfather of modern innovation theory) at Harvard University, and went on to lead one of Australia's leading innovation consultancies, where he helped organizations run innovation projects and build innovation capability.

Chris returned to New Zealand at the end of 2021 to lead the innovation practice of Purple Shirt, a UX design consultancy with offices in Auckland and Christchurch. In his spare time, you'll find Chris out on the water learning about foiling boats and boards.

Contact Details:

Email: chris@purpleshirt.co.nz

LinkedIn: https://www.linkedin.com/in/chris-green-kiwi/

Jobs To Be Done methodology and its role in driving customer choice

In this talk, Chris is specifically speaking about UX research in the context of building new products and services, not optimizing existing ones. He answers a critical question - how can we improve our odds of success when we launch something new to market?

Performing UX research on products and services that already exist is very different from researching totally new ones. Why? Generally, because customers of existing products are good at recommending improvements to things they already know and use - they have real user experience to draw from. The famous Henry Ford quote illustrates this well: “If I’d asked our customers what they wanted, they would have told me faster horses.”

Just because customers are giving researchers helpful and constructive feedback on a product or service doesn’t mean you should implement those improvements. In a user-focused discipline this can sound counterintuitive, but when it comes to new products and services, UX researchers should be wary of relying on user feedback absolutely.

Chris argues that customer feedback can sometimes lead us in the wrong direction. Assuming that a customer will choose our product if we simply implement their feedback is problematic. Chris stresses the difference between changes that drive improvement and changes that drive customer choice - they aren’t the same thing. Many businesses continually release new features, but these features rarely drive customer choice. Yes, a new feature may make the product better than before, but does it make it so much better that customers choose your product over others?

Causal drivers of choice 🤔

When researching new products, the most critical thing to understand is this - what causes someone to choose one product over another? If you don’t know the answer, you’re guessing about your product design from the very beginning.

Traditionally, market research (typically driven by marketing departments) has been poor at finding causation. It tends to find correlations between customer attributes and customer behavior (e.g. people in a certain age bracket buy a certain product), but these correlations are shallow and do little to reveal the true drivers of choice. The scarcity of causal studies is explained by the fact that they are difficult to conduct: to be truly useful, they need to uncover the deeper, root causes of human behavior rather than high-level trends.

So, how can we find the causal drivers of choice? And why does it matter?

Why it matters 🔥

The best method for uncovering the causal drivers of choice was invented by Professor Clayton Christensen, whom Chris describes as the guru of modern innovation theory. Christensen invented Disruption Theory and the Jobs to be Done (JTBD) methodology. His fundamental insight was this – researchers shouldn’t focus on the customer themselves; instead, they should focus on what the customer is trying to achieve.

Christensen’s JTBD methodology is about understanding the various things people need to get done in particular contexts. He argues that we, as consumers and customers, all look to “hire” products and services from businesses to get things done. We buy, hire, or lease products and services in order to make progress on something we’re trying to achieve.

These jobs to be done can be split broadly into three categories (which aren’t mutually exclusive):

  • Functional: Tasks that I want to complete
  • Emotional: How I want to feel
  • Social: How I want to be seen

Value creation opportunities arise when the currently available solutions (products/services in the market) are not getting the jobs done well. This “gap” essentially represents struggles and challenges that get in the way of progress. The gap is our opportunity to build something new that helps people get their jobs done better.

Chris uses Dropbox as a good example of an innovative company filling a gap and addressing a real need. People found themselves “hiring” various workarounds to access their files anywhere (e.g. emailing files to themselves or carrying USB drives). Dropbox addressed this by letting people store their files online and access them from anywhere - a solution that got the job done better by being more convenient, secure, and reliable.
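
To make the three job dimensions concrete, here’s a minimal sketch (in Python) of how a researcher might record a job. The structure and the specific entries are our own illustration, loosely based on the Dropbox example above, not something prescribed by the JTBD methodology:

```python
from dataclasses import dataclass, field

@dataclass
class JobToBeDone:
    """One 'job' a customer hires a product to do, in a given context."""
    context: str                                         # the situation the customer is in
    functional: list[str] = field(default_factory=list)  # tasks they want to complete
    emotional: list[str] = field(default_factory=list)   # how they want to feel
    social: list[str] = field(default_factory=list)      # how they want to be seen

# The Dropbox job, roughly as described above (wording is illustrative):
file_access = JobToBeDone(
    context="Working across several devices and locations",
    functional=["Access my files from any device"],
    emotional=["Feel confident my files are safe and available"],
    social=["Appear organized and reliable to colleagues"],
)
```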

The strategic relevance of “jobs” 🙌💼🎯

Using the JTBD methodology helps to change how you see the competitive landscape, thereby providing an opportunity to see growth where none might have seemed possible. 

Chris uses Snickers and Milky Way chocolate bars as examples of similar products that, on the surface, seem to compete against each other. Both sit in the same category, are bought in the same aisle, and have similar ingredients. Looked at through a “jobs” lens, however, they address two slightly different jobs. A Snickers is bought when you need fuel and is more a replacement for a sandwich, an apple, or a Red Bull (i.e. a product “hired” to prepare for the future and get an energy hit). A Milky Way, on the other hand, is bought to feel better - an emotional eat - and is more a replacement for ice cream or wine (i.e. a product “hired” to cope with the past).

Chris’s talk helps us to think more strategically about our design journey. To develop truly new and innovative products and services, don’t just take your users' feedback at face value. Look beyond what they’re telling you and try to see the jobs that they’re really trying to accomplish. 

Ruth Hendry: Food recalls, fishing rules, and forestry: creating an IA strategy for diverse audience needs

The Ministry for Primary Industries’ (MPI) customers have some of the most varied information needs - possibly the most varied in New Zealand. MPI provides information on how to follow fishing rules, what the requirements are for selling dairy products at a market, and how to go about exporting honey to Asia. Its website, mpi.govt.nz, holds all of this information.

However, the previous website was dense and complicated, and MPI’s customers were struggling to find the information they needed, often calling the contact center instead - one of several indicators that people were lost and confused on the website.

Ruth Hendry, Head of Strategic Growth at Springload, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about how new IA helped MPI’s broad range of customers find the information they needed.

In her talk, Ruth takes us through the tips and techniques used to create an IA that met a wide variety of user needs. She covers the challenges they faced, what went well, what didn’t go so well, and what her team would do differently next time.

Background on Ruth Hendry 💃🏻

Ruth was Springload’s Content Director; now she’s Head of Strategic Growth. She has broad experience in content, UX, and customer-led design. A data nerd at heart, she uses analytics, research and testing to drive decision-making, resulting in digital experiences that put the customer at the forefront.

At Springload Ruth has worked on large-scale content and information architecture projects for organisations including Massey University, Vodafone and Air New Zealand. She got into the world of websites in her native UK, working on Wildscreen's ARKive project. After she arrived in Aotearoa, she spent four years looking after Te Papa's digital content, including the live broadcast of the colossal squid dissection. She's Springload's resident cephalopod expert.

She finds joy in a beautiful information architecture, but her desk is as messy as her websites are tidy.

Contact Details:

Email address: ruthbhendry@gmail.com

LinkedIn URL: https://www.linkedin.com/in/ruth-hendry-658a0455/

Food recalls, fishing rules, and forestry: creating an IA strategy for diverse audience needs 🎣

Ruth begins her talk by defining IA. She says, “If IA is the way information is organized, structured, and labeled, then an IA strategy is the plan for how you achieve an effective, sustainable, people-focused IA.”

Considering this, applying an IA strategy to the Ministry for Primary Industries (MPI) website was a challenge due to its diverse user groups. MPI is responsible for a wide range of things, such as publishing food recalls, looking after New Zealand’s biosecurity, setting limits on how much fish can be caught, and explaining how to export products and even how to move pets between countries. Needless to say, the scope of this IA project was huge.

The website, in its existing state, was challenging to navigate. As one customer put it, “It’s hard to find what you need and hard to understand.” MPI contact center staff often found themselves simply guiding customers to the right information online over the phone.

So, in solving such a massive problem, does having an IA strategy work? Ruth says yes! And it can have a huge impact. She backs up her strategy with the results of this project before broadly outlining how she and her team achieved the following improvements.

The project achieved:

  • 37% decrease in time spent on the home page and landing pages
    • Customers found where they needed to go, faster, using the new IA and navigation elements
  • 21% decrease in on-page searches
    • People could find the content they need more easily
  • 53% reduction in callers to MPI saying that they couldn’t find what they needed on the website
    • Users could more easily get information online

Developing an IA strategy 🗺️

Ruth attempts to summarize 14 weeks' worth of work that she and her team delivered in this project.

Step one: Understand the business

During this step, Ruth and her team set out to find exactly what MPI wanted to achieve, what its current state and digital maturity were, what its existing IA was like (and how it was governed), how the site came to be the way it was, and what MPI’s hopes and aspirations were for its digital channels. They conducted:

  • Stakeholder interviews and focus groups
  • A review of many, many documents
  • A domain and analogous search
  • A website review

Step two: Understand the customers

In this step, the team looked at what people want to achieve on the site, their mental models (how they group and label information), their main challenges, and whether or not they understood what MPI does. They conducted:

  • A review of website analytics and user needs
  • In-person interviews and prototype testing
  • Card sorts
  • Intercepts
  • User surveys
  • Treejack testing

Step three: Create the strategy

This talk doesn’t cover strategy development in depth, but Ruth shares some of the most interesting things she learned (outlined below) throughout this project that she’ll take into other IA strategy projects.

Why it matters 🔥

Throughout the project, Ruth felt that there were eight fundamental things that she would advise other teams to do when creating an IA strategy for large organizations with massively diverse customer needs. 

  1. Understand the business first: Their current IA is a window into their soul. It tells us what they value, what’s important to them, and also the stories that they want to tell their customers. By understanding the business, Ruth and her team were able to pinpoint what it was about the current IA that wasn’t working.
  2. Create a customer matrix: Find the sweet spot of efficient and in-depth research. When an organization has a vast array of users and audience needs, it can often seem overwhelming. A customer matrix really helps to nail down who needs what information.
  3. Card sort, then card sort again: Card sorts are the best way to understand how people’s mental models work. They are critical to understanding how information should be organized and labeled, and they are particularly useful when dealing with large and diverse audiences! In the case of the MPI project, card sorts revealed a clear difference between business needs and personal needs, helping to inform the IA.
  4. Involve designers: The earlier the better! User Interface (UI) decisions hugely influence the successful implementation of new IA and the overall user journey. Cross-discipline collaboration is the key to success!
  5. Understand the tech: Your IA choice impacts design and tech decisions (and vice versa). IA and tech choices are becoming increasingly interrelated. Ruth stresses the importance of understanding the tech platforms involved before making IA recommendations and working with developers to ensure your recommendations are feasible.
  6. Stakeholders can be your biggest and best advocates: Build trust with stakeholders early. They really see IA as a reflection of their organization and they care a lot about how it is presented.
  7. IA change drives business change: You can change the story a business tells about itself. Projects like this, which are user-centric and champion audience thinking, can have a positive effect throughout the business, not just for customers. Sometimes internal stakeholders’ thinking needs to change before the final product can.
  8. IA is more than a menu: And your IA strategy should reflect that. IA captures design choices, content strategy, how technical systems can display content, etc.

Your IA strategy needs to consider:

  • Content strategy: How is content produced, governed, and maintained sustainably going forward?
  • Content design: How is content designed and does it support a customer-focused IA?
  • UI and visual design: Does UI and visual design support a customer-focused IA?
  • Technical and functional requirements: Are they technically feasible in the CMS? And what do we need to support the changes, now and into the future?
  • Business process change: How will business processes adapt to maintain IA changes sustainably in the long term?
  • Change management and comms plan: How can we support the dissemination of key changes throughout the business, to key stakeholders, and to customers?

Finally, Ruth reemphasizes that IA is more than just designing a new menu! There’s a lot more to consider when delivering a successful IA strategy that meets the needs of the customer - approach the project in a way that reflects this.

The Role of Usability Metrics in User-Centered Design

The term ‘usability’ captures how usable, useful, enjoyable, and intuitive users perceive a website or app to be. By its very nature, usability is somewhat subjective. But what we’re really looking for when we talk about usability is how well a website can be used to achieve a specific task or goal. With this definition, we can analyze usability metrics (standard units of measurement) to understand how well a user experience design is performing.

Usability metrics provide helpful insights before and after any digital product is launched. They help us form a deeper understanding of how we can design with the user front of mind. This user-centered design approach is considered best practice for building effective information architecture and user experiences that help websites, apps, and software meet and exceed users’ needs.

In this article, we’ll highlight key usability metrics, how to measure and understand them, and how you can apply them to improve user experience.

Understanding Usability Metrics

Usability metrics aim to understand three core elements of usability, namely: effectiveness, efficiency, and satisfaction. A variety of research techniques offer designers an avenue for quantifying usability. Quantifying usability is key because we want to measure and understand it objectively, rather than making assumptions.

Types of Usability Metrics

There are a few key metrics that we can measure directly if we’re looking to quantify effectiveness, efficiency, and satisfaction. Here are four common examples:

  • Success rate: Also known as ‘completion rate’, success rate is the percentage of users who successfully complete a given task.
  • Time-based efficiency: Also known as ‘time on task’, time-based efficiency measures how much time a user needs to complete a certain task.
  • Number of errors: Exactly what it sounds like - the average number of errors a user makes when performing a given task.
  • Post-task satisfaction: Measures a user's general impression or satisfaction after completing (or not completing) a given task.
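
As a rough illustration of how these four metrics are computed, here’s a short Python sketch. The record format and the numbers are invented for the example; a real study would have many more participants and tasks:

```python
from statistics import mean

# One record per participant per task; field names and values are illustrative.
sessions = [
    {"user": "P1", "task": "signup", "completed": True,  "seconds": 74,  "errors": 0, "satisfaction": 6},
    {"user": "P2", "task": "signup", "completed": True,  "seconds": 112, "errors": 2, "satisfaction": 4},
    {"user": "P3", "task": "signup", "completed": False, "seconds": 180, "errors": 5, "satisfaction": 2},
]

success_rate = mean(1 if s["completed"] else 0 for s in sessions) * 100  # completion rate
time_on_task = mean(s["seconds"] for s in sessions)                      # time-based efficiency
errors_per_user = mean(s["errors"] for s in sessions)                    # number of errors
satisfaction = mean(s["satisfaction"] for s in sessions)                 # post-task satisfaction (1-7 scale)

print(f"Success rate: {success_rate:.0f}%")             # 67%
print(f"Mean time on task: {time_on_task:.0f}s")        # 122s
print(f"Errors per user: {errors_per_user:.1f}")        # 2.3
print(f"Post-task satisfaction: {satisfaction:.1f}/7")  # 4.0
```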

How to Collect Usability Metrics


Usability metrics are outputs from research techniques deployed when conducting usability testing. Usability testing in web design, for example, involves assessing how a user interacts with the website by observing (and listening to) users completing defined tasks, such as purchasing a product or signing up for newsletters.

Conducting usability testing and collecting usability metrics usually involves:

  • Defining a set of tasks that you want to test
  • Recruiting test participants
  • Observing participants (remotely or in person)
  • Recording detailed observations
  • Following up with a satisfaction survey or questionnaire
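
For the observation step, a structured, tagged log makes analysis afterwards much easier. Here’s a minimal, tool-agnostic sketch in Python - the field names and example notes are our own illustration:

```python
from datetime import datetime

# A simple observation log. Tagging each note makes it easy to group
# findings across participants during analysis.
observation_log = []

def observe(participant: str, task: str, note: str, tags: list[str]) -> None:
    observation_log.append({
        "time": datetime.now().isoformat(timespec="seconds"),
        "participant": participant,
        "task": task,
        "note": note,
        "tags": tags,
    })

# Example notes from a moderated session (invented for illustration):
observe("P2", "signup", "Hesitated at the pricing step and re-read it twice", ["confusion", "pricing"])
observe("P2", "signup", "Clicked Back after the validation error", ["error", "navigation"])
```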

Tools such as Reframer are helpful for conducting usability tests remotely, and they enable multiple team members to collaborate live - extremely handy when trying to record and organize all those insightful observations! Using paper prototypes is an inexpensive way to test usability early in the design process.

The Importance of Usability Metrics in User-Centered Design

User-centered design challenges designers to put user needs first. This means in order to deploy user-centered design, you need to understand your user. This is where usability testing and metrics add value to website and app performance; they provide direct, objective insight into user behavior, needs, and frustrations. If your user isn’t getting what they want or expect, they’ll simply leave and look elsewhere.

Usability metrics identify which parts of your design aren’t hitting the mark. Knowing where users have trouble completing certain actions, or where they regularly make errors, is vital when implementing user-centered design. In short, user-centered design relies on data-driven user insight.

But why hark on about usability metrics and user-centered design? Because at the heart of most successful businesses is a well-solved user problem. Take Spotify, for example, which solved the problem of dodgy, unreliable pirated music files. People liked having access to free digital music, but they had to battle viruses and fake files to get it. With Spotify, for a small monthly fee, or the cost of listening to a few ads, users have the best of both worlds. The same principle applies to user experience - identify recurring problems, then solve them.

Best Practices for Using Usability Metrics

Usability metrics should be analyzed by design teams of every size. However, there are some things to bear in mind when using usability metrics to inform design decisions:

  • Defining success: Usability metrics are only valuable if they are measured against clearly defined benchmarks. Many tasks and processes are unique to each business, so use appropriate comparisons and targets - usually in the form of an ‘optimized’ user (a user with high efficiency). A simple benchmark check is sketched after this list.
  • Real user metrics: Be sure to test with participants that represent your final user base. For example, there’s little point in testing your team, who will likely be intimately aware of your business structure, terminology, and internal workflows.
  • Test early: Usability testing and the resulting usability metrics have the most impact early in the design process. This usually means testing an early prototype, or even a paper prototype. Early testing helps you avoid significant, unforeseen challenges that could be difficult to unwind in your information architecture.
  • Regular testing: Usability metrics can change over time as user behavior and familiarity with digital products evolve. You should also test and analyze the usability of new feature releases on your website or app.
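
Picking up the “defining success” point above, here’s a minimal Python sketch of checking observed metrics against benchmarks. The target values are purely illustrative - derive your own from an ‘optimized’ user or a previous round of testing:

```python
# Illustrative targets and observations (same figures as the earlier sketch).
benchmarks = {"success_rate": 80.0, "time_on_task": 90.0, "errors_per_user": 1.0}
observed = {"success_rate": 67.0, "time_on_task": 122.0, "errors_per_user": 2.3}

for metric, target in benchmarks.items():
    value = observed[metric]
    # Higher is better for success rate; lower is better for the other two.
    ok = value >= target if metric == "success_rate" else value <= target
    print(f"{metric}: {value} (target {target}) -> {'on target' if ok else 'needs work'}")
```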

Remember, data analysis is only as good as the data itself. Give yourself the best chance of designing exceptional user experiences by collecting, researching, and analyzing meaningful and accurate usability metrics.

Conclusion

Usability metrics are a guiding light when it comes to user experience. As the old saying goes, “you can’t manage what you can’t measure”. By including usability metrics in your design process, you invite direct user feedback into your product. This is ideal because we want to leave any assumptions or guesswork about user experience at the door.

User-centered design inherently relies on constant user research. Usability metrics such as success rate, time-based efficiency, number of errors, and post-task satisfaction will highlight potential shortcomings in your design. Subsequently, they identify where improvements can be made, AND they lay down a benchmark to check whether any resulting updates addressed the issues.

Ready to start collecting and analyzing usability metrics? Check out our guide to planning and running effective usability tests to get a head start!
