August 16, 2022

Why information architecture is the foundation of UX

Ever wondered what the relationship is between information architecture (IA) and UX? Simply put, IA is the foundation of UX. We outline why.

What is Information Architecture? 🛠️

According to Abby Covert, a leader in the field of information architecture, IA is ‘the way we arrange the parts to make sense of the whole.’ This can relate to a website, a retail store or an app. And you could even consider the way a library is sorted to be information architecture. For the purposes of this article, we will focus on digital products (apps or websites).

Well-organized information architecture is fundamentally important to the success of your product. As a designer, knowing what content you are delivering and how is fundamental to creating a UX that performs: working with the needs of the organization while meeting the requirements of users in a meaningful and delightful way. Organizing and structuring information so that navigating, searching, and understanding your product is seamless is ultimately what UX design is all about. Arranging the parts to make sense of the whole, you could say.

While design is about creating visual pointers for users to find their way, information architecture can be broken down into 3 main areas to consider when building a great user experience:

  • Navigation: How people make their way through information
  • Labels: How information is named and represented
  • Search: How people will look for information (keywords, categories)

When put like this it does seem pretty straightforward. Maybe even simple? But these tasks need to be straightforward for your users. Putting thought, time, and research at the front of your design and build can increase your chances of delivering an intuitive product. In fact, at any point in your product’s life cycle, it’s worth testing and reviewing these 3 areas.  

Key things to consider to build an effective IA for UX 🏗

Developing a well-thought-out and researched information architecture for your product could be considered a foundational step in creating great UX. To help you on your way, here are 6 key things to consider when building effective information architecture for a great user experience.

  1. Define the goals of your organization: Before starting your IA plan, uncover the purpose of your product and how it aligns with the goals of your stakeholders.
  2. Figure out your users’ goals: Who do you want to use your product? Create scenarios, talk with likely users, and find out what they’ll use your product for and how they’ll use it.
  3. Study your competitors: Take note of websites, apps, and other digital products that are similar to yours and look at their information architecture from a UX point of view. How does the design work with the IA? Is it simple to navigate? Is it easy to find what to do next? Look at how key information is designed and displayed.
  4. Draw a site map: Once the IA is planned and developed and the content is ready, it’s time to figure out how users are going to access all of your information. Spend time planning navigation that isn’t overly complex and that helps users browse your product easily.
  5. Cross-browser testing: Your information architecture’s behavior may vary from one browser to another, so it’s worth doing some cross-browser compatibility testing (see the sketch after this list). It would be very disappointing to work so hard on the best UX for your product, only to be let down by browser inconsistencies.
  6. Usability testing: End users are the perfect people to let you know how your product is performing. Set up a testing/staging environment and test with external users. Observing participants as they make their way through your product uninterrupted, and listening to their opinions, can shed light on the successes (and failures) in a very insightful way.
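
On the cross-browser point above: as a rough illustration, here’s a minimal sketch of what an automated spot check might look like using Playwright’s Python API. The URL and the navigation selector are placeholders rather than anything from a real product; adapt the checks to whichever parts of your IA matter most (navigation labels, search, page titles).

```python
# Minimal cross-browser spot check with Playwright (pip install playwright,
# then run `playwright install` once to download the browser engines).
# URL and NAV_SELECTOR are placeholders -- swap in your own product.
from playwright.sync_api import sync_playwright

URL = "https://example.com"   # hypothetical product URL
NAV_SELECTOR = "nav a"        # assumed selector for top-level navigation links

with sync_playwright() as p:
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto(URL)
        # Collect the visible navigation labels in this engine; differences
        # between engines often point to layout or rendering issues worth a look.
        labels = page.locator(NAV_SELECTOR).all_inner_texts()
        print(f"{browser_type.name}: title={page.title()!r} nav={labels}")
        browser.close()
```

A quick check like this won’t replace a manual review, but it surfaces obvious rendering differences cheaply.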

Wrap Up 🌯

Information architecture is the foundation of designing a great product that meets (or even exceeds) your users’ needs, wants, and desires. By balancing an organization’s needs with insight into what users actually want, you’re well equipped to design an information architecture that helps build a product that delivers a positive user experience. Research, test, research, and test again should be the mantra throughout the development, design, and implementation of your product and beyond.

Related articles


Chris Green: Jobs To Be Done methodology and its role in driving customer choice

Innovation is at the core of revenue growth - finding new ways to create and capture value. The reason most innovations fail is not because they don’t work (organizations are very good at building products and services with features and benefits), they fail because they don’t create value on dimensions that drive customer choice. If you don’t understand the causal drivers of customer choice, then you’re largely shooting in the dark and at risk of creating something that customers don’t choose above the alternative market solutions.

Chris Green, Head of CX and Innovation at Purple Shirt, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, about the Jobs to be Done (JTBD) methodology and uncovering the causal drivers of customer choice in innovation.

In his talk, Chris walks us through the JTBD methodology, how to use it, and how it will change the way you think about markets and competition.

Background on Chris Green

Chris has a long and deep background in strategy and innovation. He cut his strategy teeth in the UK before moving to New Zealand in 2000, where he led various strategy teams for organizations like Vodafone, Vector, and TelstraClear. He moved to Australia in 2011, where he started to develop his expertise in the emerging field of innovation. He sharpened his innovation knowledge and skills by studying under Professor Clayton Christensen (the godfather of modern innovation theory) at Harvard University and went on to lead one of Australia's leading innovation consultancies, where he helped organizations run innovation projects and build innovation capability.

Chris returned to New Zealand at the end of 2021 to lead the innovation practice of Purple Shirt, a UX design consultancy with offices in Auckland and Christchurch. In his spare time, you'll find Chris out on the water learning about foiling boats and boards.

Contact Details:

Email: chris@purpleshirt.co.nz

LinkedIn: https://www.linkedin.com/in/chris-green-kiwi/

Jobs To Be Done methodology and its role in driving customer choice

In this talk, Chris is specifically speaking about UX research in the context of building new products and services, not optimizing existing ones. He answers a critical question - how can we improve our odds of success when we launch something new to market?

Performing UX research for products and services that already exist is very different from researching totally new ones. Why? Generally, it’s because customers of existing products are good at recommending improvements for things that they already know and use. They are good at this because they have user experience to draw from. The famous Henry Ford quote illustrates this well: “If I’d asked our customers what they wanted, they would have told me faster horses.”

Just because customers are giving researchers helpful and constructive feedback on a product or service, it doesn’t mean you should implement these improvements. In a user-focused discipline this can sound counterintuitive, but when it comes to new products and services, UX researchers should be careful about relying on user feedback absolutely.

Chris argues that customer feedback can sometimes lead us in the wrong direction. Assuming that a customer will choose our product if we simply implement their feedback is problematic. Chris stresses the difference between implementing changes that drive improvement versus implementing changes that drive customer choice. They aren’t the same thing. Many businesses continually release new features, but rarely do these new features drive or improve consumer choice. Yes, a new feature may make the product better than before, but does it make it so much better that it makes customers choose your product over others? 

Causal drivers of choice 🤔

When researching new products the most critical thing to understand is this - what causes someone to choose one product over another? If you don’t know the answer, you’re guessing about your product design from the very beginning. 

Traditionally, market research (typically driven by marketing departments) has been poor at finding causation. Market research tends to find correlations between customer attributes and customer behavior (e.g. people in a certain age bracket buy a certain product), but these correlations are quite shallow and do little to inform true drivers of choice. A lack of causal studies can be explained by the fact that they are difficult to conduct. They need to uncover deeper, root causes of human behavior, rather than high-level trends to be truly useful.

So, how can we find the causal drivers of choice? And why does it matter?

Why it matters 🔥

The best method for uncovering the causal drivers of choice was invented by Professor Clayton Christensen. Chris describes him as the guru of modern innovation theory. He invented Disruption Theory and the Jobs to be Done (JTBD) methodology. His fundamental insight was this – researchers shouldn’t be focused on the customer themselves; instead, they should be interested in what the customer is trying to achieve.

Christensen’s JTBD methodology is about understanding the various things that people need to complete in certain contexts. He argues that we, as consumers and customers, all look to “hire” products and services from businesses to get things done. We make a decision to buy, hire, or lease products or services into our lives in order to make progress on something we’re trying to achieve. 

These jobs to be done can be split broadly into three categories (which aren’t mutually exclusive):

  • Functional: Tasks that I want to complete
  • Emotional: How I want to feel
  • Social: How I want to be seen

Value creation opportunities arise when the currently available solutions (products/services in the market) are not getting the jobs done well. This “gap” essentially represents struggles and challenges that get in the way of progress. The gap is our opportunity to build something new that helps people get their jobs done better.

Chris uses Dropbox as a good example of an innovative company filling the gap and addressing a need for people. People found themselves “hiring” different solutions or workarounds to access their files anywhere (e.g. by emailing themselves and using USBs). Dropbox created a solution that addressed this by allowing people to store their files online and access them anywhere. This solution got the job done better by being more convenient, secure, and reliable.

The strategic relevance of “jobs” 🙌💼🎯

Using the JTBD methodology helps to change how you see the competitive landscape, thereby providing an opportunity to see growth where none might have seemed possible. 

Chris uses Snickers and MilkyWay chocolate bars as examples of similar products that on the surface seem to compete against each other. Both seem to sit in the same category, are bought in the same aisle, and have similar ingredients. However, looking at them through a “jobs” lens, they address two slightly different jobs. A Snickers is bought when you need fuel and is more a replacement for a sandwich, apple, or Red Bull (i.e. it is a product “hired” to prepare for the future/get an energy hit). A MilkyWay on the other hand is bought to make people feel better, eat emotionally, and is more of a replacement for ice cream or wine (i.e. a product “hired” to cope with the past).

Chris’s talk helps us to think more strategically about our design journey. To develop truly new and innovative products and services, don’t just take your users' feedback at face value. Look beyond what they’re telling you and try to see the jobs that they’re really trying to accomplish. 


The Role of Usability Metrics in User-Centered Design

The term ‘usability’ captures how usable, useful, enjoyable, and intuitive users perceive a website or app to be. By its very nature, usability is somewhat subjective. But what we’re really looking for when we talk about usability is how well a website can be used to achieve a specific task or goal. Using this definition, we can analyze usability metrics (standard units of measurement) to understand how well a user experience design is performing.

Usability metrics provide helpful insights before and after any digital product is launched. They help us form a deeper understanding of how we can design with the user front of mind. This user-centered design approach is considered best practice in building effective information architecture and user experiences that help websites, apps, and software meet and exceed users' needs.

In this article, we’ll highlight key usability metrics, how to measure and understand them, and how you can apply them to improve user experience.

Understanding Usability Metrics

Usability metrics aim to understand three core elements of usability, namely: effectiveness, efficiency, and satisfaction. A variety of research techniques offer designers an avenue for quantifying usability. Quantifying usability is key because we want to measure and understand it objectively, rather than making assumptions.

Types of Usability Metrics

There are a few key metrics that we can measure directly if we’re looking to quantify effectiveness, efficiency, and satisfaction. Here are four common examples:

  • Success rate: Also known as ‘completion rate’, success rate is the percentage of users who successfully complete a given task.
  • Time-based efficiency: Also known as ‘time on task’, time-based efficiency measures how much time a user needs to complete a certain task.
  • Number of errors: Just what it sounds like - the average number of errors a user makes while performing a given task.
  • Post-task satisfaction: Measures a user's general impression or satisfaction after completing (or not completing) a given task.
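
To make these concrete, here’s a minimal sketch of how the four metrics above could be computed from raw session records. The record fields (completed, time_seconds, errors, satisfaction) are our own illustrative naming, not any particular tool’s export format.

```python
# Computing the four usability metrics above from hypothetical session records.
from dataclasses import dataclass

@dataclass
class Session:
    completed: bool      # did the participant finish the task?
    time_seconds: float  # time spent on the task
    errors: int          # errors made while attempting the task
    satisfaction: int    # post-task satisfaction rating, 1 (low) to 5 (high)

sessions = [
    Session(True, 42.0, 0, 5),
    Session(True, 75.5, 2, 3),
    Session(False, 120.0, 4, 2),
]

n = len(sessions)
success_rate = 100 * sum(s.completed for s in sessions) / n   # % of users completing the task
avg_time_on_task = sum(s.time_seconds for s in sessions) / n  # mean time spent per attempt
avg_errors = sum(s.errors for s in sessions) / n              # mean errors per user
avg_satisfaction = sum(s.satisfaction for s in sessions) / n  # mean post-task rating

print(f"Success rate: {success_rate:.0f}%")
print(f"Time on task: {avg_time_on_task:.1f}s | Errors: {avg_errors:.1f} | Satisfaction: {avg_satisfaction:.1f}/5")
```

With the three sample sessions this prints a 67% success rate; in practice you’d compute these per task and compare them against your own benchmarks.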

How to Collect Usability Metrics


Usability metrics are outputs from research techniques deployed when conducting usability testing. Usability testing in web design, for example, involves assessing how a user interacts with the website by observing (and listening to) users completing defined tasks, such as purchasing a product or signing up for newsletters.

Conducting usability testing and collecting usability metrics usually involves:

  • Defining a set of tasks that you want to test
  • Recruiting test participants
  • Observing participants (remotely or in person)
  • Recording detailed observations
  • Running a follow-up satisfaction survey or questionnaire

Tools such as Reframer are helpful for conducting usability tests remotely, and they enable live collaboration between multiple team members - extremely handy when trying to record and organize those insightful observations! Using paper prototypes is an inexpensive way to test usability early in the design process.

The Importance of Usability Metrics in User-Centered Design

User-centered design challenges designers to put user needs first. This means in order to deploy user-centered design, you need to understand your user. This is where usability testing and metrics add value to website and app performance; they provide direct, objective insight into user behavior, needs, and frustrations. If your user isn’t getting what they want or expect, they’ll simply leave and look elsewhere.

Usability metrics identify which parts of your design aren’t hitting the mark. Recognizing where users are having trouble completing certain actions, or where they are regularly making errors, is a vital insight when implementing user-centered design. In short, user-centered design relies on data-driven user insight.

But why harp on about usability metrics and user-centered design? Because at the heart of most successful businesses is a well-solved user problem. Take Spotify, for example, which solved the problem of dodgy, unreliable pirated digital files. People liked access to free digital music, but they had to battle viruses and fake files to get it. With Spotify, for a small monthly fee, or the cost of listening to a few ads, users have the best of both worlds. The same principle applies to user experience - identify recurring problems, then solve them.

Best Practices for Using Usability Metrics

Usability metrics should be analyzed by design teams of every size. However, there are some things to bear in mind when using usability metrics to inform design decisions:

  • Defining success: Usability metrics are only valuable if they are measured against clearly defined benchmarks. Many tasks and processes are unique to each business, so use appropriate comparisons and targets, usually in the form of an ‘optimized’ user (a user with high efficiency).
  • Real user metrics: Be sure to test with participants that represent your final user base. For example, there’s little point in testing your team, who will likely be intimately aware of your business structure, terminology, and internal workflows.
  • Test early: Usability testing and the resulting usability metrics provide the most impact early in the design process. This usually means testing an early prototype or even a paper prototype. Early testing helps you avoid significant, unforeseen challenges that could be difficult to reverse later in your information architecture.
  • Regular testing: Usability metrics can change over time as user behavior and familiarity with digital products evolve. You should also test and analyze the usability of new feature releases on your website or app.

Remember, data analysis is only as good as the data itself. Give yourself the best chance of designing exceptional user experiences by collecting, researching, and analyzing meaningful and accurate usability metrics.

Conclusion

Usability metrics are a guiding light when it comes to user experience. As the old saying goes, “you can’t manage what you can’t measure”. By including usability metrics in your design process, you invite direct user feedback into your product. This is ideal because we want to leave any assumptions or guesswork about user experience at the door.

User-centered design inherently relies on constant user research. Usability metrics such as success rate, time-based efficiency, number of errors, and post-task satisfaction will highlight potential shortcomings in your design. Subsequently, they identify where improvements can be made, AND they lay down a benchmark to check whether any resulting updates addressed the issues.

Ready to start collecting and analyzing usability metrics? Check out our guide to planning and running effective usability tests to get a head start!


Unmoderated usability testing: a checklist

In-person moderated user testing is a valuable part of any research project, letting you see first-hand how your users interact with your prototypes and products. But in-person isn’t always a viable option. What do you do if your project needs user testing but it’s just not possible to get in front of your users personally?

Let’s talk unmoderated user testing. This approach sidesteps the need to meet your participants face-to-face as it’s done entirely remotely, over the internet. By its very nature, unmoderated user testing also brings considerable benefits.

What is unmoderated user testing? 💻👀

In the most basic sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of having a facilitator guide participants through the test, participants complete the testing activity by themselves and in their own time. For the most part, everything else stays the same.

The key differences are:

  • You can’t ask follow-up questions
  • You can’t use low-fidelity prototypes
  • You can’t support participants (beyond the initial instructions you send them).

Is unmoderated user testing right for your research project?

By nature, unmoderated user research does not include any direct interaction between the researcher and the study participants. This is really the biggest benefit and also the biggest drawback. 

Benefits of unmoderated usability testing 👩🏻💻

  • Speed and turnaround - As there is no need to schedule meetings with each participant, unmoderated testing is usually much faster to initiate and complete. Depending on the study, it may be possible to launch a study and receive results in just a few hours.
  • Size of study (participant numbers) - Unmoderated user testing also allows you to collect feedback from dozens or even hundreds of users at the same time.
  • Location (local and/or international) - Testing online removes the reliance on participants being physically present, which makes it easier to reach participants across your country or around the globe.

If you’d like to know more about the benefits of unmoderated usability testing, take a look at our article five reasons you should consider unmoderated user testing.

Limitations of unmoderated usability testing 🚧

  • Early-prototype testing is difficult without a moderator to explain and help participants recover from errors or limitations of the prototype.
  • Participant behavior - Without a moderator, participants tend to be less engaged and behave less realistically in tasks that depend on imagination, decision-making, or emotional responses.
  • Inability to ask follow-up questions - By not being in the session with the participant, the facilitator can’t ask further questions to get a deeper understanding of the participant’s reasoning. Because you can’t rely on a moderator’s human judgment in the room, or the ability to adjust the test in the moment, unmoderated usability testing needs thorough up-front planning.

Because of these limitations, unmoderated usability testing usually works best for evaluating live websites and apps or highly functional prototypes. It’s great for testing activities that don’t require a lot of imagination or emotion from participants, such as testing functionality or answering direct questions about your product.

What’s involved when setting up unmoderated usability testing? 🤔💭

  1. Define testing goals

With any usability testing, it pays to define your goals before getting underway with setting up the software. What do you want to know from the participants? Goals vary from test to test. Understanding your goals upfront will help you to make the correct tool choice.


  2. Define your demographic

With a clear understanding of your goal, now it’s time to consider which participants are right for your study. Think about who they are, their demographic, and where they live. Are they new users or existing? Are they experts or novices?

  3. Select testing software

As unmoderated studies are done remotely, the software used to facilitate the study plays a key role in ensuring you get useful results. Without a facilitator, the software must guide the participants through the session and record what happens. Take the time to test software and select one that is right for your study.

  4. Write your own tasks and questions

Think through your goals and what you want to achieve from the testing. Many of the unmoderated testing services include study templates with generic example tasks. Remember they are templates, and your tasks and questions should be specific to your particular study. Any task instructions guiding the participants should be clear and directive.

  5. Run a trial session

You’ve done all of the upfront work; now it’s time to check that everything works: the software does what you expect, and the instructions you have written can be followed. Doing a test run is crucial, especially with unmoderated usability testing, as there won’t be a facilitator in the session to fix any problems.

  6. Recruit participants

Having defined your target audience and demographic, now is the time to recruit participants. Ensuring you have some control over the recruitment process is important, either through screening questions or by recruiting your own participants. There are also services that recruit from a pool of willing participants, which can be a great way to get a wide range of users.

  7. Analyze results

You are likely to accumulate a lot of data from your unmoderated testing, so you’ll need a way to organize and analyze it to derive valuable insights. The type of results will vary depending on the type of usability testing you do. Quantitative testing gives data-driven results and direct answers, whereas qualitative testing, through audio or video recordings of participants’ actions and comments, takes more time to analyze for behavioral observations.
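
As a rough illustration of the quantitative side, here’s a small sketch that summarizes an unmoderated study’s results per task. The CSV columns (participant, task, completed, time_seconds) are hypothetical; most tools export something similar, but check your own export format.

```python
# Summarizing per-task results from a hypothetical unmoderated-study export.
import io
import pandas as pd

csv_export = io.StringIO(
    "participant,task,completed,time_seconds\n"
    "p1,find_pricing,1,34\n"
    "p2,find_pricing,0,98\n"
    "p1,update_profile,1,61\n"
    "p2,update_profile,1,45\n"
)

df = pd.read_csv(csv_export)
summary = df.groupby("task").agg(
    completion_rate=("completed", "mean"),   # share of attempts completed (1 = success)
    median_time=("time_seconds", "median"),  # median time on task, in seconds
    attempts=("participant", "count"),       # number of attempts recorded
)
print(summary)
```

A table like this makes it easy to spot tasks with low completion rates or unusually long times, so you can focus your qualitative review where it matters most.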

Wrap Up 🌯

Unmoderated usability testing can be a good option for your study. It may not be right for all of your studies all of the time. While it can be quick to implement and often cheaper than moderated usability testing, it still requires time and planning to ensure you get the data insights you are looking for. Following a checklist can be a great way to ensure you approach your research methodically.
