
Democratizing UX research: empowering cross-functional teams

In today's fast-paced product development landscape, the ability to quickly gather and act on user insights is more critical than ever. While dedicated UX researchers play a crucial role, there's a growing trend towards democratizing UX research – empowering team members across various functions to contribute to and benefit from user insights. Let's explore how democratization can transform your organization's approach to user-centered design.

Benefits of a democratized UXR approach

Democratizing UX research is a transformative approach that empowers organizations to unlock the full potential of user insights. By breaking down traditional barriers and involving a broader range of team members in the research process, companies can foster a culture of user-centricity, accelerate decision-making, and drive innovation. This inclusive strategy not only enhances the depth and breadth of user understanding but also aligns diverse perspectives to create more impactful, user-friendly products and services. Here are a few of the benefits of this movement:

Increased research velocity

By enabling more team members to conduct basic research, organizations can gather insights more frequently and rapidly. This means that instead of waiting for dedicated UX researchers to be available, product managers, designers, or marketers can quickly run simple surveys or usability tests. For example, a product manager could use a user-friendly tool to get quick feedback on a new feature idea, allowing the team to iterate faster. This increased velocity helps organizations stay agile and responsive to user needs in a fast-paced market.

Broader perspective

Cross-functional participation brings diverse viewpoints to research, potentially uncovering insights that might be missed by specialized researchers alone. A developer might ask questions from a technical feasibility standpoint, while a marketer might focus on brand perception. This diversity in approach can lead to richer, more comprehensive insights. For instance, during a user interview, a sales team member might pick up on specific pain points related to competitor products that a UX researcher might not have thought to explore.

Enhanced user-centricity

When more team members engage directly with users, it fosters a culture of user-centricity across the organization. This direct exposure to user feedback and behaviors helps all team members develop empathy for the user. As a result, user needs and preferences become a central consideration in all decision-making processes, not just in UX design. For example, seeing users struggle with a feature firsthand might motivate a developer to champion user-friendly improvements in future sprints.

Improved research adoption

Team members who participate in research are more likely to understand and act on the insights generated. When people are involved in gathering data, they have a deeper understanding of the context and nuances of the findings. This personal investment leads to greater buy-in and increases the likelihood that research insights will be applied in practical ways. For instance, a product manager who conducts user interviews is more likely to prioritize features based on actual user needs rather than assumptions.

Resource optimization

Democratization allows dedicated researchers to focus on more complex, high-value research initiatives. By offloading simpler research tasks to other team members, professional UX researchers can dedicate their expertise to more challenging projects, such as longitudinal studies, complex usability evaluations, or strategic research initiatives. This optimization ensures that specialized skills are applied where they can have the most significant impact.

Our survey revealed that organizations with a more democratized approach to UXR tend to have higher levels of research maturity and integration into product development processes. This correlation suggests that democratization not only increases the quantity of research conducted but also enhances its quality and impact. Organizations that empower cross-functional teams to participate in UXR often develop more sophisticated research practices over time.

For example, these organizations might:

  • Have better-defined research processes and guidelines
  • Integrate user insights more consistently into decision-making at all levels
  • Develop more advanced metrics for measuring the impact of UXR
  • Foster a culture where challenging assumptions with user data is the norm
  • Create more opportunities for collaboration between different departments around user insights

By democratizing UXR, organizations can create a virtuous cycle where increased participation leads to better research practices, which in turn drives more value from UXR activities. This approach helps to embed user-centricity deeply into the organizational culture, leading to better products and services that truly meet user needs.

Strategies for upskilling people who do research (PWDRs)

To successfully democratize UXR, it's crucial to provide proper training and support:

1. UXR basics workshops

Offer regular training sessions on fundamental research methods and best practices. These workshops should cover a range of topics, including:

  • Introduction to user research methodologies (e.g., interviews, surveys, usability testing)
  • Basics of research design and planning
  • Participant recruitment strategies
  • Data analysis techniques
  • Ethical considerations in user research

For example, a monthly "UXR 101" workshop could be organized, where different aspects of UX research are covered in depth. These sessions could be led by experienced researchers and include practical exercises to reinforce learning.

Check out our 101 Guides

2. Mentorship programs

Pair non-researchers with experienced UX researchers for guidance and support. This one-on-one relationship allows for personalized learning and hands-on guidance. 

Mentors can:

  • Provide feedback on research plans
  • Offer advice on challenging research scenarios
  • Share best practices and personal experiences
  • Help mentees navigate the complexities of user research in their specific organizational context

A formal mentorship program could be established with clear goals, regular check-ins, and a defined duration (e.g., 6 months), after which mentees could become mentors themselves, scaling the program.

3. Research playbooks

Develop standardized templates and guidelines for common research activities. These playbooks serve as go-to resources for non-researchers, ensuring consistency and quality across studies. 

They might include:

  • Step-by-step guides for different research methods
  • Templates for research plans, screeners, and report structures
  • Best practices for participant interaction
  • Guidelines for data privacy and ethical considerations
  • Tips for presenting and socializing research findings

For instance, a "Usability Testing Playbook" could walk a product manager through the entire process of planning, conducting, and reporting on a usability test.

Check out Optimal Playbooks

4. Collaborative research

Involve non-researchers in studies led by experienced UX professionals to provide hands-on learning opportunities.

This approach allows non-researchers to:

  • Observe best practices in action
  • Contribute to real research projects
  • Understand the nuances and challenges of UX research
  • Build confidence in their research skills under expert guidance

For example, a designer could assist in a series of user interviews, gradually taking on more responsibility with each session under the researcher's supervision.

5. Continuous learning resources

Provide access to online courses, webinars, and industry events to foster ongoing skill development. This could include:

  • Subscriptions to UX research platforms and tools
  • Access to online course libraries (e.g., Coursera, LinkedIn Learning)
  • Budget for attending UX conferences and workshops
  • Internal knowledge sharing sessions where team members present on recent learnings or projects

An internal UX research resource hub could be created, curating relevant articles, videos, and courses for easy access by team members.

As one UX leader in our study noted, "It's been exciting to see [UXR] evolve as a discipline and see where it is today, and to see the various backgrounds and research specialisms that [user] researchers have today is not something I'd have expected."

This quote highlights the dynamic nature of UX research and the diversity it now encompasses. The field has evolved to welcome practitioners from various backgrounds, each bringing unique perspectives and skills. This diversity enriches the discipline and makes it more adaptable to different organizational contexts.

For example:

  • A former teacher might excel at educational research for EdTech products
  • A psychologist could bring deep insights into user behavior and motivation
  • A data scientist might introduce advanced analytical techniques to UX research

By embracing this diversity and providing comprehensive support for skill development, organizations can create a rich ecosystem of UX research capabilities. This not only democratizes the practice but also elevates its overall quality and impact.

The key to successful democratization lies in balancing accessibility with rigor. While making UX research more widely practiced, it's crucial to maintain high standards and ethical practices. The strategies outlined above help achieve this balance by providing structure, guidance, and ongoing support to those new to UX research, while leveraging the expertise of experienced researchers to ensure quality and depth in the organization's overall research efforts.

Tools and platforms enabling broader participation

The democratization of UXR has been greatly facilitated by comprehensive, user-friendly research platforms like Optimal Workshop. Our all-in-one solution offers a suite of tools designed to empower both seasoned researchers and non-researchers alike:

Surveys

Our intuitive survey creation tool allows anyone in your organization to quickly design and distribute surveys. With customizable templates and an easy-to-use interface, gathering user feedback has never been simpler.

Tree Testing and Card Sorting

These powerful tools simplify the process of conducting information architecture and card sorting studies. Non-researchers can easily set up and run tests to validate navigation structures and content organization.

Qualitative Insights

Our powerful qualitative analysis tool enables team members across your organization to efficiently analyze and synthesize user interview data. With its user-friendly interface, our Qualitative Insights tool makes deriving meaningful insights from qualitative research accessible to researchers and non-researchers alike.

First-click Testing

This easy-to-use first-click testing tool empowers anyone in your team to quickly set up and run tests to evaluate the effectiveness of their designs. First-click Testing simplifies the process of gathering initial user impressions, allowing for rapid iteration and improvement of user interfaces.

These tools, integrated into a single, user-friendly platform, make it possible for non-researchers to conduct basic studies and contribute to the overall research effort without extensive training. The intuitive design of the Optimal Workshop UXR and insights platform ensures that team members across different functions can easily engage in user research activities, from planning and execution to analysis and sharing of insights.

By providing a comprehensive, accessible platform, Optimal Workshop plays a crucial role in democratizing UX research, enabling organizations to build a more user-centric culture and make data-driven decisions at all levels.

Balancing democratization with expertise

While democratizing UXR offers numerous benefits, it's crucial to strike a balance with professional expertise. This balance involves establishing quality control measures, reserving complex research initiatives for trained professionals, maintaining strategic oversight by experienced researchers, providing clear guidelines on research ethics and data privacy, and leveraging dedicated researchers' expertise for insight synthesis. 

Our survey revealed that organizations successfully balancing democratization with expertise tend to see the highest impact from their UXR efforts. The goal of democratization is not to replace dedicated researchers but to expand the organization's capacity for generating user insights. By empowering cross-functional teams to participate in UXR, companies can foster a more user-centric culture, increase the velocity of insight generation, and ultimately create products that better meet user needs. 

As we look to the future, the trend towards democratization is likely to continue, and organizations that can effectively balance broad participation with professional expertise will be best positioned to thrive in an increasingly user-centric business landscape.

Ready to democratize your UX research? Optimal Workshop's platform empowers your entire team to contribute to user insights while maintaining professional quality. Our intuitive tools accelerate research velocity and foster a user-centric culture. 

Start your free trial today and transform your UXR practice. 


Workspaces delivers new privacy controls and improved collaboration

Improved organization, privacy controls, and more with new Workspaces 🚀

One of our key priorities in 2024 is making it easier for large organizations to manage teams in Optimal Workshop and collaborate more effectively on delivering optimal digital experiences. Workspaces goes live this week, replacing teams and introducing projects and folders for improved organization and privacy controls. Our latest release lays the foundations to provide more control over managing users, licenses, and user roles in the app in the near future.

More control with project privacy 🔒

Private projects allow greater flexibility over who can see what in your workspace, with the ability to make projects public or private and to manage who can access a project. Find out more about how to set up private projects in this help article.

What changes for Enterprise customers? 😅

  • The teams you have set up today will remain the same; they are simply renamed workspaces.
  • Studies will be moved to a 'Default project' within the new workspace. From here, you can decide how you would like to organize your studies and access to them.
  • You can create new projects, move studies into them, and use the new privacy features to control who has access to studies, or leave them as public access.
  • Optimal Workshop is here to help: if you would like to review your account structure and make changes, please reach out to your Customer Success Manager.

Watch the video 🎞️

What changes for Professional and Team customers? 😨

Customers on either a Professional or Team plan will notice that the studies tab is now called Workspace. We have introduced another layer of organization called projects, and there is a new-look sidebar on the left to create projects, folders, and studies.

What's next for Workspaces? 🔮

This new release is an essential step towards improving how we manage users, licenses, and different role types in Optimal Workshop. We hope to deliver more updates, such as the ability to move studies between workspaces, in the near future. If you have any feedback or ideas you want to share on workspaces or Optimal Workshop, please email product@optimalworkshop.com; we'd love to hear from you.


Ella Stoner: A three-step tool to help designers break down the barriers of technical jargon

Designing in teams with different stakeholders can be incredibly complex. Each person looks at projects through their own lens, and can potentially introduce jargon and concepts that are confusing to others. Simplicity advocate Ella Stoner knows this scenario all too well. It’s what led her to create an easy three-step tool for recognizing problems and developing solutions. By getting everyone on the same page and creating an understanding of what the simplest solution is, designers can create products with customer needs in mind.

Ella’s background

Ella Stoner is a CX Designer at Spark in New Zealand. She is a creative thought leader and a talented designer who has facilitated over 50 human-centered design workshops. Ella and her team have developed a cloud product that enables businesses to connect with public cloud services such as Amazon Web Services, Google Cloud, and Microsoft Azure in a human-centric way. She brings a simplicity-first approach to her work that is reflected in her UX New Zealand talk. It’s about cutting out complex details to establish an agreed starting point that is easily understood by all team members.

Contact Details:

You can find Ella on LinkedIn.

Improving creative confidence 🤠

Ella is confident that she is not the only designer who has felt overwhelmed by technical and industry-specific jargon in product meetings. For example, on Ella’s first day as a designer with Spark, she attended a meeting about an HSNS (High Speed Network Services) tool. Ella attempted to use context clues to predict what HSNS could mean. However, as the meeting went on, the technical and industry-specific jargon built on itself and Ella struggled to follow what was being said. At one point Ella asked the team to clarify this mysterious term:

“What’s an HSNS and why would the customer use it?” she asked. Much to her surprise, the room was completely silent. The team struggled to answer a basic question about a term that appeared to be common knowledge during the meeting. There’s a saying: “Why do something simply when you can make it as complicated as possible?” This happens all too often: people and teams struggle to communicate with each other, and the result is projects and products that customers don’t understand and can’t use.

Ella’s In A Nutshell tool is designed to cut through all that. It creates a baseline starting point that’s understood by all, cuts out jargon, and puts the focus squarely on the customer. It:

  • condenses language and jargon to its simplest form
  • translates everything into common language
  • flips it back to the people who’ll be using it.

Here’s how it works:

First, you complete this phrase as it pertains to your work: “In a nutshell, (project/topic) is (describe what the project or topic is in a few words), that (state what the project/topic does), for (indicate key customer/users and why).” For this method to work, each of the four categories you insert must be simple and understandable. All acronyms, complex language, and technical jargon must be avoided. In a literal sense, anyone reading the statement should be able to understand what is being said “in a nutshell.” When you’ve done this, you’ll have a statement that can act as a guide for the goals your project aims to achieve.
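To make this concrete, here’s a hypothetical statement for the HSNS example from earlier (the wording is illustrative only, not Spark’s actual definition): “In a nutshell, HSNS is a high-speed network service that delivers fast, reliable connectivity for business customers who need more capacity than a standard connection.”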

Why it matters 🤔

Applying the “In A Nutshell” tool doesn’t take long. However, it's important to write this statement as a team. Ideally, it’s best to write the statement at the start of a project, but you can also write it in the middle if you need to create a reference point, or any time you feel technical jargon creeping in.

Here’s what you’ll need to get started:

  • People with three or more role types (this accommodates varying perspectives to ensure the statement is as relevant as possible)
  • A way to capture text (e.g., a whiteboard, Slack channel, or Miro board)
  • An easy voting system (e.g., thumbs up in a chat)

Before you start, you may need to pitch the idea to someone in a technical role. If you’re feeling lost or confused, chances are someone else will be too. Breaking down the technical concepts into easy-to-understand and digestible language is of utmost importance:

  1. Explain the formula to the team.
  2. Individually brainstorm possible answers for each gap for three minutes.
  3. Put every idea up on the board or channel and vote on the best one.

Use the most popular answers as your final “In a Nutshell” statement.

Side note: Keep all the options that come through the brainstorm. They can still be useful in the design process to help form a full picture of what you’re working on, what it should do, who it should be for, and so on.


Our latest feature session replay has landed 🥳

What is session replay?

Session replay allows you to record participants completing a card sort without the need for plug-ins or integrations. This great new feature captures each participant's interactions and creates a recording that you can view in your own time. It’s a great way to identify where users may have struggled to categorize information, and to correlate those observations with the insights you find in your data.

Watch the video 📹 👀

How does session replay work?

  • Session replay records participants' interactions within the study and nothing else. It does not include audio or face recording in the first release, but we’re working on that for the future.
  • There is no set-up or plug-in required; you control the use of session replay in the card sort settings.
  • For enterprise customers, the account admin will be required to turn this feature on for teams to access.
  • Session replay is currently only available for card sorts, but it’s coming soon to other study types.

Help article 🩼

Guide to using session replay

How do you activate session replay?

To activate session replay, create a card sort or open an existing card sort that has not yet been launched. Click on ‘set up,’ then ‘settings’; here, you will see the option to turn on session replay for your card sort. This feature is off by default, and you must turn it on for each card sort.

How do I view a session replay?

To view a session replay of a card sort, go to Results > Participants > Select a participant > Session replay. 

I can't see session replay in the card sort settings 👀

If this is the case, you will need to reach out to your organization's account admin to ask for this to be activated at an organizational level. The organization admin can easily enable or disable session replay by navigating to Settings > Features > Session Replay, where it can be toggled on or off.


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. They would seem almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple sketch using pen and paper to simulate an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

[Image: Different ways of designing paper prototypes, using OptimalSort as an example]

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any firm judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast: Pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it: Paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity: From the product teams participating in their design, but also from the users. They require users to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure: Paper prototypes and user-centered design go hand in hand. Introducing real people into your design process as early as possible can help verify whether you are on the right track, and generate feedback that may give you a good idea of whether your idea is likely to succeed.

Disadvantages 😬

  • They’re not as polished as interactive prototypes: If executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if our users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited: Digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
  • They require facilitation: With an interactive prototype you can assign your user tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator in communicating next steps and ensuring participants understand the task at hand.
  • Their results have to be interpreted carefully: Paper prototypes can’t emulate the final experience entirely. It is important to interpret their findings while keeping their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives: first, to benchmark the current experience on laptops and tablets and identify ways in which we can improve the current interface; and second, to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices. Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 per cent of our users utilize the app.

We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set.

The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with paper prototypes was determined by the specific setup of our testing sessions, or simply by their usefulness as a research technique, is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of exactly what a card sort required. There is therefore no guarantee that we would have achieved the same level of success testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure, we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.



Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we're working on a product with only 90 internal users, our changes will have flow-on effects for tens of thousands of our members at any given time (from a total of around 3.6 million members).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high-volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job a client’s CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools, reports, and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and also how to interact with the different tools and reports, all of which may have been created independently (i.e. they work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas quickly into systems. But expert users almost always end up regurgitating the system they're familiar with, as they've been trained by repeated use of systems to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn’t – most staff stick around for years) then you’ll be spending a lot of money.

…and the one I’ve added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code, or even the interaction for most of the reports, as this would be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggle six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern- and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms with arbitrary link names, edit the URL directly to get to hidden reports, and generally expend more effort on finding the answer than on comprehending the answer.

Groundwork

The first thing we did was sit with the CS team, watch them work, and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports quickly became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things green (use heaps), orange (use sometimes), and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links but that everyone used a core set. Initially focusing on the core set, we set about understanding the tasks under those links.

The complexity of the job soon became apparent: with a complex system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are complex and detailed enough for there to be more than one way to achieve the same end, and often it’s not possible to get a definitive answer, only to ‘build a picture’. There’s no one-to-one mapping of task to link. Links were also often arbitrarily named, ‘SQL Lookup’ being an example. The highly trained user base is dependent on muscle memory to find these links. This meant that when asked something like “What and where is the policing enquiry function?”, many couldn’t tell us what or where it was, but when they needed the report it contained, they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We renamed the links to better describe their associated actions, and of course, they weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.

We built and tested two IAs

[Image: pietree results from our tree testing]

After card sorting, we created two new IAs, then customized them for each of the three CS teams, giving us trees to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree tests were okay — around 61% — but 'Could try harder'. We saw very little overall difference between the success of the two structures, but definitely some differences in task success. And we also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some ‘wrong’ answers would give part of the picture required; in some cases so much so that I reclassified them as ‘correct’, as they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguous nature is hard to replicate in a tree test, which expects definitive yes-or-no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see screenshot below), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had presumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.  

What’s clear from analysis is that although it’s possible to provide definitive answers for a typical site’s IAs, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM has proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as ‘correct’ one of the two trees was a clear winner — it had gone from 61% to 69%. The other tree had only improved slightly, from 61% to 63%.

There were still elements of our winning structure that were performing sub-optimally, though. Generally, the problems were to do with labelling: in some cases, we had attempted to disambiguate those ‘SQL Lookup’-type labels but, in the process, confused the team. We were left with the dilemma of whether to go with the new labels and make the system initially harder to use for existing staff but easier to learn for new staff, or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make it better.

This highlighted the importance of carefully structuring questions in a tree test, particularly in light of the ‘start anywhere/go anywhere’ nature of a CRM. The diffuse but powerful nature of a CRM means that tree test answer options need careful consideration, in order to decide how close to a 100% correct answer you want to get.

Development work has begun so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages from Trade Me Admin and continuing to conduct user research, including first-click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.
