March 29, 2016

Which comes first: card sorting or tree testing?

“Dear Optimal, I want to test the structure of a university website (well, certain sections anyway). My gut instinct is that it's pretty 'broken'. Lots of sections feel like they're in the wrong place. I want to test my hypotheses before proposing a new structure. I'm definitely going to do some card sorting, and was planning a mixture of online and offline. My question is about when to bring in tree testing. Should I do this first to test the existing IA? Or is card sorting sufficient? I do intend to tree test my new proposed IA in order to validate it, but is it worth doing it upfront too?” — Matt

Dear Matt,

Ah, the classic chicken or the egg scenario: Which should come first, tree testing or card sorting?

It’s a question that many researchers ask themselves, but I’m here to help clear the air! You should always use both methods when changing your information architecture (IA) in order to capture the most information.

Tree testing and card sorting, when used together, can give you fantastic insight into the way your users interact with your site. First of all, I’ll run through some of the benefits of each testing method.


What is card sorting and why should I use it?

Card sorting is a great method to gauge the way in which your users organize the content on your site. It helps you figure out which things go together and which things don’t. There are two main types of card sorting: open and closed.

Closed card sorting involves providing participants with pre-defined categories into which they sort their cards. For example, you might be reorganizing the categories for your online women’s clothing store. Your cards would have all the names of your products (e.g., “socks”, “skirts” and “singlets”) and you also provide the categories (e.g., “outerwear”, “tops” and “bottoms”).

Open card sorting involves providing participants with cards and leaving them to organize the content in a way that makes sense to them. It’s the opposite of closed card sorting, in that participants dictate the categories themselves and also label them. This means you’d provide them with the cards only, and no categories.

Card sorting, whether open or closed, is very user focused. It involves a lot of thought, input, and evaluation from each participant, helping you to form the structure of your new IA.


What is tree testing and why should I use it?

Tree testing is a fantastic way to determine how your users are navigating your site and how they’re finding information. Your site is organized into a tree structure, sorted into topics and subtopics, and participants are given some tasks to perform. The results will show you how your participants performed those tasks, whether they were successful or unsuccessful, and which route they took to complete the tasks. This data is extremely useful for creating a new and improved IA.

Tree testing is an activity that requires participants to seek information, which is quite the contrast to card sorting. Card sorting is an activity that requires participants to sort and organize information. Each activity requires users to behave in different ways, so each method will give its own valuable results.
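To make that contrast concrete, here's a minimal sketch of what a tree test boils down to, written in Python. The university-style tree, the task, and the scoring rule are all hypothetical and purely illustrative; this isn't how any particular tree testing tool works under the hood.

    # A hypothetical content tree: topics and subtopics only, no visual design.
    tree = {
        "Home": {
            "Study": {"Undergraduate": {}, "Postgraduate": {}},
            "Research": {"Centres": {}, "Publications": {}},
            "About": {"Campuses": {}, "Contact us": {}},
        }
    }

    # A find-it task, with the path(s) counted as a correct destination.
    task = {
        "question": "Where would you look for a map of the city campus?",
        "correct_paths": [("Home", "About", "Campuses")],
    }

    def score(clicked_path, task):
        """Classify one participant's route through the tree."""
        return "success" if tuple(clicked_path) in task["correct_paths"] else "failure"

    print(score(["Home", "About", "Campuses"], task))       # success
    print(score(["Home", "Study", "Undergraduate"], task))  # failure

Aggregated over many participants, those per-task outcomes (plus the routes taken to reach them) are exactly the data a tree test report is built from.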


Comparing tree testing and card sorting: Key differences

Tree testing and card sorting are complementary methods within your UX toolkit, each unlocking unique insights about how users interact with your site structure. The difference is all about direction.

Card sorting is generative. It helps you understand how users naturally group and label your content: revealing mental models, surfacing intuitive categories, and informing your site’s IA from the ground up. Whether using open or closed methods, card sorting gives users the power to organize content in ways that make sense to them.

Tree testing is evaluative. Once you’ve designed or restructured your IA, tree testing puts it to the test. Participants are asked to complete find-it tasks using only your site structure – no visuals, no design – just your content hierarchy. This highlights whether users can successfully locate information and how efficiently they navigate your content tree.

In short:

  • Card sorting = "How would you organize this?"
  • Tree testing = "Can you find this?"


Using both methods together gives you clarity and confidence. One builds the structure. The other proves it works.


Which method should you choose?

The right method depends on where you are in your IA journey. If you're beginning from scratch or rethinking your structure, starting with card sorting is ideal. It will give you deep insight into how users group and label content.

If you already have an existing IA and want to validate its effectiveness, tree testing is typically the better fit. Tree testing shows you where users get lost and what’s working well. Think of card sorting as how users think your site should work, and tree testing as how they experience it in action.


Should you run a card or tree test first?

In this scenario, I’d recommend running a tree test first in order to find out how your existing IA currently performs. You said your gut instinct is telling you that your existing IA is pretty “broken”, but it’s good to have the data that proves this and shows you where your users get lost.

An initial tree test will give you a benchmark to work with – after all, how will you know your shiny, new IA is performing better if you don’t have any stats to compare it with? Your results from your first tree test will also show you which parts of your current IA are the biggest pain points and from there you can work on fixing them. Make sure you keep these tasks on hand – you’ll need them later!

Once your initial tree test is done, you can start your card sort, based on the results from your tree test. Here, I recommend conducting an open card sort so you can understand how your users organize the content in a way that makes sense to them. This will also show you the language your participants use to name categories, which will help you when you’re creating your new IA.

Finally, once your card sort is done, you can conduct another tree test on your new, proposed IA. By using the same (or very similar) tasks from your initial tree test, you’ll be able to attribute any changes in the results directly to your new and improved IA.

Once your test has concluded, you can compare this data with the results of the tree test on your original information architecture.


Why using both methods together is most effective

Card sorting and tree testing aren’t rivals; view them as allies. Used together, they give you end-to-end clarity. Card sorting informs your IA design based on user mental models. Tree testing evaluates that structure, confirming whether users can find what they need. This combination creates a feedback loop that removes guesswork and builds confidence. You'll move from assumptions to validation, and from confusion to clarity – all backed by real user behavior.


Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we're working on a product with only 90 internal users, our changes will have flow-on effects for tens of thousands of our members at any given time (from a total of around 3.6 million members).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job a client’s CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools and reports and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and also how to interact with the different tools and reports, all of which may have been created independently (i.e., work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas quickly into systems. But expert users almost always end up regurgitating the system they're familiar with, as they've been trained by repeated use of systems to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn’t – most staff stick around for years) then you’ll be spending a lot of money.

…and the one I’ve added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code or even the interaction for most of the reports, as this would be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggle six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern- and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms with arbitrary link names, have to type directly into the URL to reach hidden reports, and generally expend more effort on finding the answer than on comprehending the answer.

Groundwork

The first thing that we did was to sit with CS and watch them work and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things as green (use heaps), orange (use sometimes) and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links but that everyone used a core set. Initially focussing on the core set, we set about understanding the tasks under those links.

The complexity of the job soon became apparent – with a complex system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are complex and detailed enough for there to be more than one way to achieve the same end, and often it’s not possible to get a definitive answer, only to ‘build a picture’. There’s no one-to-one mapping of task to link. Links were also often arbitrarily named: ‘SQL Lookup’ being an example. The highly trained user base is dependent on muscle memory to find these links. This meant that when asked something like “What and where is the policing enquiry function?”, many couldn’t tell us what or where it was, but when they needed the report it contained they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We renamed the links to better describe their associated actions, and of course, they weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.

We built and tested two IAs

[Screenshot: pietree of tree testing results]

After card sorting, we created two new IAs, then customized them for each of the three CS teams, giving us the IAs to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree test were okay — around 61% success — but ‘could try harder’. We saw very little overall difference between the success of the two structures, but definitely some differences in task success. And we also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some ‘wrong’ answers would give part of the picture required. In some cases, so much so that I reclassified them as ‘correct’, as they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguous nature is hard to replicate in a tree test, which wants definitive yes or no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see screenshot below), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had presumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.  

What’s clear from analysis is that although it’s possible to provide definitive answers for a typical site’s IA, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM has proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as ‘correct’, one of the two trees was a clear winner — it had gone from 61% to 69%. The other tree had only improved slightly, from 61% to 63%.
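As a rough illustration of that reclassification step, here's a small hypothetical sketch in Python. The destination labels and counts are invented (they only loosely mirror our numbers); the point is that once an expert has made the judgment calls, promoting 'more right than wrong' destinations and recomputing the success rate is mechanical.

    # Invented tree test destinations chosen by 30 participants for one task.
    answers = ["Member Financials"] * 18 + ["Listing Report"] * 3 + ["SQL Lookup"] * 9

    correct = {"Member Financials"}         # the planned 'right' answer
    partially_correct = {"Listing Report"}  # judged 'more right than wrong'

    def success_rate(answers, accepted):
        """Share of participants whose destination is in the accepted set."""
        return sum(a in accepted for a in answers) / len(answers)

    print(f"before: {success_rate(answers, correct):.0%}")                     # before: 60%
    print(f"after: {success_rate(answers, correct | partially_correct):.0%}")  # after: 70%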

There were still elements of our winning structure that were performing sub-optimally, though. Generally, the problems were to do with labelling: in some cases we had attempted to disambiguate those ‘SQL lookup’-type labels but, in the process, confused the team. We were left with the dilemma of whether to go with the new labels and make the system initially harder to use for staff but easier to learn for new staff, or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make it better.

This highlighted the importance of carefully structuring questions in a tree test, particularly in light of the ‘start anywhere/go anywhere’ nature of a CRM. The diffuse but powerful nature of a CRM means you need to consider tree test answer options carefully, in order to decide how close to a ‘100% correct’ answer you want to get.

Development work has begun so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages from Trade Me Admin, and continuing to conduct user research, including first click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.


Online card sorting: The comprehensive guide

When it comes to designing and testing in the world of information architecture, it’s hard to beat card sorting. As a usability testing method, card sorting is easy to set up, simple to recruit for and can supply you with a range of useful insights. But there’s a long-standing debate in the world of card sorting, and that’s whether it’s better to run card sorts in person (moderated) or remotely over the internet (unmoderated).

This article should give you some insight into the world of online card sorting. We've included an analysis of the benefits (and the downsides) as well as why people use this approach. Let's take a look!

How an online card sort works

Running a card sort remotely has quickly become a popular option, largely because of how time-intensive in-person card sorting is. Instead of needing to bring your participants in for dedicated card sorting sessions, you can simply set up your card sort using an online tool (like our very own OptimalSort) and then wait for the results to roll in.

So what’s involved in a typical online card sort? At a very high level, here’s what’s required (a rough sketch of what such a study could look like as data follows the list). We’re going to assume you’re already set up with an online card sorting tool at this point.

  1. Define the cards: Depending on what you’re testing, add the items (cards) to your study. If you were testing the navigation menu of a hotel website, your cards might be things like “Home”, “Book a room”, “Our facilities” and “Contact us”.
  2. Work out whether to run a closed or open sort: Determine whether you’ll set the groups for participants to sort cards into (closed) or leave it up to them (open). You may also opt for a mix, where you create some categories but leave the option open for participants to create their own.
  3. Recruit your participants: Whether using a participant recruitment service or by recruiting through your own channels, send out invites to your online card sort.
  4. Wait for the data: Once you’ve sent out your invites, all that’s left to do is wait for the data to come in and then analyze the results.
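If it helps to see those steps as a single artifact, here's a purely illustrative sketch of a card sort study definition expressed as data, in Python. The field names are hypothetical, not OptimalSort's actual format.

    # A hypothetical study definition for the hotel website example above.
    study = {
        "title": "Hotel website navigation",
        "sort_type": "hybrid",  # "open", "closed", or "hybrid"
        "cards": ["Home", "Book a room", "Our facilities", "Contact us"],
        # Pre-defined categories matter for closed and hybrid sorts; in a
        # hybrid sort, participants can also create their own.
        "categories": ["Plan your stay", "About the hotel"],
        "allow_participant_categories": True,
    }

    def validate(study):
        """Basic sanity checks before sending out invites."""
        assert study["cards"], "a card sort needs at least one card"
        if study["sort_type"] == "closed":
            assert study["categories"], "a closed sort needs pre-defined categories"

    validate(study)
    print(f"{len(study['cards'])} cards ready for a {study['sort_type']} sort")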

That’s online card sorting in a nutshell – not entirely different from running a card sort in person. If you’re interested in learning about how to interpret your card sorting results, we’ve put together this article on open and hybrid card sorts and this one on closed card sorts.

Why is online card sorting so popular?

Online card sorting has a few distinct advantages over in-person card sorting that help to make it a popular option among information architects and user researchers. There are downsides too (as there are with any remote usability testing option), but we’ll get to those in a moment.

Where remote (unmoderated) card sorting excels:

  • Time savings: Online card sorting is essentially ‘set and forget’, meaning you can set up the study, send out invites to your participants and then sit back and wait for the results to come in. In-person card sorting requires you to moderate each session and collate the data at the end.
  • Easier for participants: It’s not often that researchers are on the other side of the table, but it’s important to consider the participant’s viewpoint. It’s much easier for someone to spend 15 minutes completing your online card sort in their own time instead of trekking across town to your office for an exercise that could take well over an hour.
  • Cheaper: In a similar vein, online card sorting is much cheaper than in-person testing. While it’s true that you may still need to recruit participants, you won’t need to reimburse people for travel expenses.
  • Analytics: Last but certainly not least, online card sorting tools (like OptimalSort) can take much of the analytical burden off you by transforming your data into actionable insights. Other tools will differ, but OptimalSort can generate a similarity matrix, dendrograms and a participant-centric analysis using your study data (a rough sketch of the idea behind these views follows below).
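To demystify that last point a little, here's a rough sketch of how a similarity matrix and dendrogram can be derived from raw open card sort data, using standard SciPy clustering. It's a generic illustration of the idea, not OptimalSort's actual pipeline, and the participant data is invented.

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.spatial.distance import squareform

    # Invented results: each participant's sort maps a card to a group label.
    sorts = [
        {"Socks": "basics", "Skirts": "bottoms", "Singlets": "tops"},
        {"Socks": "extras", "Skirts": "bottoms", "Singlets": "basics"},
        {"Socks": "basics", "Skirts": "bottoms", "Singlets": "basics"},
    ]

    cards = sorted(sorts[0])
    n = len(cards)

    # Similarity matrix: share of participants who grouped each pair together.
    sim = np.zeros((n, n))
    for sort in sorts:
        for i in range(n):
            for j in range(n):
                if sort[cards[i]] == sort[cards[j]]:
                    sim[i, j] += 1
    sim /= len(sorts)

    # A dendrogram clusters cards by (1 - similarity) distance.
    links = linkage(squareform(1.0 - sim, checks=False), method="average")
    leaf_order = dendrogram(links, labels=cards, no_plot=True)["ivl"]
    print(leaf_order)  # cards ordered so that similar cards sit together

With only three cards this is trivial, but the same computation scales to a full study, and it is what similarity matrix and dendrogram views summarize.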

Where in-person (moderated) card sorting excels:

  • Qualitative insights: For all intents and purposes, online card sorting is the most effective way to run a card sort. It’s cheaper, faster and easier for you. But there’s one area where in-person card sorting excels, and that’s qualitative feedback. When you’re sitting directly across the table from your participant, you’re far more likely to learn about the why as well as the what. You can ask participants directly why they grouped certain cards together.

Online card sorting: Participant numbers

So that’s online card sorting in a nutshell, as well as some of the reasons why you should actually use this method. But what about participant numbers? Well, there’s no one right answer, but the general rule is that you need more people than you’d typically bring in for a usability test.

This all comes down to the fact that card sorting is what’s known as a generative method, whereas usability testing is an evaluation method. Here’s a little breakdown of what we mean by these terms:

Generative method: There’s no design, and you need to get a sense of how people think about the problem you’re trying to solve. For example, how people would arrange the items that need to go into your website’s navigation. As Nielsen Norman Group explains: “There is great variability in different people's mental models and in the vocabulary they use to describe the same concepts. We must collect data from a fair number of users before we can achieve a stable picture of the users' preferred structure and determine how to accommodate differences among users”.

Evaluation method: There’s already a design, and you basically need to work out whether it’s a good fit for your users. Any major problems are likely to crop up after testing just 5 or so users. For example, you have a wireframe of your website and need to identify any major usability issues.

Basically, because you’ll typically be using card sorting to generate a new design or structure from nothing, you need to sample a larger number of people. If you were testing an existing website structure, you could get by with a smaller group.

Where to from here?

Following on from our discussion of generative versus evaluation methods, you’ve really got a choice of two paths from here if you’re in the midst of a project. For those developing new structures, the best course of action is likely to be a card sort. However, if you’ve got an existing structure that you need to test in order to identify usability problems and possible areas of improvement, you’re likely best to run a tree test. We’ve got some useful information on getting started with a tree test right here on the blog.


UX workshop recap: experts from Meta, Netflix & Google share insights to elevate your career

Recently, Optimal Workshop partnered with Eniola Abioye, Lead UX Researcher at Meta and UXR Career Coach at UX Outloud to host a career masterclass featuring practical advice and guidance on how to: revamp and build a portfolio, emphasize the impact of your projects and showcase valuable collaborations. It also included panel discussions with experts from a variety of roles (UX, product management, engineering, career coaching and content design) talking about their journeys to becoming UX leaders. 

Keep reading to get key takeaways from the discussion on:

  • How to show the impact of your UX work
  • Common blockers in UX work
  • How to collaborate with cross-functional UX stakeholders 
  • How to build a resume and portfolio that uses industry language to present your experience

How to show the impact of your UX 💥

At a time when businesses are reducing costs to focus on profitability, proving the value of your work is more important than ever. Unfortunately, measuring the impact of UX isn’t as straightforward as tracking sales or marketing metrics. With this in mind, Eniola asked the panelists: how do you show the impact of UX in your work? Providing insights is simply not enough. “As a product manager, what I really care about is insights plus recommendations, because recommendations make my life easier,” said Kwame Odame.

Auset Parris added her perspective on this topic as a Growth Content Designer, “the biggest thing for me to be impactful in my space [Content Design] is to consistently document the changes that I’ve made and share them with the team along with recommendations.” Auset also offered her perspective regarding recommendations, “recommendations are not always going to lead to the actual product executions, but recommendations are meant to guide us.” When it comes to deciding which recommendation to proceed with (if any) it's important to consider whether or not they are aligned with the overarching goal. 

Blockers in UX work 🚧

As UXR becomes more democratized in many businesses and the number of stakeholders increases, the ability to gain cross-functional buy-in for the role and outcomes of UXR is a key way to help keep research a priority. 

In his past experience, Kwame has realized that the role of a user experience researcher is just as important as that of a product manager, data scientist, engineer, or designer. However, one of the biggest blockers for him as a product manager is how the role of a UX researcher is often overlooked. “Just because I’m the product manager doesn’t mean that I’m owning every aspect of the product. I don’t have a magic wand right? We all work as a team.” Furthermore, Kwame noted that being a user researcher “is an incredibly hard role and a very important one, and I think we need to invest more in the UX space.”

Auset also shared her perspective on the topic: “I wouldn’t say this is a blocker, but I do think this is a challenging piece of working in a team – there are so many stakeholders.” Although it would be ideal for each of the different departments to work seamlessly together at all times, that’s not always the case. Auset spoke about a time when the data scientists and user researchers were in disagreement. Her role as a Growth Content Designer is to create content that enhances the user experience. “But if I’m seeing two different experiences, how do I move forward? That’s when I have to ask everyone – come on, let’s dig deeper. Are we looking at the right things?” If team members are seeing different results, or holding different opinions, then maybe they are not asking the right questions and it’s time to dig deeper.

How to collaborate with cross-functional UX stakeholders 🫱🏽‍🫲🏻

The number and type of roles that now engage with research are increasing. As they do, the ability to collaborate and manage stakeholders in research projects has become essential. 

Kwame discussed how he sets up a meeting for the team to discuss their goals for the next 6 months. Then, he meets with the team on a weekly basis to ensure alignment. The main point of the meeting is to ensure everyone is leaving with their questions answered and blockers addressed. It's important to ensure everyone is moving in the right direction. 

Auset added that she thinks documentation is key to ensuring alignment. “One thing that has been helpful for me is having the documentation in the form of a product brief or content brief.” The brief can include the overarching goal, strategy, and indicate what each member of the team is working on. Team members can always look back at this document to ensure they are on the right track. 

Career advice: documenting the value you bring 💼

One of the participants asked the panel, “how do you secure the stability of your UX career?” 

Eniola took this opportunity to share some invaluable advice as a career coach, “I think the biggest thing that comes to mind is value proposition. It's important to be very clear about the value and impact you bring to the team. It used to be enough to just be really, really good at research and just do research and provide recommendations. Now that’s not enough. Now you have to take your teams through the process, integrate your recommendations into the product, and focus on driving impact.” 

Companies aren’t looking to hire someone who can perform a laundry list of tasks; they’re looking for UX professionals who can drive results. Think about the metrics you can track to help showcase the impact of your work. For example, if you’re a UX designer: how much less time did the user spend on the task with your new design? Did the abandonment or error rate decrease significantly as a result of your work? How much did the overall customer satisfaction score rise after you implemented your project? Before starting your project, decide on several metrics to track (make sure they align with your organization’s goals), and reflect on these after each project.

Fatimah Richmond offered another piece of golden career advice. She encourages UX researchers to create an ongoing impact tracker. She’ll create a document where she lists the company’s objectives, the projects she worked on, and the specific impact she made on those objectives. It’s much easier to keep track of the wins as they happen, and jot a few notes about the impact you’ve made with each project, than to scramble to recall all the impact you’ve made when writing your resume. It’s also important to note the impact your work has made on the different departments – product, marketing, sales, etc.

She also advises UX researchers to frequently share their research insights with their colleagues as the project progresses. Instead of waiting until the very end of the project and providing a “perfectly polished” deck, be transparent with the team about what you are working on and the impact it’s having throughout the duration of the project.

Another participant asked: what if you need help determining the value you bring? Auset recommends asking for actionable feedback from coworkers. These people work with you every single day, so they know the contributions you’re making to the team.

Documenting the tangible impact you make as a UX professional is crucial – not only will it help create greater stability for your career, but it will also help organizations recognize the importance of UX research. As Kwame discussed in the “blockers” section, one of the biggest challenges he faces as a product manager is the perception of the UX role as less important than the more traditional product manager, engineer, and designer roles.

About Eniola Abioye

Eniola helps UX researchers improve their research practice. Whether you’re seasoned and looking to level up or a new researcher looking to get your bearings in UX, Eniola can help you focus and apply your skillset. She is a UX Researcher and Founder of UX Outloud. As a career coach, she guides her clients through short and long term SMART goals and then works with them to build a strategic plan of attack. She is innately curious, a self-starter, adaptable, and communicative with a knack for storytelling.

Learn more about UX Outloud.

Connect with Eniola on Linkedin.

About the panelists 🧑🏽‍🤝‍🧑🏽

The panel was composed of talented professionals from a variety of fields including UX research, content strategy, product management & engineering, and career coaching. Their diverse perspectives led to an insightful and informative panel session. Keep reading to get to know each of the amazing panelists:

Growth Content Designer: Auset Parris is a growth content designer at Meta. She has spent 7 years navigating the ever-evolving landscape of content strategy. She is passionate about the role of user research in shaping content strategies. Furthermore, Auset believes that understanding user behavior and preferences is fundamental to creating content that not only meets but exceeds user expectations. 

Senior UX Researcher: Jasmine Williams, Ph.D. is a senior researcher with over a decade of experience conducting youth-focused research. She has deep expertise in qualitative methods, child and adolescent development, and social and emotional well-being. Jasmine is currently a user experience researcher at Meta and her work focuses on teen safety and wellbeing. 

Product Manager: Kwame Odame has over 7 years of high-tech experience working in product management and software engineering. At Meta, Kwame is currently responsible for building the product management direction for Fan Engagement on Facebook. Kwame has also helped build Mastercard’s SaaS authentication platform, enabling cardholders to quickly confirm their identity when a suspicious transaction occurred, leveraging biometric technology. 

UX Researcher (UXR): Fatimah Richmond is a well-rounded UX researcher with over 15 years of experience, having influenced enterprise products across leading tech giants like Google, SAP, Linkedin, and Microsoft. Fatimah has led strategy for research, programs and operations that have significantly impacted the UXR landscape, from clinician engagement strategist to reshaping Linkedin Recruiter and Jobs. As a forward thinker, she’s here to challenge our assumptions and the status quo on how research gets planned, communicated, and measured.

Career Coach: An Xia spent the first decade of her professional life in consulting and Big Tech data science (Netflix, Meta). As a career coach, An has supported clients in gaining clarity on their career goals, navigating challenges of career growth, and making successful transitions. As a somatic coach, An has helped clients tap into the wisdom of their soma to reconnect with what truly matters to them. 

UX Strategist: Natalie Gauvin is an experienced professional with a demonstrated history of purpose-driven work in agile software development industries and higher education. She is skilled in various research methodologies, and is a Ph.D. candidate in Learning Design and Technology at the University of Hawaii at Manoa, focused on empathy in user experience through personas.

Level up your UXR capabilities (for free!) with the Optimal Academy 📚

Here at Optimal we really care about helping UX researchers level up their careers. This is why we’ve developed the Optimal Academy: to help you master your Optimal Workshop skills and learn more about user research and information architecture.

Check out some of our free courses here: https://academy.optimalworkshop.com/
