March 29, 2016

Which comes first: card sorting or tree testing?

“Dear Optimal, I want to test the structure of a university website (well certain sections anyway). My gut instinct is that it's pretty 'broken'. Lots of sections feel like they're in the wrong place. I want to test my hypotheses before proposing a new structure. I'm definitely going to do some card sorting, and was planning a mixture of online and offline. My question is about when to bring in tree testing. Should I do this first to test the existing IA? Or is card sorting sufficient? I do intend to tree test my new proposed IA in order to validate it, but is it worth doing it upfront too?" — Matt

Dear Matt,

Ah, the classic chicken or the egg scenario: Which should come first, tree testing or card sorting?

It’s a question many researchers ask themselves, but I’m here to help clear the air! You should always use both methods when changing up your information architecture (IA) in order to capture the most information.

Tree testing and card sorting, when used together, can give you fantastic insight into the way your users interact with your site. First of all, I’ll run through some of the benefits of each testing method.


What is card sorting and why should I use it?

Card sorting is a great method to gauge the way in which your users organize the content on your site. It helps you figure out which things go together and which things don’t. There are two main types of card sorting: open and closed.

Closed card sorting involves providing participants with pre-defined categories into which they sort their cards. For example, you might be reorganizing the categories for your online clothing store for women. Your cards would have all the names of your products (e.g., “socks”, “skirts” and “singlets”) and you also provide the categories (e.g., “outerwear”, “tops” and “bottoms”).

Open card sorting involves providing participants with cards and leaving them to organize the content in a way that makes sense to them. It’s the opposite of closed card sorting, in that participants dictate the categories themselves and also label them. This means you’d provide them with the cards only, and no categories.

Card sorting, whether open or closed, is very user focused. It involves a lot of thought, input, and evaluation from each participant, helping you to form the structure of your new IA.
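To make the output of a card sort concrete, here is a minimal sketch of one common analysis: counting how often participants placed each pair of cards in the same group. The data layout and function name are hypothetical, not OptimalSort’s actual export format.

```python
from collections import defaultdict
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards landed in the same group.

    `sorts` holds one result per participant: a dict mapping each
    category label to the list of cards placed in it.
    """
    counts = defaultdict(int)
    for result in sorts:
        for cards in result.values():
            # Sort the pair so (a, b) and (b, a) count as the same pair
            for a, b in combinations(sorted(cards), 2):
                counts[(a, b)] += 1
    return dict(counts)

# Hypothetical results from two participants of the clothing-store sort
sorts = [
    {"Tops": ["singlets", "shirts"], "Bottoms": ["skirts", "socks"]},
    {"Tops": ["singlets", "shirts"], "Bottoms": ["skirts"], "Footwear": ["socks"]},
]
pairs = co_occurrence(sorts)
# ("shirts", "singlets") was grouped together by both participants
```

High pair counts suggest cards that users see as belonging together, which is exactly the signal you want when drafting categories.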


What is tree testing and why should I use it?

Tree testing is a fantastic way to determine how your users are navigating your site and how they’re finding information. Your site is organized into a tree structure, sorted into topics and subtopics, and participants are provided with some tasks that they need to perform. The results will show you how your participants performed those tasks, if they were successful or unsuccessful, and which route they took to complete the tasks. This data is extremely useful for creating a new and improved IA.

Tree testing is an activity that requires participants to seek information, which is quite the contrast to card sorting. Card sorting is an activity that requires participants to sort and organize information. Each activity requires users to behave in different ways, so each method will give its own valuable results.
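To illustrate the kind of results a tree test produces, here is a rough sketch of how success and directness rates could be computed from participants’ click paths. The task, node names and scoring rules are illustrative assumptions, not any tool’s actual implementation.

```python
def summarize_task(correct_node, attempts):
    """Score one tree-test task.

    `attempts` holds one visited-node path per participant; the last
    node in each path is where they said the answer lives. A path is a
    direct success if it reaches the correct node without revisiting
    any node (a rough stand-in for backtracking).
    """
    n = len(attempts)
    successes = [path for path in attempts if path[-1] == correct_node]
    direct = [path for path in successes if len(path) == len(set(path))]
    return {"success_rate": len(successes) / n, "directness": len(direct) / n}

# Hypothetical task: find "Exams" in a university site tree
attempts = [
    ["Home", "Study", "Exams"],                   # direct success
    ["Home", "About", "Home", "Study", "Exams"],  # success after backtracking
    ["Home", "About", "Contact"],                 # failure
]
summary = summarize_task("Exams", attempts)
```

A low directness score relative to the success rate hints that users eventually find the content but wander first, which is a structure problem worth fixing.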


Comparing tree testing and card sorting: Key differences

Tree testing and card sorting are complementary methods within your UX toolkit, each unlocking unique insights about how users interact with your site structure. The difference is all about direction.

Card sorting is generative. It helps you understand how users naturally group and label your content; revealing mental models, surfacing intuitive categories, and informing your site’s information architecture (IA) from the ground up. Whether using open or closed methods, card sorting gives users the power to organize content in ways that make sense to them.

Tree testing is evaluative. Once you’ve designed or restructured your IA, tree testing puts it to the test. Participants are asked to complete find-it tasks using only your site structure – no visuals, no design – just your content hierarchy. This highlights whether users can successfully locate information and how efficiently they navigate your content tree.

In short:

  • Card sorting = "How would you organize this?"
  • Tree testing = "Can you find this?"


Using both methods together gives you clarity and confidence. One builds the structure. The other proves it works.


Which method should you choose?

The right method depends on where you are in your IA journey. If you're beginning from scratch or rethinking your structure, starting with card sorting is ideal. It will give you deep insight into how users group and label content.

If you already have an existing IA and want to validate its effectiveness, tree testing is typically the better fit. Tree testing shows you where users get lost and what’s working well. Think of card sorting as how users think your site should work, and tree testing as how they experience it in action.


Should you run a card or tree test first?

In this scenario, I’d recommend running a tree test first in order to find out how your existing IA currently performs. You said your gut instinct is telling you that your existing IA is pretty “broken”, but it’s good to have the data that proves this and shows you where your users get lost.

An initial tree test will give you a benchmark to work with – after all, how will you know your shiny, new IA is performing better if you don’t have any stats to compare it with? Your results from your first tree test will also show you which parts of your current IA are the biggest pain points and from there you can work on fixing them. Make sure you keep these tasks on hand – you’ll need them later!
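If you want to check whether a change between the benchmark and the follow-up tree test is more than noise, one option is a simple two-proportion z-test on a task’s success counts. This is only a sketch; the numbers are made up for illustration.

```python
import math

def two_proportion_z(success_old, n_old, success_new, n_new):
    """z-score for the difference between two success proportions,
    e.g. the same task run against the old and the new tree."""
    p_old, p_new = success_old / n_old, success_new / n_new
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_old + success_new) / (n_old + n_new)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
    return (p_new - p_old) / se

# Hypothetical: a task succeeded 22/50 times on the old IA, 41/50 on the new
z = two_proportion_z(22, 50, 41, 50)
# |z| greater than roughly 1.96 suggests the change is unlikely to be chance
```

With realistic sample sizes this is a blunt instrument, but it is far better than eyeballing two percentages and declaring victory.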

Once your initial tree test is done, you can start your card sort, based on the results from your tree test. Here, I recommend conducting an open card sort so you can understand how your users organize the content in a way that makes sense to them. This will also show you the language your participants use to name categories, which will help you when you’re creating your new IA.

Finally, once your card sort is done you can conduct another tree test on your new, proposed IA. By using the same (or very similar) tasks from your initial tree test, you will be able to see that any changes in the results can be directly attributed to your new and improved IA.

Once your test has concluded, you can compare this data with the results from the tree test of your original information architecture.


Why using both methods together is most effective

Card sorting and tree testing aren’t rivals; view them as allies. Used together, they give you end-to-end clarity. Card sorting informs your IA design based on user mental models. Tree testing evaluates that structure, confirming whether users can find what they need. This combination creates a feedback loop that removes guesswork and builds confidence. You'll move from assumptions to validation, and from confusion to clarity – all backed by real user behavior.

Author: Optimal Workshop

Related articles


A quick analysis of feedback collected with OptimalSort

Card sorting is an invaluable tool for understanding how people organize information in their minds, making websites more intuitive and content easier to navigate. It’s a useful method outside of information architecture and UX research, too. It can be a useful prioritization technique, or used in a more traditional sense. For example, it’s handy in psychology, sociology or anthropology to inform research and deepen our understanding of how people conceptualize information.

The introduction of remote card sorting has provided many advantages, making it easier than ever to conduct your own research. Tools such as our very own OptimalSort allow you to quickly and easily gather findings from a large number of participants from all around the world. Not having to organize moderated, face-to-face sessions gives researchers more time to focus on their work, and easier access to larger data sets.

One of the main disadvantages of remote card sorting is that it eliminates the opportunity to dive deeper into the choices made by your participants. Human conversation is a great thing, and when conducting a remote card sort with users who could potentially be on the other side of the world, opportunities for our participants to provide direct feedback and voice their opinions are severely limited. Your survey design may not be perfect.

The labels you provide your participants may be incorrect, confusing or redundant. Your users may have their own ideas of how you could improve your products or services beyond what you are trying to capture in your card sort. People may be more willing to provide their feedback than you realize, and limiting their insights to a simple card sort may not capture all that they have to offer. So, how can you run an unmoderated, remote card sort, but do your best to mitigate this potential loss of insight?

A quick look into the data

In an effort to evaluate the usefulness of the existing “Leave a comment” feature in OptimalSort, I recently asked our development team to pull out some data. You might be asking, “There’s a comment box in OptimalSort?” If you’ve never noticed this feature, I can’t exactly blame you. It’s relatively hidden away as an unassuming hyperlink in the top right corner of your card sort.

[Screenshots: the “Leave a comment” link as it appears in OptimalSort]

Comments left by your participants can be viewed in the “Participants” tab in your results section, and are indicated by a grey speech bubble.

[Screenshot: the grey speech bubble indicating a comment in the results view]

The history of the button is unknown even to long-time Optimal Workshop team members. The purpose of the button is also unspecified. “Why would anyone leave a comment while participating in a card sort?”, I found myself wondering. As it turns out, 133,303 comments have been left by participants. This means 133,303 insights, opinions, critiques or frustrations. Additionally, these numbers only represent the participants who noticed the feature in the first place. Considering the current button can easily be missed when focusing on the task at hand, I can’t help but wonder how this number might change if we drew more attention to the feature.

Breaking down the comments

To avoid having to manually analyze and code 133,303 open text fields, I decided to only spend enough time to decipher any obvious patterns. Luckily for me, this didn’t take very long. After looking at only a hundred or so random entries, four distinct types of comments started to emerge.

  1. This card/group doesn’t make sense. Comments related to cards and groups dominate. This is a great thing, as it means that the majority of comments made by participants relate specifically to the task they are completing. For closed and hybrid sorts, comments frequently relate to the predefined categories available, and since the participants most likely to leave a comment are those experiencing issues, the majority of the feedback relates to issues with category names themselves. Many comments are related to card labels and offer suggestions for improving naming conventions, while many others draw attention to some terms being confusing, unclear or jargony. Comments on task length can also be found, along with reasons for why certain cards may be left ungrouped, e.g., “I’ve left behind items I think the site could do without”.
  2. Your organization is awesome for doing this/you’re doing it all wrong. A substantial number of participants used the comment box as an opportunity to voice their general feedback on the organization or company running the study. Some of the more positive comments include an appreciation for seeing private companies or public sector organizations conducting research with real users in an effort to improve their services. It’s also nice to see many comments related to general enjoyment in completing the task. On the other hand, some participants used the comment box as an opportunity to comment on what other areas of their services should be improved, or what features they would like to see implemented that may otherwise be missed in a card sort, e.g., “Increased, accurate search functionality is imperative in a new system”.
  3. This isn’t working for me. Taking a closer look at some of the comments reveals some useful feedback for us at Optimal Workshop, too. Some of the comments relate specifically to UI and usability issues. The majority of these issues are things we are already working to improve or have dealt with. However, for researchers, comments that relate to challenges in using the tool or completing the survey itself may help explain some instances of data variability.
  4. #YOLO, hello, ;) And of course, the unrelated. As you may expect, when you provide people with the opportunity to leave a comment online, you can expect just about anything in return.
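If you wanted to triage a large pile of comments into rough buckets like these before reading them closely, a crude keyword pass can help. This is only a sketch with made-up keyword lists; it is no substitute for actually reading the comments.

```python
def code_comment(comment):
    """Assign a comment to one of the rough buckets described above.
    The keyword lists are invented for illustration, not a real scheme."""
    text = comment.lower()
    buckets = [
        ("cards/groups", ["card", "group", "category", "label", "sort"]),
        ("organization feedback", ["website", "service", "company", "improv"]),
        ("tool issues", ["button", "screen", "drag", "broken", "bug"]),
    ]
    # First bucket whose keywords appear wins; anything else is "unrelated"
    for name, keywords in buckets:
        if any(word in text for word in keywords):
            return name
    return "unrelated"

comments = [
    "Some of these card labels are confusing",
    "Great to see you improving the website!",
    "The drag and drop kept freezing",
    "#YOLO",
]
coded = [code_comment(c) for c in comments]
```

A pass like this gives you rough bucket sizes so you know where to spend your close-reading time, which is essentially what the manual sampling above did.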

How to make the most of your user insights in OptimalSort

If you’re running a card sort, chances are you already place a lot of value in the voice of your users. To capture any additional insights, make sure your participants are aware of the opportunity to provide them. Here are two ways you can give your participants a space to voice their feedback:

Adding more context to the “Leave a comment” feature

One way to encourage your participants to leave comments is to promote the use of this feature in your card sort instructions. OptimalSort gives you flexibility to customize your instructions every time you run a survey. By making your participants aware of the feature, or offering ideas around what kinds of comments you may be looking for, you not only make them more likely to use the feature, but also open yourself up to a whole range of additional feedback. An advantage of using this feature is that comments can be added in real time during a card sort, so any remarks can be made as soon as they arise.

Making use of post-survey questions

Adding targeted post-survey questions is the best way to ensure your participants are able to voice any thoughts or concerns that emerged during the activity. Here, you can ask specific questions that touch upon different aspects of your card sort, such as length, labels, categories or any other comments your participants may have. This can not only help you generate useful insights but also inform the design of your surveys in the future.

Make your remote card sorts more human

Card sorts are exploratory by nature. Avoid forcing your participants into choices that may not accurately reflect their thinking by giving them the space to voice their opinions. Providing opportunities to capture feedback opens up the conversation between you and your users, and can lead to surprising insights from unexpected places.

Further reading


UX workshop recap: experts from Meta, Netflix & Google share insights to elevate your career

Recently, Optimal Workshop partnered with Eniola Abioye, Lead UX Researcher at Meta and UXR Career Coach at UX Outloud to host a career masterclass featuring practical advice and guidance on how to: revamp and build a portfolio, emphasize the impact of your projects and showcase valuable collaborations. It also included panel discussions with experts from a variety of roles (UX, product management, engineering, career coaching and content design) talking about their journeys to becoming UX leaders. 

Keep reading to get key takeaways from the discussion on:

  • How to show the impact of your UX work
  • Common blockers in UX work
  • How to collaborate with cross-functional UX stakeholders 
  • How to build a resume and portfolio that uses industry language to present your experience

How to show the impact of your UX 💥

At a time when businesses are reducing costs to focus on profitability, proving the value of your work is more important than ever. Unfortunately, measuring the impact of UX isn’t as straightforward as tracking sales or marketing metrics. With this in mind, Eniola asked the panelists - how do you show the impact of UX in your work? Providing insights is simply not enough. “As a product manager, what I really care about is insights plus recommendations, because recommendations make my life easier,” said Kwame Odame.

Auset Parris added her perspective on this topic as a Growth Content Designer, “the biggest thing for me to be impactful in my space [Content Design] is to consistently document the changes that I’ve made and share them with the team along with recommendations.” Auset also offered her perspective regarding recommendations, “recommendations are not always going to lead to the actual product executions, but recommendations are meant to guide us.” When it comes to deciding which recommendation to proceed with (if any) it's important to consider whether or not they are aligned with the overarching goal. 

Blockers in UX work 🚧

As UXR becomes more democratized in many businesses and the number of stakeholders increases, the ability to gain cross-functional buy-in for the role and outcomes of UXR is a key way to help keep research a priority. 

In his past experience, Kwame has realized that the role of a user experience researcher is just as important as that of a product manager, data scientist, engineer, or designer. However, one of the biggest blockers for him as a product manager is how the role of a UX researcher is often overlooked. “Just because I’m the product manager doesn’t mean that I’m owning every aspect of the product. I don’t have a magic wand right? We all work as a team.” Furthermore, Kwame noted that user research is an incredibly hard and very important role, and one the industry needs to invest in more.

Auset also shared her perspective on the topic, “I wouldn’t say this is a blocker, but I do think this is a challenging piece of working in a team - there are so many stakeholders.” Although it would be ideal for each of the different departments to work seamlessly together at all times, that’s not always the case. Auset spoke about a time when the data scientists and user researchers were in disagreement. Her role as a Growth Content Designer is to create content that enhances the user experience. “But if I’m seeing two different experiences, how do I move forward? That’s when I have to ask everyone - come on let’s dig deeper. Are we looking at the right things?” If team members are seeing different results, or have different opinions, then maybe they are not asking the right questions and it's time to dig deeper.

How to collaborate with cross-functional UX stakeholders 🫱🏽🫲🏻

The number and type of roles that now engage with research are increasing. As they do, the ability to collaborate and manage stakeholders in research projects has become essential. 

Kwame discussed how he sets up a meeting for the team to discuss their goals for the next 6 months. Then, he meets with the team on a weekly basis to ensure alignment. The main point of the meeting is to ensure everyone is leaving with their questions answered and blockers addressed. It's important to ensure everyone is moving in the right direction. 

Auset added that she thinks documentation is key to ensuring alignment. “One thing that has been helpful for me is having the documentation in the form of a product brief or content brief.” The brief can include the overarching goal, strategy, and indicate what each member of the team is working on. Team members can always look back at this document to ensure they are on the right track. 

Career advice: documenting the value you bring 💼

One of the participants asked the panel, “how do you secure the stability of your UX career?” 

Eniola took this opportunity to share some invaluable advice as a career coach, “I think the biggest thing that comes to mind is value proposition. It's important to be very clear about the value and impact you bring to the team. It used to be enough to just be really, really good at research and just do research and provide recommendations. Now that’s not enough. Now you have to take your teams through the process, integrate your recommendations into the product, and focus on driving impact.” 

Companies aren’t looking to hire someone who can perform a laundry list of tasks, they’re looking for UX professionals who can drive results. Think about the metrics you can track, to help showcase the impact of your work. For example, if you’re a UX designer - how much less time did the user spend on the task with your new design? Did the abandonment or error rate decrease significantly as a result of your work? How much did the overall customer satisfaction score rise, after you implemented your project? Before starting your project, decide on several metrics to track (make sure they align with your organization’s goals), and reflect on these after each project. 

Fatimah Richmond offered another piece of golden career advice. She encourages UX researchers to create an ongoing impact tracker. She’ll create a document where she lists the company's objectives, the projects she worked on, and the specific impact she made on the company's objectives. It's much easier to keep track of the wins as they happen, and jot a few notes about the impact you’ve made with each project, than scrambling to think of all the impact you’ve made when writing your resume. It's also important to note the impact your work has made on the different departments - product, marketing, sales, etc.

She also advises UX researchers to frequently share their research insights with their colleagues as the project progresses. Instead of waiting until the very end of the project and providing a “perfectly polished” deck, be transparent with the team about what you are working on and the impact it's having throughout the duration of the project.

Another participant asked - what if you need help determining the value you bring? Auset recommends asking for actionable feedback from coworkers. These people work with you every single day, so they know the contributions you are making to the team.

Documenting the tangible impact you make as a UX professional is crucial - not only will it help create greater stability for your career, but it will also help organizations recognize the importance of UX research. As Kwame discussed in the “blockers” section, one of the biggest challenges he faces as a product manager is the perception of the UX role as less important than the more traditional product manager, engineer, and designer roles.

About Eniola Abioye

Eniola helps UX researchers improve their research practice. Whether you’re seasoned and looking to level up or a new researcher looking to get your bearings in UX, Eniola can help you focus and apply your skillset. She is a UX Researcher and Founder of UX Outloud. As a career coach, she guides her clients through short and long term SMART goals and then works with them to build a strategic plan of attack. She is innately curious, a self-starter, adaptable, and communicative with a knack for storytelling.

Learn more about UX Outloud.

Connect with Eniola on Linkedin.

About the panelists 🧑🏽🤝🧑🏽

The panel was made up of talented professionals from a variety of fields including UX research, content strategy, product management & engineering, and career coaching. Their diverse perspectives led to an insightful and informative panel session. Keep reading to get to know each of the amazing panelists:

Growth Content Designer: Auset Parris is a growth content designer at Meta. She has spent 7 years navigating the ever-evolving landscape of content strategy. She is passionate about the role of user research in shaping content strategies. Furthermore, Auset believes that understanding user behavior and preferences is fundamental to creating content that not only meets but exceeds user expectations. 

Senior UX Researcher: Jasmine Williams, Ph.D. is a senior researcher with over a decade of experience conducting youth-focused research. She has deep expertise in qualitative methods, child and adolescent development, and social and emotional well-being. Jasmine is currently a user experience researcher at Meta and her work focuses on teen safety and wellbeing. 

Product Manager: Kwame Odame has over 7 years of high-tech experience working in product management and software engineering. At Meta, Kwame is currently responsible for building the product management direction for Fan Engagement on Facebook. Kwame has also helped build Mastercard’s SaaS authentication platform, enabling cardholders to quickly confirm their identity when a suspicious transaction occurred, leveraging biometric technology. 

UX Researcher (UXR): Fatimah Richmond is a well-rounded UX researcher with over 15 years of experience, having influenced enterprise products across leading tech giants like Google, SAP, Linkedin, and Microsoft. Fatimah has led strategy for research, programs and operations that have significantly impacted the UXR landscape, from clinician engagement strategist to reshaping Linkedin Recruiter and Jobs. As a forward thinker, she’s here to challenge our assumptions and the status quo on how research gets planned, communicated, and measured.

Career Coach: An Xia spent the first decade of her professional life in consulting and Big Tech data science (Netflix, Meta). As a career coach, An has supported clients in gaining clarity on their career goals, navigating challenges of career growth, and making successful transitions. As a somatic coach, An has helped clients tap into the wisdom of their soma to reconnect with what truly matters to them. 

UX Strategist: Natalie Gauvin is an experienced professional with a demonstrated history of purpose-driven work in agile software development and higher education. She is skilled in various research methodologies and is a Doctor of Philosophy (Ph.D.) candidate in Learning Design and Technology at the University of Hawaii at Manoa, focused on empathy in user experience through personas.

Level up your UXR capabilities (for free!) with the Optimal Academy 📚

Here at Optimal we really care about helping UX researchers level up their career. This is why we’ve developed the Optimal Academy, to help you master your Optimal Workshop skills and learn more about user research and information architecture.

Check out some of our free courses here: https://academy.optimalworkshop.com/


Card Sorting outside UX: How I use online card sorting for in-person sociological research

Hello, my name is Rick and I’m a sociologist. All together, “Hi, Rick!” Now that we’ve got that out of the way, let me tell you about how I use card sorting in my research. I'll soon be running a series of in-person, moderated card sorting sessions. This article covers why card sorting is an integral part of my research, and how I've designed the study to answer specific questions about two distinct parts of society.

Card sorting to establish how different people comprehend their worlds

Card sorting, or pile sorting as it’s sometimes called, has a long history in anthropology, psychology and sociology. Anthropologists, in particular, have used it to study how different cultures think about various categories. Researchers in the 1970s conducted card sorts to understand how different cultures categorize things like plants and animals. Sociologists of that era also used card sorts to examine how people think about different professions and careers. And since then, scholars have continued to use card sorts to learn about similar categorization questions.

In my own research, I study how different groups of people in the United States imagine the category of 'religion'. As those crazy 1970s anthropologists showed, card sorting is a great way to understand how people cognitively understand particular social categories. So, in particular, I’m using card sorting in my research to better understand how groups of people with dramatically different views understand 'religion' — namely, evangelical Christians and self-identified atheists. Think of it like this. Some people say that religion is the bedrock of American society.

Others say that too much religion in public life is exactly what’s wrong with this country. What's not often considered is that these two groups often understand the concept of 'religion' in very different ways. It’s like the group of blind men and the elephant: one touches the trunk, one touches the ears, and one touches the tail. All three come away with very different ideas of what an elephant is. So you could say that I study how different people experience the 'elephant' of religion in their daily lives. I’m doing so using primarily in-person moderated sorts on an iPad, which I’ll describe below.

How I generated the words on the cards

The first step in the process was to generate lists of relevant terms for my subjects to sort. Unlike in UX testing, where cards for sorting might come from an existing website, in my world these concepts first have to be mined from the group of people being studied. So the first thing I did was have members of both atheist and evangelical groups complete a free listing task. In a free listing task, participants simply list as many words as they can that meet the criteria given. Sets of both atheist and evangelical respondents were given the instructions: “What words best describe 'religion'? Please list as many as you can.” They were then also asked to list words that describe 'atheism', 'spirituality', and 'Christianity'.

I took the lists generated and standardized them by combining synonyms. For example, some of my atheists used words like 'ancient', 'antiquated', and 'archaic' to describe religion. So I combined all of these words into the one that was mentioned most: 'antiquated'. By doing this, I created a list of the most common words each group used to describe each category. Doing this also gave my research another useful dimension, ideal for exploring alongside my card sorting results. Free lists can be analyzed themselves using statistical techniques like multi-dimensional scaling, so I used this technique for a preliminary analysis of the words evangelicals used to describe 'atheism'.
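The synonym-merging step can be sketched in a few lines. The free-list data and synonym groupings below are invented for illustration; in practice the researcher decides which words count as equivalent.

```python
from collections import Counter

def consolidate(free_lists, synonym_sets):
    """Merge each synonym group into its most-mentioned variant, then
    count word frequency across all respondents' free lists."""
    raw = Counter(word for words in free_lists for word in words)
    canonical = {}
    for group in synonym_sets:
        top = max(group, key=lambda w: raw[w])  # most-mentioned variant
        for word in group:
            canonical[word] = top
    return Counter(canonical.get(w, w) for words in free_lists for w in words)

# Invented free-list responses describing 'religion'
lists = [
    ["antiquated", "community"],
    ["ancient", "antiquated", "ritual"],
    ["archaic", "community"],
]
merged = consolidate(lists, [{"ancient", "antiquated", "archaic"}])
# 'ancient' and 'archaic' now count toward 'antiquated'
```

The merged counts are what feed into downstream analyses such as multi-dimensional scaling.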

OptimalSort and sociological research

Now that I’m armed with these lists of words that atheists and evangelicals used to describe religion, atheism etc., I’m about to embark on phase two of the project: the card sort.

Why using card sorting software is a no-brainer for my research

I’ll be conducting my card sorts in person, for various reasons. I have relatively easy access to the specific population that I’m interested in, and for the kind of academic research I’m conducting, in-person activities are preferred. In theory, I could just print the words on some index cards and conduct a manual card sort, but I quickly realized that a software solution would be far preferable, for a bunch of reasons.

First of all, it's important for me to conduct interviews in coffee shops and restaurants, and an iPad on the table is, to put it mildly, more practical than a table covered in cards — no space for the teapot after all.

Second, using software eliminates the need for manual data entry on my part. Not only is manual data entry a time-consuming process, but it also introduces the possibility of data entry errors which may compromise my research results.

Third, while the bulk of the card sorts are going to be done in person, having an online version will enable me to scale the project up after the initial in-person sorts are complete. The atheist community, in particular, has a significant online presence, making a web solution ideal for additional data collection.

Fourth, OptimalSort gives the option to re-direct respondents after they complete a sort to any webpage, which allows multiple card sorts to be daisy-chained together. It also enables card sorts to be easily combined with complex survey instruments from other providers (e.g. Qualtrics or Survey Monkey), so card sorting data can be gathered in conjunction with other methodologies.
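As a sketch of how such daisy-chaining works in general, a redirect URL can carry a participant identifier as a query parameter so responses can be joined across studies. The parameter name and URLs here are assumptions for illustration, not OptimalSort’s actual mechanism.

```python
from urllib.parse import parse_qs, urlencode, urlparse

def chain_url(next_study_url, participant_id):
    """Build the redirect target for the next study in the chain,
    carrying the participant ID so responses can be joined later."""
    return f"{next_study_url}?{urlencode({'pid': participant_id})}"

url = chain_url("https://example.com/second-card-sort", "P-042")
# The next study (or a survey tool like Qualtrics) reads the ID back out:
pid = parse_qs(urlparse(url).query)["pid"][0]
```

Passing the same identifier through every hop is what lets a card sort, a second card sort, and a survey all be stitched into one participant record afterwards.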

Finally, and just as important, doing card sorts on a tablet is more fun for participants. After all, who doesn’t like to play with an iPad? If respondents enjoy the unique process of the experiment, this is likely to actually improve the quality of the data, and respondents are more likely to reflect positively on the experience, making recruitment easier. And a fun experience also makes it more likely that respondents will complete the exercise.

What my in-person, on-tablet card sorting research will look like

Respondents will be handed an iPad Air with 4G data capability. While the venues where the card sorts will take place usually have public Wi-Fi networks available, these networks are not always reliable, so the cellular data capabilities are needed as a back-up (and my pre-testing has shown that OptimalSort works on cellular networks too).

The iPad’s screen orientation will be locked to landscape and multi-touch functions will be disabled to prevent respondents from accidentally leaving the testing environment. In addition, respondents will have the option of using a rubber tipped stylus for ease of sorting the cards. While I personally prefer to use a microfiber tipped stylus in other applications, pre-testing revealed that an old fashioned rubber tipped stylus was easier for sorting activities.

[Image: using a tablet to conduct a card sort]

When the respondent receives the iPad, the first page of the card sort, with general instructions, will already be open on the tablet in the third party browser Perfect Web. A third party browser is necessary because it is best to run OptimalSort locked in a full screen mode, both for aesthetic reasons and to keep the screen simple and uncluttered for respondents. Perfect Web is currently the best choice in the ever shifting app landscape.

[Image: how participants see the cards on the tablet]

I'll give respondents their instructions and then go to another table to give them privacy (because who wants the creepy feeling of some guy hanging over you as you do stuff?). Altogether, respondents will complete two open card sorts and a few survey-style questions, all chained together by redirect URLs. First, they'll sort 30 cards into groups based on how they perceive 'religion', and name the categories they create. Then, they'll complete a similar card sort, this time based on how they perceive 'atheism'.

Both atheists and evangelicals will receive a mixture of some of the top words that both groups generated in the earlier free listing tasks. To finish, they'll answer a few questions that will provide further data on how they think about 'religion'. After I’ve conducted these card sorts with both of my target populations, I’ll analyze the resulting data on its own and also in conjunction with qualitative data I’ve already collected via ethnographic research and in-depth interviews. I can't wait, actually. In a few months I’ll report back and let you know what I’ve found.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.