
Decoding Taylor Swift: A data-driven deep dive into the Swiftie psyche 👱🏻‍♀️

Taylor Swift's music has captivated millions, but what do her fans really think about her extensive catalog? We've crunched the numbers, analyzed the data, and uncovered some fascinating insights into how Swifties perceive and categorize their favorite artist's work. Let's dive in!

The great debate: openers, encores, and everything in between ⋆.˚✮🎧✮˚.⋆

Our study asked fans to categorize Swift's songs into potential opening numbers, encores, and songs they'd rather not hear (affectionately dubbed "Nah" songs). The results? As diverse as Swift's discography itself!

Opening with a bang 💥

Swifties seem to agree that high-energy tracks make for the best concert openers, though the results are more nuanced than you might expect. "Shake It Off" emerged as the clear favorite for opening a concert, with 17 votes. "Love Story" follows closely behind with 14 votes, showing that nostalgia plays a significant role. Interestingly, "Cruel Summer" and "Blank Space" tied for third place with 13 votes each.

This mix of songs from different eras of Swift's career suggests that fans appreciate both her newer hits and classic favorites when it comes to kicking off a show. The strong showing for "Love Story" speaks to the power of nostalgia in concert experiences. It's worth noting that "...Ready for It?", while a popular song, received fewer votes (9) for the opening slot than might have been expected.
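Curious how tallies like these come together? Here's a minimal sketch in Python, assuming a hypothetical CSV export with one row per card placement and columns named participant, song, and category (a real card sorting tool's export will likely be laid out differently):

```python
import pandas as pd

# Hypothetical export: one row per placement, e.g.
# participant,song,category
# 1,Shake It Off,Opener
df = pd.read_csv("card_sort_results.csv")

# Count how many participants placed each song in each category
tallies = (
    df.groupby(["category", "song"])
      .size()
      .rename("votes")
      .reset_index()
)

# Top contenders for the opening slot
openers = tallies[tallies["category"] == "Opener"]
print(openers.sort_values("votes", ascending=False).head(5))
```

The same tallies frame gives you the encore and "Nah" rankings discussed below simply by swapping the category filter.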

Encore extravaganza 🎤

When it comes to encores, fans seem to favor a diverse mix of Taylor Swift's discography, with a surprising tie at the top. "Slut!" (Taylor's Version), "exile", "Guilty as Sin?", and "Bad Blood (Remix)" all received the highest number of votes with 13 each. This variety showcases the breadth of Swift's career and the different aspects of her artistry that resonate with fans for a memorable show finale.

Close behind are "evermore", "Wildest Dreams", "ME!", "Love Story", and "Lavender Haze", each garnering 12 votes. It's particularly interesting to see both newer tracks and classic hits like "Love Story" maintaining strong popularity for the encore slot. This balance suggests that Swifties appreciate both nostalgia and Swift's artistic evolution when it comes to closing out a concert experience.

The "Nah" list 😒

Interestingly, some of Taylor Swift's tracks found themselves on the "Nah" list, indicating that fans might prefer not to hear them in a concert setting. "Clara Bow" tops this category with 13 votes, closely followed by "You're On Your Own, Kid", "You're Losing Me", and "Delicate", each receiving 12 votes.

This doesn't necessarily mean fans dislike these songs - they might just feel they're not well-suited for live performances or don't fit as well into a concert setlist. It's particularly surprising to see "Delicate" on this list, given its popularity. The presence of both newer tracks like "Clara Bow" and older ones like "Delicate" suggests that the "Nah" list isn't tied to a specific era of Swift's career, but rather to individual song preferences in a live concert context.

It's worth noting that even popular songs can end up on this list, highlighting the complex relationship fans have with different tracks in various contexts. This data provides an interesting insight into how Swifties perceive songs differently when considering them for a live performance versus general listening.

The Similarity Matrix: set list synergies ⚡

Our similarity matrix revealed fascinating insights into how fans envision Taylor Swift's songs fitting together in a concert set list (a sketch of how such a matrix can be computed follows this list):

1. The "Midnights" Connection: Songs from "Midnights" like "Midnight Rain", "The Black Dog", and "The Tortured Poets Department" showed high similarity in set list placement. This suggests fans see these tracks working well in similar parts of a concert, perhaps as a cohesive segment showcasing the album's distinct sound.

2. Cross-album transitions: There's an intriguing connection between "Guilty as Sin?" and "exile", with a high similarity percentage. This indicates fans see these songs from different albums as complementary in a live setting, potentially suggesting a smooth transition point in the set list that bridges different eras of Swift's career.

3. The show-stoppers: "Shake It Off" stands out as dissimilar to most other songs in terms of placement. This likely reflects its perceived role as a high-energy, statement piece that occupies a unique position in the set list, perhaps as an opener, closer, or peak moment.

4. Set list evolution: There's a noticeable pattern of higher similarity between songs from the same or adjacent eras, suggesting fans envision distinct segments for different periods of Swift's career within the concert. This could indicate a preference for a chronological journey through her discography or strategic placement of different styles throughout the show.

5. Thematic groupings: Some songs from different albums showed higher similarity, such as "Is It Over Now? (Taylor's Version)" and "You're On Your Own, Kid". This suggests fans see them working well together in the set list based on thematic or emotional connections rather than just album cohesion.
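For the curious, here's a minimal sketch of how a similarity matrix like this can be computed, using the same hypothetical one-row-per-placement layout as the earlier tally sketch. Similarity between two songs is taken as the share of participants who sorted both into the same category; this is one common definition, not necessarily the exact formula our tool uses:

```python
import pandas as pd

df = pd.read_csv("card_sort_results.csv")  # columns: participant, song, category

# Rows: participants; columns: songs; values: the category each participant chose
placements = df.pivot(index="participant", columns="song", values="category")

songs = list(placements.columns)
similarity = pd.DataFrame(0.0, index=songs, columns=songs)

for a in songs:
    for b in songs:
        if a == b:
            similarity.loc[a, b] = 1.0
            continue
        # Share of participants who put both songs in the same category
        pair = placements[[a, b]].dropna()
        if len(pair):
            similarity.loc[a, b] = (pair[a] == pair[b]).mean()

# e.g. the "Guilty as Sin?" / "exile" pairing discussed above
print(similarity.loc["Guilty as Sin?", "exile"])
```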

What does it all mean?! 💃🏼📊

This card sort data paints a picture of an artist who continually evolves while maintaining certain core elements that define her work. Swift's ability to create cohesive album experiences, make bold stylistic shifts, and maintain thematic threads throughout her career is reflected in how fans perceive and categorize her songs. Moreover, the diversity of opinions on song categorization - with 59 different songs suggested as potential openers - speaks to the depth and breadth of Swift's discography. It also highlights the personal nature of music appreciation; what one fan sees as the perfect opener, another might categorize as a "Nah".

In the end, this analysis gives us a fascinating glimpse into the complex web of associations in Swift's discography. It shows us not just how Swift has evolved as an artist, but how her fans have evolved with her, creating deep and sometimes unexpected connections between songs across her entire career. Whether you're a die-hard Swiftie, a casual listener, or a weirdo who just loves a good card sort, one thing is clear: Taylor Swift's music is rich, complex, and deeply meaningful to her fans. And with each new album, she continues to surprise, delight, and challenge our expectations.

Conclusion: shaking up our understanding 🥤🤔

This deep dive into the Swiftie psyche through a card sort reveals the complexity of Taylor Swift's discography and fans' relationship with it. From strategic song placement in a dream setlist to unexpected cross-era connections, we've uncovered layers of meaning that showcase Swift's artistry and her fans' engagement. The exercise demonstrates how a song can be a potential opener, mid-show energy boost, poignant closer, or a skip-worthy track, highlighting Swift's ability to create diverse, emotionally resonant music that serves various roles in the listening experience.

The analysis underscores Swift's evolving career, with distinct album clusters alongside surprising connections, painting a picture of an artist who reinvents herself while maintaining a core essence. It also demonstrates how fan-driven analyses like card sorting can be insightful and engaging, offering a unique window into music fandom and reminding us that in Swift's discography, there's always more to discover. This exercise proves valuable whether you're a die-hard Swiftie, casual listener, or someone who loves to analyze pop culture phenomena.


Related articles


How to interpret your card sort results Part 2: closed card sorts and next steps

In Part 1 of this series we looked at how to interpret results from open and hybrid card sorts, and now in Part 2, we're going to talk about closed card sorts. In closed card sorts, participants are asked to sort the cards into predetermined categories and are not allowed to create any of their own. You might use this approach when you are constrained by specific category names, or as a quick checkup before launching a new or newly redesigned website.

In Part 1, we also discussed the two different, but complementary, types of analysis that are generally used together for interpreting card sort results: exploratory and statistical. Exploratory analysis is intuitive and creative, while statistical analysis is all about the numbers. Check out Part 1 for a refresher, or learn more about exploratory and statistical analysis in Donna Spencer's book.

Getting started

Closed card sort analysis is generally much quicker and easier than open and hybrid card sort analysis because there are no participant-created category names to analyze; it's really just about where the cards were placed. There are some similarities in how you might approach the analysis process, but overall there's a lot less information to take in, and there isn't as much drilling down into the details as we did in Part 1.

Just like with an open card sort, kick off your analysis process by taking an overall look at the results as a whole. Quickly cast your eye over each individual card sort and just take it all in. Look for common patterns in how the cards have been sorted. Does anything jump out as surprising? Are there similarities or differences between participant sorts?

If you're redesigning an existing information architecture (IA), how do your results compare to the current state? If this is a final check-up before launching a live website, how do these results compare to what you learned during your previous research studies?

If you ran your card sort using information architecture tool OptimalSort, head straight to the Overview and Participants Table presented in the results section of the tool. If you ran a moderated card sort using OptimalSort's printed cards, you've probably been scanning them in after each completed session, but now is a good time to double check you got them all. And if you didn't know about this handy feature of OptimalSort, it's something to keep in mind for next time!

The Participants Table shows a breakdown of your card sorting data by individual participant. Start by reviewing each individual card sort one by one by clicking on the arrow in the far left column next to the participant numbers. From here you can easily flick back and forth between participants without needing to close that modal window. Don't spend too much time on this — you're just trying to get a general impression of how the cards were sorted into your predetermined categories. Keep an eye out for any card sorts that you might like to exclude from the results: for example, participants who have lumped everything into one group and haven't actually sorted the cards.

Don't worry: excluding or including participants isn't permanent and can be toggled on or off at any time.

Once you're happy with the individual card sorts that will and won't be included in your results visualizations, it's time to take a look at the Results Matrix in OptimalSort. The Results Matrix shows the number of times each card was sorted into each of your predetermined categories; the higher the number, the darker the shade of blue (see below).

Results Matrix in OptimalSort.
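If you ever want to rebuild this kind of matrix outside the tool, a minimal sketch follows. It assumes a hypothetical export with one row per placement and columns participant, card, and category, not OptimalSort's actual export format:

```python
import pandas as pd

df = pd.read_csv("closed_sort_results.csv")  # columns: participant, card, category

# Results Matrix: the number of times each card landed in each category
results_matrix = pd.crosstab(df["card"], df["category"])
print(results_matrix)
```

Each cell holds the count for one card/category pair, which is exactly what the shading in the tool visualizes.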

The Results Matrix enables you to quickly and easily see how the cards were sorted and gauge the highest and lowest levels of agreement among your participants. This will tell you if you're on the right track, or highlight opportunities for further refinement of your categories.

If we take a closer look (see below), we can see that in this example closed card sort, conducted on the Dewey Decimal Classification system commonly used in libraries, The Interpretation of Dreams by Sigmund Freud was sorted into 'Philosophy and psychology' 38 times in a study completed by 51 participants.

Results Matrix in OptimalSort, zoomed in with hover.

In the real world, that is exactly where that content lives, and this is useful to know because it shows that the current state is supporting user expectations around findability reasonably well. Note: this particular example study used image-based cards instead of word-label cards, so the description that appears in both the grey box and down the left-hand side of the matrix is for reference purposes only and was hidden from the participants.

Sometimes you may come across cards that are popular in multiple categories. In our example study, How to Win Friends and Influence People by Dale Carnegie is popular in two categories: 'Philosophy & psychology' and 'Social sciences', with 22 and 21 placements respectively. The remaining card placements are scattered across a further five categories, although in much smaller numbers.

Results Matrix showing cards popular in multiple categories.

When this happens, it's up to you to determine what your number thresholds are. If it's a tie or really close, like it is in this case, you might review the results against any previous research studies to see if anything has changed or if this is something that comes up often. It might be a new category that you've just introduced, it might be an issue that hasn't been resolved yet, or it might just be limited to this one study. If you're really not sure, it's a good idea to run some in-person card sorts as well so you can ask questions and gain clarification around why your participants felt a card belonged in a particular category. If you've already done that, great! Time to review those notes and recordings!

You may also find yourself in a situation where no category is any more popular than the others for a particular card. This means there's not much agreement among your participants about where that card actually belongs. In our example closed card sort study, the World Book Encyclopedia was placed into 9 of 10 categories. While it was placed in 'History & geography' 18 times, that's still only 35% of the total placements for that card, which is hardly conclusive.

Results Matrix showing a card with a lack of agreement.

Sometimes this happens when the card label or image is quite general and could logically belong in many of the categories. In this case, an encyclopedia could easily fit into any of those categories, and I suspect this happened because people may not be aware that encyclopedias make up a very large part of the category on the far left of the above matrix: 'Computer science, information & general works'. You may also see this happening when a card is ambiguous and people have to guess where it might belong. Again, if you haven't already and you're in doubt, run some in-person card sorts so you can ask questions and get to the bottom of it!

After reviewing the Results Matrix in OptimalSort, visit the Popular Placements Matrix to see which cards were most popular for each of your categories based on how your participants sorted them (see the two images below).

Popular Placements Matrix in OptimalSort: top half of the diagram.

Popular Placements Matrix in OptimalSort: scrolled to show the bottom half of the diagram.

The diagram shades the most popular placements for each category in blue, making it very easy to spot what belongs where in the eyes of your participants. It's useful for quickly identifying clusters, and it also highlights the categories that didn't get a lot of card sorting love. In our example study (the two images above), we can see that 'Technology' wasn't a popular card category choice, potentially indicating ambiguity around that particular category name. As someone familiar with the Dewey Decimal Classification system, I know that 'Technology' is a bit of a tricky one because it contains a wide variety of content, including topics on medicine and food science; sometimes it will appear as 'Technology & applied sciences'. These results appear to support the case for exploring that alternative further!
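A rough equivalent of the Popular Placements view can be derived from the results matrix built in the earlier sketch: for each card, take the category that received the most placements and note what share of participants agreed. This is an illustrative approximation, not OptimalSort's actual calculation:

```python
import pandas as pd

df = pd.read_csv("closed_sort_results.csv")  # columns: participant, card, category
results_matrix = pd.crosstab(df["card"], df["category"])

# For each card: its most popular category, plus the share of participants
# who agreed with that placement (low shares flag ambiguous cards)
popular = pd.DataFrame({
    "top_category": results_matrix.idxmax(axis=1),
    "agreement": results_matrix.max(axis=1) / results_matrix.sum(axis=1),
})

# Sorting ascending surfaces cards like the encyclopedia example above,
# where the top category held only ~35% of placements
print(popular.sort_values("agreement"))
```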

Where to from here?

Now that we've looked at how to interpret your open, hybrid and closed card sorts, here are some next steps to help you turn those insights into action!

Once you've analyzed your card sort results, it's time to feed those insights into your design process and create your taxonomy, which goes hand in hand with your information architecture. You can build your taxonomy out in Post-it notes before popping it into a spreadsheet for review. This is also a great time to identify any alternate labelling and placement options that came out of your card sorting process for further testing. From here, you might move into tree testing your new IA, or you might run another card sort focussing on a specific area of your website. You can learn more about card sorting in general via our 101 guide.

When interpreting card sort results, don't forget to have fun! It's easy to get overwhelmed and bogged down in the results, but don't lose sight of the magic that is uncovering user insights. I'm going to leave you with this quote from Donna Spencer that summarizes the essence of card sort analysis quite nicely:

"Remember that you are the one who is doing the thinking, not the technique... you are the one who puts it all together into a great solution. Follow your instincts, take some risks, and try new approaches." - Donna Spencer

Further reading

  • Card Sorting 101 – Learn about the differences between open, closed and hybrid card sorts, and how to run your own using OptimalSort.


Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we're working on a product with only 90 internal users, our changes will have flow-on effects for tens of thousands of our members at any given time (from a total of around 3.6 million members).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job a client's CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools, reports and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and also how to interact with the different tools and reports, all of which may have been created independently (i.e. work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas quickly into systems. But expert users almost always end up regurgitating the system they're familiar with, as they've been trained by repeated use to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn't – most staff stick around for years) then you'll be spending a lot of money.

…and the one I've added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code or even the interaction for most of the reports, as this will be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggling six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern- and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms with arbitrary link names, edit the URL by hand to get to hidden reports, and generally expend more effort on finding the answer than on comprehending the answer.

Groundwork

The first thing that we did was to sit with CS and watch them work and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things as green (use heaps), orange (use sometimes) and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links but that everyone used a core set. Initially focussing on the core set, we set about understanding the tasks under those links.

The complexity of the job soon became apparent – with a complex system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are complex and detailed enough for there to be more than one way to achieve the same end, and often it's not possible to get a definitive answer, only to 'build a picture'. There's no one-to-one mapping of task to link. Links were also often arbitrarily named: 'SQL Lookup' being an example. The highly trained user base is dependent on muscle memory to find these links. This meant that when asked something like "What and where is the policing enquiry function?", many couldn't tell us what or where it was, but when they needed the report it contained they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We renamed the links to better describe their associated actions, and of course, they weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.

We built and tested two IAs

Pietree of the tree testing results.

After card sorting, we created two new IAs, and then customized one of the IAs for each of the three CS teams, giving us IAs to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree test were okay — around 61% — but 'Could try harder'. We saw very little overall difference between the success of the two structures, but definitely some differences in task success. And we also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some 'wrong' answers would give part of the picture required; in some cases so much so that I reclassified those answers as 'correct', as they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguous nature is hard to replicate in a tree test, which expects definitive yes-or-no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see screenshot below), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had presumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.  

What's clear from our analysis is that although it's possible to provide definitive answers for a typical site's IA, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM has proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as 'correct', one of the two trees was a clear winner: it had gone from 61% to 69%. The other tree had only improved slightly, from 61% to 63%.
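To make that reclassification step concrete, here's a minimal sketch of how the success rates could be recomputed. The column names and the 'partially correct' destinations are hypothetical stand-ins, not our real data or the testing tool's export format:

```python
import pandas as pd

# Hypothetical tree-test export: one row per participant per task
df = pd.read_csv("tree_test_results.csv")  # columns: tree, task, destination, correct
df["correct"] = df["correct"].astype(bool)

# Destinations an expert judged 'more right than wrong' (illustrative only)
partially_correct = {
    ("Check for cross-bidding", "Member history"),
}

# An answer counts if it was already correct, or if it was reclassified
df["reclassified"] = df["correct"] | df.apply(
    lambda row: (row["task"], row["destination"]) in partially_correct, axis=1
)

# Success rate per tree, before and after reclassification
print(df.groupby("tree")[["correct", "reclassified"]].mean())
```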

There were still elements of our winning structure that were performing sub-optimally, though. Generally, the problems were to do with labelling where, in some cases, we had attempted to disambiguate those 'SQL lookup'-type labels but, in the process, confused the team. We were left with the dilemma of whether to go with the new labels and make the system initially harder to use for existing staff but easier to learn for new staff, or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make them better.

This highlighted the importance of carefully structuring questions in a tree test, particularly in light of the 'start anywhere/go anywhere' nature of a CRM. The diffuse but powerful nature of a CRM means tree test answer options need careful consideration, in order to decide how close to a '100% correct answer' you want to get.

Development work has begun so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages from Trade Me Admin and continuing to conduct user research, including first-click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.


Card Sorting outside UX: How I use online card sorting for in-person sociological research

Hello, my name is Rick and I'm a sociologist. All together: "Hi, Rick!" Now that we've got that out of the way, let me tell you about how I use card sorting in my research. I'll soon be running a series of in-person, moderated card sorting sessions. This article covers why card sorting is an integral part of my research, and how I've designed the study to answer specific questions about two distinct parts of society.

Card sorting to establish how different people comprehend their worlds

Card sorting, or pile sorting as it's sometimes called, has a long history in anthropology, psychology and sociology. Anthropologists, in particular, have used it to study how different cultures think about various categories. Researchers in the 1970s conducted card sorts to understand how different cultures categorize things like plants and animals. Sociologists of that era also used card sorts to examine how people think about different professions and careers. And since then, scholars have continued to use card sorts to learn about similar categorization questions.

In my own research, I study how different groups of people in the United States imagine the category of 'religion'. As those crazy 1970s anthropologists showed, card sorting is a great way to understand how people cognitively understand particular social categories. So, in particular, I'm using card sorting in my research to better understand how groups of people with dramatically different views understand 'religion' — namely, evangelical Christians and self-identified atheists. Think of it like this. Some people say that religion is the bedrock of American society.

Others say that too much religion in public life is exactly what's wrong with this country. What's not often considered is that these two groups often understand the concept of 'religion' in very different ways. It's like the group of blind men and the elephant: one touches the trunk, one touches the ears, and one touches the tail. All three come away with very different ideas of what an elephant is. So you could say that I study how different people experience the 'elephant' of religion in their daily lives. I'm doing so using primarily in-person moderated sorts on an iPad, which I'll describe below.

How I generated the words on the cards

The first step in the process was to generate lists of relevant terms for my subjects to sort. Unlike in UX testing, where cards for sorting might come from an existing website, in my world these concepts first have to be mined from the group of people being studied. So the first thing I did was have members of both atheist and evangelical groups complete a free listing task. In a free listing task, participants simply list as many words as they can that meet the criteria given. Sets of both atheist and evangelical respondents were given the instructions: "What words best describe 'religion'? Please list as many as you can." They were then also asked to list words that describe 'atheism', 'spirituality', and 'Christianity'.

I took the lists generated and standardized them by combining synonyms. For example, some of my atheists used words like 'ancient', 'antiquated', and 'archaic' to describe religion. So I combined all of these words into the one that was mentioned most: 'antiquated'. By doing this, I created a list of the most common words each group used to describe each category. Doing this also gave my research another useful dimension, ideal for exploring alongside my card sorting results. Free lists can themselves be analyzed using statistical techniques like multi-dimensional scaling, so I used this technique for a preliminary analysis of the words evangelicals used to describe 'atheism':

Multi-dimensional scaling plot of the words evangelicals used to describe 'atheism'.
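For readers curious about the mechanics, here's a minimal sketch of how free lists can be fed into multi-dimensional scaling with Python and scikit-learn. The word lists below are invented stand-ins rather than my actual data, and simple co-occurrence across respondents' lists is just one way to derive the similarities:

```python
import numpy as np
from sklearn.manifold import MDS

# Invented free lists: each inner list is one respondent's words for 'atheism'
free_lists = [
    ["angry", "lost", "secular"],
    ["angry", "secular", "logical"],
    ["lost", "logical", "secular"],
]

words = sorted({w for lst in free_lists for w in lst})
idx = {w: i for i, w in enumerate(words)}

# Co-occurrence: how often two words appear in the same respondent's list
co = np.zeros((len(words), len(words)))
for lst in free_lists:
    for a in lst:
        for b in lst:
            co[idx[a], idx[b]] += 1

# Convert similarity to dissimilarity and embed the words in two dimensions
dissim = co.max() - co
np.fill_diagonal(dissim, 0)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)

for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```

Words that respondents tend to list together end up near each other in the resulting two-dimensional plot.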

Now that I'm armed with these lists of words that atheists and evangelicals used to describe religion, atheism and so on, I'm about to embark on phase two of the project: the card sort.

Why using card sorting software is a no-brainer for my research

I’ll be conducting my card sorts in person, for various reasons. I have relatively easy access to the specific population that I’m interested in, and for the kind of academic research I’m conducting, in-person activities are preferred. In theory, I could just print the words on some index cards and conduct a manual card sort, but I quickly realized that a software solution would be far preferable, for a bunch of reasons.

First of all, it's important for me to conduct interviews in coffee shops and restaurants, and an iPad on the table is, to put it mildly, more practical than a table covered in cards — no space for the teapot after all.

Second, using software eliminates the need for manual data entry on my part. Not only is manual data entry a time-consuming process, but it also introduces the possibility of data entry errors which may compromise my research results.

Third, while the bulk of the card sorts are going to be done in person, having an online version will enable me to scale the project up after the initial in-person sorts are complete. The atheist community, in particular, has a significant online presence, making a web solution ideal for additional data collection.

Fourth, OptimalSort gives the option to redirect respondents to any webpage after they complete a sort, which allows multiple card sorts to be daisy-chained together. It also enables card sorts to be easily combined with complex survey instruments from other providers (e.g. Qualtrics or Survey Monkey), so card sorting data can be gathered in conjunction with other methodologies.

Finally, and just as important, doing card sorts on a tablet is more fun for participants. After all, who doesn't like to play with an iPad? If respondents enjoy the unique process of the experiment, this is likely to actually improve the quality of the data, and respondents are more likely to reflect positively on the experience, making recruitment easier. And a fun experience also makes it more likely that respondents will complete the exercise.

What my in-person, on-tablet card sorting research will look like

Respondents will be handed an iPad Air with 4G data capability. While the venues where the card sorts will take place usually have public Wi-Fi networks available, these networks are not always reliable, so the cellular data capabilities are needed as a back-up (and my pre-testing has shown that OptimalSort works on cellular networks too).

The iPad's screen orientation will be locked to landscape and multi-touch functions will be disabled to prevent respondents from accidentally leaving the testing environment. In addition, respondents will have the option of using a rubber-tipped stylus for ease of sorting the cards. While I personally prefer a microfiber-tipped stylus in other applications, pre-testing revealed that an old-fashioned rubber-tipped stylus was easier for sorting activities.

Using a tablet to conduct a card sort.

When the respondent receives the iPad, the card sort's first page with general instructions will already be open on the tablet in the third-party browser Perfect Web. A third-party browser is necessary because it is best to run OptimalSort locked in full-screen mode, both for aesthetic reasons and to keep the screen simple and uncluttered for respondents. Perfect Web is currently the best choice in the ever-shifting app landscape.

Participants see the cards like this.

I'll give respondents their instructions and then go to another table to give them privacy (because who wants the creepy feeling of some guy hanging over you as you do stuff?). Altogether, respondents will complete two open card sorts and a few survey-style questions, all chained together by redirect URLs. First, they'll sort 30 cards into groups based on how they perceive 'religion', and name the categories they create. Then, they'll complete a similar card sort, this time based on how they perceive 'atheism'.

Both atheists and evangelicals will receive a mixture of some of the top words that both groups generated in the earlier free listing tasks. To finish, they'll answer a few questions that will provide further data on how they think about 'religion'.

After I've conducted these card sorts with both of my target populations, I'll analyze the resulting data on its own and also in conjunction with qualitative data I've already collected via ethnographic research and in-depth interviews. I can't wait, actually. In a few months I'll report back and let you know what I've found.
