September 16, 2024
6 min

The future of UX research: AI's role in analysis and synthesis

As artificial intelligence (AI) continues to advance and permeate various industries, the field of user experience (UX) research is no exception. 

At Optimal Workshop, our recent Value of UX report revealed that 68% of UX professionals believe AI will have the greatest impact on analysis and synthesis in the research project lifecycle. In this article, we'll explore the current and potential applications of AI in UX research (UXR), its limitations, and how the role of UX researchers may evolve alongside these technological advancements.

How researchers are already using AI

AI is already making inroads in UX research, primarily in tasks that involve processing large amounts of data, such as:

  • Automated transcription: AI-powered tools can quickly transcribe user interviews and focus group sessions, saving researchers significant time.

  • Sentiment analysis: Machine learning algorithms can analyze text data from surveys or social media to gauge overall user sentiment towards a product or feature.

  • Pattern recognition: AI can help identify recurring themes or issues in large datasets, potentially surfacing insights that might be missed by human researchers.

  • Data visualization: AI-driven tools can create interactive visualizations of complex data sets, making it easier for researchers to communicate findings to stakeholders.
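To make "sentiment analysis" concrete, here is a deliberately simplified, keyword-based scorer in Python. Real tools rely on trained machine-learning models; the word lists and function below are purely illustrative:

```python
# Minimal keyword-based sentiment scorer (illustrative only --
# production tools use trained machine-learning models).
POSITIVE = {"love", "great", "easy", "helpful", "intuitive"}
NEGATIVE = {"hate", "confusing", "slow", "broken", "frustrating"}

def sentiment(text: str) -> str:
    """Label text positive/negative/neutral by counting sentiment keywords."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love how easy the new checkout is"))  # positive
print(sentiment("The search is slow and confusing"))     # negative
```

Even this toy version shows why the technique scales: scoring ten thousand survey responses takes seconds, leaving the researcher to interpret why sentiment skews the way it does.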

As AI technology continues to evolve, its role in UX research is poised to expand, offering even more sophisticated tools and capabilities. While AI will undoubtedly enhance efficiency and uncover deeper insights, it's important to recognize that human expertise remains crucial in interpreting context, understanding nuanced user needs, and making strategic decisions. 

The future of UX research lies in the synergy between AI's analytical power and human creativity and empathy, promising a new era of user-centered design that is both data-driven and deeply insightful.

The potential for AI to accelerate UXR processes

As AI capabilities advance, the potential to accelerate UX research processes grows exponentially. We anticipate AI revolutionizing UXR by enabling rapid synthesis of qualitative data, offering predictive analysis to guide research focus, automating initial reporting, and providing real-time insights during user testing sessions. 

These advancements could dramatically enhance the efficiency and depth of UX research, allowing researchers to process larger datasets, uncover hidden patterns, and generate insights faster than ever before. As we continue to develop our platform, we're exploring ways to harness these AI capabilities, aiming to empower UX professionals with tools that amplify their expertise and drive more impactful, data-driven design decisions.

AI’s good, but it’s not perfect

While AI shows great promise in accelerating certain aspects of UX research, it's important to recognize its limitations, particularly when it comes to understanding the nuances of human experience. AI may struggle to grasp the full context of user responses, missing subtle cues or cultural nuances that human researchers would pick up on. Moreover, the ability to truly empathize with users and understand their emotional responses is a uniquely human trait that AI cannot fully replicate. These limitations underscore the continued importance of human expertise in UX research, especially when dealing with complex, emotionally charged user experiences.

Furthermore, the creative problem-solving aspect of UX research remains firmly in the human domain. While AI can identify patterns and trends with remarkable efficiency, the creative leap from insight to innovative solution still requires human ingenuity. UX research often deals with ambiguous or conflicting user feedback, and human researchers are better equipped to navigate these complexities and make nuanced judgment calls. As we move forward, the most effective UX research strategies will likely involve a symbiotic relationship between AI and human researchers, leveraging the strengths of both to create more comprehensive, nuanced, and actionable insights.

Ethical considerations and data privacy concerns

As AI becomes more integrated into UX research processes, several ethical considerations come to the forefront. Data security emerges as a paramount concern, with our report highlighting it as a significant factor when adopting new UX research tools. Ensuring the privacy and protection of user data becomes even more critical as AI systems process increasingly sensitive information. Additionally, we must remain vigilant about potential biases in AI algorithms that could skew research results or perpetuate existing inequalities, potentially leading to flawed design decisions that could negatively impact user experiences.

Transparency and informed consent also take on new dimensions in the age of AI-driven UX research. It's crucial to maintain clarity about which insights are derived from AI analysis versus human interpretation, ensuring that stakeholders understand the origins and potential limitations of research findings. As AI capabilities expand, we may need to revisit and refine informed consent processes, ensuring that users fully comprehend how their data might be analyzed by AI systems. These ethical considerations underscore the need for ongoing dialogue and evolving best practices in the UX research community as we navigate the integration of AI into our workflows.

The evolving role of researchers in the age of AI

As AI technologies advance, the role of UX researchers is not being replaced but rather evolving and expanding in crucial ways. Our Value of UX report reveals that while 35% of organizations consider their UXR practice to be "strategic" or "leading," there's significant room for growth. This evolution presents an opportunity for researchers to focus on higher-level strategic thinking and problem-solving, as AI takes on more of the data processing and initial analysis tasks.

The future of UX research lies in a symbiotic relationship between human expertise and AI capabilities. Researchers will need to develop skills in AI collaboration, guiding and interpreting AI-driven analyses to extract meaningful insights. Moreover, they will play a vital role in ensuring the ethical use of AI in research processes and critically evaluating AI-generated insights. As AI becomes more prevalent, UX researchers will be instrumental in bridging the gap between technological capabilities and genuine human needs and experiences.

Democratizing UXR through AI

The integration of AI into UX research processes holds immense potential for democratizing the field, making advanced research techniques more accessible to a broader range of organizations and professionals. Our report indicates that while 68% believe AI will impact analysis and synthesis, only 18% think it will affect co-presenting findings, highlighting the enduring value of human interpretation and communication of insights.

At Optimal Workshop, we're excited about the possibilities AI brings to UX research. We envision a future where AI-powered tools can lower the barriers to entry for conducting comprehensive UX research, allowing smaller teams and organizations to gain deeper insights into their users' needs and behaviors. This democratization could lead to more user-centered products and services across various industries, ultimately benefiting end-users.

However, as we embrace these technological advancements, it's crucial to remember that the core of UX research remains fundamentally human. The unique skills of empathy, contextual understanding, and creative problem-solving that human researchers bring to the table will continue to be invaluable. As we move forward, UX researchers must stay informed about AI advancements, critically evaluate their application in research processes, and continue to advocate for the human-centered approach that is at the heart of our field.

By leveraging AI to handle time-consuming tasks and uncover patterns in large datasets, researchers can focus more on strategic interpretation, ethical considerations, and translating insights into impactful design decisions. This shift not only enhances the value of UX research within organizations but also opens up new possibilities for innovation and user-centric design.

As we continue to develop our platform at Optimal Workshop, we're committed to exploring how AI can complement and amplify human expertise in UX research, always with the goal of creating better user experiences.

The future of UX research is bright, with AI serving as a powerful tool to enhance our capabilities, democratize our practices, and ultimately create more intuitive, efficient, and delightful user experiences for people around the world.

Author: Optimal Workshop

Related articles

Designing User Experiences for Agentic AI: The Next Frontier

Beyond Generative AI: A New Paradigm Emerges

The AI landscape is undergoing a profound transformation. While generative AI has captured public imagination with its ability to create content, a new paradigm is quietly revolutionizing how we think about human-computer interaction: Agentic AI.

Unlike traditional software that waits for explicit commands or generative AI focused primarily on content creation, Agentic AI represents a fundamental shift toward truly autonomous systems. These advanced AI agents can independently make decisions, take actions, and solve complex problems with minimal human oversight. Rather than simply responding to prompts, they proactively work toward goals, demonstrating initiative and adaptability that more closely resembles human collaboration than traditional software interaction.

This evolution is already transforming industries across the board:

  • In customer service, AI agents handle complex inquiries end-to-end
  • In software development, they autonomously debug code and suggest improvements
  • In healthcare, they monitor patient data and flag concerning patterns
  • In finance, they analyze market trends and execute optimized strategies
  • In manufacturing and logistics, they orchestrate complex operations with minimal human intervention

As these autonomous systems become more prevalent, designing exceptional user experiences for them becomes not just important, but essential. The challenge? Traditional UX approaches built around graphical user interfaces and direct manipulation fall short when designing for AI that thinks and acts independently.

The New Interaction Model: From Commands to Collaboration

Interacting with Agentic AI represents a fundamental departure from conventional software experiences. The predictable, structured nature of traditional GUIs—with their buttons, menus, and visual feedback—gives way to something more fluid, conversational, and at times, unpredictable.

The ideal Agentic AI experience feels less like operating a tool and more like collaborating with a capable teammate. This shift demands that UX designers look beyond the visual aspects of interfaces to consider entirely new interaction models that emphasize:

  • Natural language as the primary interface
  • The AI's ability to take initiative appropriately
  • Establishing the right balance of autonomy and human control
  • Building and maintaining trust through transparency
  • Adapting to individual user preferences over time

The core challenge lies in bridging the gap between users accustomed to direct manipulation of software and the more abstract interactions inherent in systems that can think and act independently. How do we design experiences that harness the power of autonomy while maintaining the user's sense of control and understanding?

Understanding Users in the Age of Autonomous AI

The foundation of effective Agentic AI design begins with deep user understanding. Expectations for these autonomous agents are shaped by prior experiences with traditional AI assistants but require significant recalibration given their increased autonomy and capability.

Essential UX Research Methods for Agentic AI

Several research methodologies prove particularly valuable when designing for autonomous agents:

User Interviews provide rich qualitative insights into perceptions, trust factors, and control preferences. These conversations reveal the nuanced ways users think about AI autonomy—often accepting it readily for low-stakes tasks like calendar management while requiring more oversight for consequential decisions like financial planning.

Usability Testing with Agentic AI prototypes reveals how users react to AI initiative in real-time. Observing these interactions highlights moments where users feel empowered versus instances where they experience discomfort or confusion when the AI acts independently.

Longitudinal Studies track how user perceptions and interaction patterns evolve as the AI learns and adapts to individual preferences. Since Agentic AI improves through use, understanding this relationship over time provides critical design insights.

Ethnographic Research offers contextual understanding of how autonomous agents integrate into users' daily workflows and environments. This immersive approach reveals unmet needs and potential areas of friction that might not emerge in controlled testing environments.

Key Questions to Uncover

Effective research for Agentic AI should focus on several fundamental dimensions:

Perceived Autonomy: How much independence do users expect and desire from AI agents across different contexts? When does autonomy feel helpful versus intrusive?

Trust Factors: What elements contribute to users trusting an AI's decisions and actions? How quickly is trust lost when mistakes occur, and what mechanisms help rebuild it?

Control Mechanisms: What types of controls (pause, override, adjust parameters) do users expect to have over autonomous systems? How can these be implemented without undermining the benefits of autonomy?

Transparency Needs: What level of insight into the AI's reasoning do users require? How can this information be presented effectively without overwhelming them with technical complexity?

The answers to these questions vary significantly across user segments, task types, and domains—making comprehensive research essential for designing effective Agentic AI experiences.

Core UX Principles for Agentic AI Design

Designing for autonomous agents requires a unique set of principles that address their distinct characteristics and challenges:

Clear Communication

Effective Agentic AI interfaces facilitate natural, transparent communication between user and agent. The AI should clearly convey:

  • Its capabilities and limitations upfront
  • When it's taking action versus gathering information
  • Why it's making specific recommendations or decisions
  • What information it's using to inform its actions

Just as with human collaboration, clear communication forms the foundation of successful human-AI partnerships.

Robust Feedback Mechanisms

Agentic AI should provide meaningful feedback about its operations and make it easy for users to provide input on its performance. This bidirectional exchange enables:

  • Continuous learning and refinement of the agent's behavior
  • Adaptation to individual user preferences
  • Improved accuracy and usefulness over time

The most effective agents make feedback feel conversational rather than mechanical, encouraging users to shape the AI's behavior through natural interaction.

Thoughtful Error Handling

How an autonomous agent handles mistakes significantly impacts user trust and satisfaction. Effective error handling includes:

  • Proactively identifying potential errors before they occur
  • Clearly communicating when and why errors happen
  • Providing straightforward paths for recovery or human intervention
  • Learning from mistakes to prevent recurrence

The ability to gracefully manage errors and learn from them is often what separates exceptional Agentic AI experiences from frustrating ones.

Appropriate User Control

Users need intuitive mechanisms to guide and control autonomous agents, including:

  • Setting goals and parameters for the AI to work within
  • The ability to pause or stop actions in progress
  • Options to override decisions when necessary
  • Preferences that persist across sessions

The level of control should adapt to both user expertise and task criticality, offering more granular options for advanced users or high-stakes decisions.
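One way to picture stakes-adaptive control is an agent that runs low-stakes actions automatically but pauses for approval on high-stakes ones. Everything below (the `Action` type, the LOW/HIGH tiers, the `approve` callback) is a hypothetical sketch, not a prescribed API:

```python
# Illustrative sketch: gate autonomous actions by how consequential they are.
# LOW-stakes actions run automatically; HIGH-stakes actions need user approval.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    stakes: str  # "LOW" or "HIGH" -- a toy two-tier model

def execute(action: Action, approve) -> str:
    """Run low-stakes actions directly; ask the user first for high-stakes ones."""
    if action.stakes == "HIGH" and not approve(action):
        return f"skipped: {action.description}"
    return f"done: {action.description}"

# Usage: with an approve callback that declines everything,
# only the low-stakes action goes through.
print(execute(Action("reschedule meeting", "LOW"), approve=lambda a: False))
# done: reschedule meeting
print(execute(Action("transfer $5,000", "HIGH"), approve=lambda a: False))
# skipped: transfer $5,000
```

In a real system the tiers would come from user-configured preferences rather than a hard-coded label, which is exactly the persistence-across-sessions point in the list above.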

Balanced Transparency

Effective Agentic AI provides appropriate visibility into its reasoning and decision-making processes without overwhelming users. This involves:

  • Making the AI's "thinking" visible and understandable
  • Explaining data sources and how they influence decisions
  • Offering progressive disclosure—basic explanations for casual users, deeper insights for those who want them

Transparency builds trust by demystifying what might otherwise feel like a "black box" of AI decision-making.

Proactive Assistance

Perhaps the most distinctive aspect of Agentic AI is its ability to anticipate needs and take initiative, offering:

  • Relevant suggestions based on user context
  • Automation of routine tasks without explicit commands
  • Timely information that helps users make better decisions

When implemented thoughtfully, this proactive assistance transforms the AI from a passive tool into a true collaborative partner.

Building User Confidence Through Transparency and Explainability

For users to embrace autonomous agents, they need to understand and trust how these systems operate. This requires both transparency (being open about how the system works) and explainability (providing clear reasons for specific decisions).

Several techniques can enhance these critical qualities:

  • Feature visualization that shows what the AI is "seeing" or focusing on
  • Attribution methods that identify influential factors in decisions
  • Counterfactual explanations that illustrate "what if" scenarios
  • Natural language explanations that translate complex reasoning into simple terms

From a UX perspective, this means designing interfaces that:

  • Clearly indicate when users are interacting with AI versus human systems
  • Make complex decisions accessible through visualizations or natural language
  • Offer progressive disclosure—basic explanations by default with deeper insights available on demand
  • Implement audit trails documenting the AI's actions and reasoning

The goal is to provide the right information at the right time, helping users understand the AI's behavior without drowning them in technical details.
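As a rough sketch of what an audit trail might record, the snippet below logs each agent action together with its stated reasoning and data sources. The field names and the example entry are hypothetical, chosen only to show the shape of such a log:

```python
# Hypothetical audit-trail entry: every agent action is recorded with its
# reasoning and data sources, so users and auditors can review decisions later.
import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def log_action(action: str, reasoning: str, data_sources: list[str]) -> None:
    """Append a timestamped, reviewable record of one agent decision."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "reasoning": reasoning,
        "data_sources": data_sources,
    })

log_action(
    action="rebooked flight to 9:40am",
    reasoning="original flight cancelled; earliest alternative on preferred airline",
    data_sources=["calendar", "airline status feed"],
)
print(json.dumps(audit_log[0], indent=2))
```

The same records can back progressive disclosure: show the `action` field by default, and reveal `reasoning` and `data_sources` only when the user asks why.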

Embracing Iteration and Continuous Testing

The dynamic, learning nature of Agentic AI makes traditional "design once, deploy forever" approaches inadequate. Instead, successful development requires:

Iterative Design Processes

  • Starting with minimal viable agents and expanding capabilities based on user feedback
  • Incorporating user input at every development stage
  • Continuously refining the AI's behavior based on real-world interaction data

Comprehensive Testing Approaches

  • A/B testing different AI behaviors with actual users
  • Implementing feedback loops for ongoing improvement
  • Monitoring key performance indicators related to user satisfaction and task completion
  • Testing for edge cases, adversarial inputs, and ethical alignment

Cross-Functional Collaboration

  • Breaking down silos between UX designers, AI engineers, and domain experts
  • Ensuring technical capabilities align with user needs
  • Creating shared understanding of both technical constraints and user expectations

This ongoing cycle of design, testing, and refinement ensures Agentic AI continuously evolves to better serve user needs.

Learning from Real-World Success Stories

Several existing applications offer valuable lessons for designing effective autonomous systems:

Autonomous Vehicles demonstrate the importance of clearly communicating intentions, providing reassurance during operation, and offering intuitive override controls for passengers.

Smart Assistants like Alexa and Google Assistant highlight the value of natural language processing, personalization based on user preferences, and proactive assistance.

Robotic Systems in industrial settings showcase the need for glanceable information, simplified task selection, and workflows that ensure safety in shared human-robot environments.

Healthcare AI emphasizes providing relevant insights to professionals, automating routine tasks to reduce cognitive load, and enhancing patient care through personalized recommendations.

Customer Service AI prioritizes personalized interactions, 24/7 availability, and the ability to handle both simple requests and complex problem-solving.

These successful implementations share several common elements:

  • They prioritize transparency about capabilities and limitations
  • They provide appropriate user control while maximizing the benefits of autonomy
  • They establish clear expectations about what the AI can and cannot do

Shaping the Future of Human-Agent Interaction

Designing user experiences for Agentic AI represents a fundamental shift in how we think about human-computer interaction. The evolution from graphical user interfaces to autonomous agents requires UX professionals to:

  • Move beyond traditional design patterns focused on direct manipulation
  • Develop new frameworks for building trust in autonomous systems
  • Create interaction models that balance AI initiative with user control
  • Embrace continuous refinement as both technology and user expectations evolve

The future of UX in this space will likely explore more natural interaction modalities (voice, gesture, mixed reality), increasingly sophisticated personalization, and thoughtful approaches to ethical considerations around AI autonomy.

For UX professionals and AI developers alike, this new frontier offers the opportunity to fundamentally reimagine the relationship between humans and technology—moving from tools we use to partners we collaborate with. By focusing on deep user understanding, transparent design, and iterative improvement, we can create autonomous AI experiences that genuinely enhance human capability rather than simply automating tasks.

The journey has just begun, and how we design these experiences today will shape our relationship with intelligent technology for decades to come.

How to conduct a user interview

Few UX research techniques can surpass the user interview for the simple fact that you can gain a number of in-depth insights by speaking to just a handful of people. Yes, the prospect of sitting down in front of your customers can be a daunting one, but you’ll gain a level of insight and detail that really is tough to beat.

This research method is popular for a reason – it’s extremely flexible and can deliver deep, meaningful results in a relatively short amount of time.

We’ve put together this article for both user interview newbies and old hands alike. Our intention is to give you a guide that you can refer back to so you can make sure you're getting the most out of this technique. Of course, feel free to leave a comment if you think there’s something else we should add.

What is a user interview?

User interviews are a technique you can use to capture qualitative information from your customers and other people you’re interested in learning from. For example, you may want to interview a group of retirees before developing a new product aimed at their market.

User interviews usually follow the format of a guided conversation, diving deep into a particular topic. While sometimes you may have some predefined questions or topics to cover, the focus of your interviews can change depending on what you learn along the way.

Given the format, user interviews can help you answer any number of questions, such as:

  • How do people currently shop online? Are there any products they would never consider purchasing this way?
  • How do people feel about using meal delivery services? What stops them from trying them out?
  • How do ride sharing drivers figure out which app to use when they’re about to start a shift?

It’s important to remember that user interviews are all about people's perception of something, not usability. What this means in practical terms is that you shouldn’t go into a user interview expecting to find out how they navigate through a particular app, product or website. Those are answers you can gain through usability testing.

When should you interview your users?

Now that we have an understanding of what user interviews are and the types of questions this method can help you answer, when should you do them? As this method will give you insights into why people think the way they do, what they think is important and any suggestions they have, they’re mostly useful in the discovery stages of the design process when you're trying to understand the problem space.

You may want to run a series of user interviews at the start of a project in order to inform the design process. Interviews with users can help you to create detailed personas, generate feature ideas based on real user needs and set priorities. Looked at another way, doesn’t it seem like an unnecessary risk not to talk to your users before building something for them?

Plan your research

Before sitting down and writing your user interview, you need to figure out your research question. This is the primary reason for running your user interviews – your ‘north star’. It’s also a good idea to engage with your stakeholders when trying to figure this question out as they’ll be able to give you useful insights and feedback.

A strong research question will help you to create interview questions that are aligned and give you a clear goal. The key thing is to make sure that it's a strong, concise goal that relates to specific user behaviors. You don't want to start planning for your interview with a research question like “How do customers use our mobile app?” It's far too broad to direct your interview planning.

Write your questions

Now it’s time to write your user interview questions. If you’ve taken the time to engage with stakeholders and you’ve created a solid research question, this step should be relatively straightforward.

Here are a few things to focus on when writing your interview questions:

  • Encourage your interviewees to tell stories: There's a direct correlation between the questions you write for a user interview and the answers you get back. Consider more open-ended questions, with the aim of getting your interviewees to tell you stories and share more detail. For example, “Tell me about the last car you owned” is much better than “What was the last car you owned?”.
  • Consider different types of questions: You don't want to dive right into the complex, detailed questions when your interviewee has barely walked into the room. It's much better to start an interview with several ‘warm-up’ questions, like “What do you do for work?” and “How often do you use a computer at home?”. Answering these will put your interviewee in the right frame of mind for the rest of the interview.
  • Start with as many questions as you can think of – then trim: This can be quite a helpful exercise. When you’re actually putting pen to paper (or fingers to keyboard) and writing your questions, go broad at first. Then, once you’ve got a large selection to choose from, trim them back.
  • Have someone review your questions: Whether it’s another researcher on your team or perhaps someone who’s familiar with the audience you plan to interview, get another pair of eyes on your questions. Beyond just making sure they all make sense and are appropriate, they may be able to point out any questions you may have missed.

Recruit participants

Having a great set of questions is all well and good, but you need to interview the right kind of people. It’s not always easy. Finding representative or real users can quickly suck up a lot of time and bog down your other work. But this doesn’t have to be the case. With some strategy and planning you can make the process of participant recruitment quick and easy.

There are 2 main ways to go about recruitment. You can either handle the process yourself – we’ll share some tips for how to do this below – or use a recruitment service. Using a dedicated recruitment service will save you the hassle of actively searching for participants, which can often become a significant time-sink.

If you’re planning to recruit people yourself, here are a few ways to go about the process. You may find that using multiple methods is the best way to net the pool of participants you need.

  • Reach out to your customer support team: There’s a ready source of real users available in every organization: the customer support team. These are the people that speak to your organization’s customers every day, and have a direct line to their problems and pain points. Working with this team is a great way to access suitable participants, plus customers will value the fact that you’re taking the time to speak to them.
  • Recruit directly from your website: Support messaging apps like Intercom and intercept recruiting tools like Ethnio allow you to recruit participants directly from your website by serving up live intercepts. This is a fast, relatively hands-off way to recruit people quickly.
  • Ask your social media followers: LinkedIn, Twitter and Facebook can be great sources of research participants. There’s also the bonus that you can broadcast the fact that your organization focuses on research – and that’s always good publicity! If you don’t have a large following, you can also run paid ads on different social platforms.

Once a pool of participants starts to flow in, consider setting up a dedicated research panel where you can log their details and willingness to take part in future research. It may take some admin at the start, but you'll save time in the long run.

Note: Figure out a plan for participant data protection before you start collecting and storing their information. As the researcher, it’s up to you to take proper measures for privacy and confidentiality, from the moment you collect an email address until you delete it. Only store information in secure locations, and make sure you get consent before you ever turn on a microphone recorder or video camera.

Run your interviews

Now for the fun part – running your user interviews. In most cases, user interviews follow a simple format. You sit down next to your participant and run through your list of questions, veering into new territory if you sense an interesting discussion. At the end, you thank them for their time and pass along a small gift (such as a voucher) as a thank-you.

Of course, there are a few other things that you’ll want to keep in mind if you really want to conduct the best possible interviews.

  • Involve others: User interviews are a great way to show the value of research and give people within your organization a direct insight into how users think. There are no hard and fast rules around who you should bring to a user interview, just consider how useful the experience is likely to be for them. If you like, you can also assign them the role of notetaker.
  • Record the interview: You’ll have to get consent from the interviewee, but having a recording of the interview will make the process of analysis that much easier. In addition to being able to listen to the recording again, you can convert the entire session into a searchable text file.
  • Don’t be afraid to go off-script: Interviewing is a skill, meaning that the more interviews you conduct, the better you’re going to get. Over time, you’ll find that you’re able to naturally guide the conversation in different directions as you pick up on things the interviewee says. Don’t be discouraged if you find yourself sticking to your prepared questions during your first few interviews.
  • Be attentive: You don’t want to come across as a brick wall when interviewing someone – you want to be seen as an attentive listener. This means confirming that you’re listening by nodding, making eye contact and asking follow-up questions naturally (this last one may take practice). If you really struggle to ask follow-up questions, try writing a few generic questions that you can use at different points throughout the interview, for example “Could you tell me more about that?”. There’s a great guide on UXmatters about the role empathy has to play in understanding users.
  • Debrief afterwards: Whether it’s just you or you and a notetaker, take some time after the interview to go over how it went. This is a good opportunity to note down any details you may have missed and to reflect on and discuss some of the key takeaways.

Analyze your interview findings

At first glance, analyzing the qualitative data you’ve captured from a user interview can seem daunting. But with the right approach (and some useful tools) you can extract every useful insight.

If you’ve recorded your interview sessions, you’ll need to convert your audio recordings into text files. We recommend a tool like Descript. This software makes it easy to turn an audio recording into a document, which is much faster than transcribing it manually. If you like, there’s also the option of various ‘white glove’ services where someone will transcribe the interview for you.

With your interview recordings transcribed and notes in hand, you can start the process of thematic analysis. If you’re unfamiliar, thematic analysis is one of the most popular approaches for qualitative research, as it helps you find patterns and themes in your data. There are two ways to approach it. The first is largely manual: set up a spreadsheet with themes like ‘navigation issue’ and ‘design problem’, and group your findings under these headings. This can also be done with sticky notes, which used to be a common way to analyze findings.

The second involves a dedicated qualitative research tool like Reframer. You log your notes over the course of several interview sessions and then use Reframer’s tagging functionality to assign tags to different insights. By applying tags to your observations, you can use the analysis features to build wider themes. The real benefit here is that there’s no chance of losing your past interviews and analysis, as everything is stored in one place. You can also easily download your findings as a spreadsheet to share with your team.
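Whichever tool you use, the core of tag-based analysis is simple: each observation carries one or more tags, and counting tags across sessions surfaces the strongest themes. Here’s a minimal Python sketch of that idea (the observations and tags below are invented for illustration, not taken from a real study):

```python
from collections import Counter

# Observations logged across interview sessions, each with its tags.
# These example notes and tags are hypothetical.
observations = [
    {"note": "Couldn't find the pricing page from the homepage", "tags": ["navigation", "pricing"]},
    {"note": "Expected search to be at the top right", "tags": ["navigation", "search"]},
    {"note": "Liked the onboarding checklist", "tags": ["onboarding"]},
    {"note": "Confused by the label 'Workspace'", "tags": ["terminology", "navigation"]},
]

# Count how often each tag appears to surface the strongest themes.
theme_counts = Counter(tag for obs in observations for tag in obs["tags"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Ranking tags by frequency is exactly the “heat map” view you’re after: here ‘navigation’ would rise to the top, flagging it as the theme to dig into first.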

What’s next?

With your interviews all wrapped up and your analysis underway, you’re likely wondering what’s next. There’s a good chance your interviews will have opened up new areas you’d like to test, so now could be the perfect time to assess other qualitative research methods and add more human data to your research project. On the other hand, you may want to move on to quantitative research and put some numbers behind your findings.

Whether you choose to proceed down a qualitative or quantitative path, we’ve pulled together some more useful articles for you to read:


My journey running a design sprint

Recently, everyone in the design industry has been talking about design sprints. So, naturally, the team at Optimal Workshop wanted to see what all the fuss was about. I picked up a copy of The Sprint Book and suggested to the team that we try out the technique.

In order to keep momentum, we identified a current problem and decided to run the sprint only two weeks later. The short notice was a bit of a challenge, but in the end we made it work. Here’s a rundown of how things went, what worked, what didn’t, and lessons learned.

A sprint is an intensive, focused period of time to get a product or feature designed and tested, with the goal of knowing whether or not the team should keep investing in the idea’s development. The idea needs to be either validated or invalidated by the end of the sprint. This saves time and resources further down the track, as the team can pivot early if the idea doesn’t float.

If you’re following The Sprint Book, you might have a structured five-day plan that looks like this:

  • Day 1 - Understand: Discover the business opportunity, the audience, the competition, the value proposition and define metrics of success.
  • Day 2 - Diverge: Explore, develop and iterate creative ways of solving the problem, regardless of feasibility.
  • Day 3 - Converge: Identify ideas that fit the next product cycle and explore them in further detail through storyboarding.
  • Day 4 - Prototype: Design and prepare prototype(s) that can be tested with people.
  • Day 5 - Test: User testing with the product's primary target audience.
Design sprint cycle
 With a design sprint, a product doesn’t need to go full cycle for the team to learn about the opportunities and gather feedback.

When you’re running a design sprint, it’s important to have the right people in the room. It’s all about focus and working fast; you need the right people around to do this without hitting blocks along the way. Team, stakeholder and expert buy-in is key — this is not a task just for a design team! After getting buy-in and picking the people who should be involved (developers, designers, product owner, customer success rep, marketing rep, user researcher), these were my next steps:

Pre-sprint

  1. Read the book
  2. Panic
  3. Send out invites
  4. Write the agenda
  5. Book a meeting room
  6. Organize food and coffee
  7. Get supplies (Post-its, paper, Sharpies, laptops, chargers, cameras)

Some fresh smoothies for the sprinters, made by our juice technician

The sprint

Due to scheduling issues we had to split the sprint across two working weeks. Sprint guidelines suggest you hold it from Monday to Friday, which is a nice block of time, but we had to do Thursday to Thursday with the weekend off in between — and that actually worked really well. We’re all self-confessed introverts and, to be honest, the thought of spending five solid days workshopping was daunting. About two days in we were exhausted, so we went away for the weekend and came back on Monday feeling sociable, recharged and ready to examine the work we’d done in the first two days with fresh eyes.

Design sprint activities

During our sprint we completed a range of different activities, but here’s a list of some that worked well for us. You can find out more about how to run most of these over at The Sprint Book website, or check out some great resources over at Design Sprint Kit.

Lightning talks

We kicked off our sprint by having each person give a quick five-minute talk on one of the topics listed below. This gave us all an overview of the whole project, and since we each had to present, we each became the expert in our area and engaged with the topic (rather than just listening to one person deliver all the information).

Our lightning talk topics included:

  • Product history - where we’ve come from, so the whole group has an understanding of who we are and why we’ve made the things we’ve made.
  • Vision and business goals - (from the product owner or CEO) a look ahead not just of the tools we provide but where we want the business to go in the future.
  • User feedback - what users have been saying so far about the idea we’ve chosen for our sprint. This information is collected by our User Research and Customer Success teams.
  • Technical review - an overview of our tech and anything we should be aware of (or a look at possible available tech). This is a good chance to get an engineering lead in to share technical opportunities.
  • Comparative research - what else is out there, how have other teams or products addressed this problem space?

Empathy exercise

I asked the sprinters to participate in an exercise so that we could gain empathy for the people using our tools. The task was to pretend we were one of our customers presenting a dendrogram to team members who aren’t involved in product development or user research. In this frame of mind, we talked through how we might start to draw conclusions from the data and present them to stakeholders. We all gained more empathy for what it’s like to be a researcher trying to use the graphs in our tools to gain insights.

How Might We

In the beginning, it’s important to be open to all ideas. One way we did this was to phrase questions in the format “How might we…”. At this stage (day two) we weren’t trying to come up with solutions — we were trying to work out what problems there were to solve. ‘We’ is a reminder that this is a team effort, and ‘might’ reminds us that it’s just one suggestion that may or may not work (and that’s OK). These questions then get voted on and moved into a workshop for generating ideas (see Crazy 8s). Read more detailed instructions on how to run a ‘How might we’ session on the Design Sprint Kit website.

Crazy 8s

This activity is a super quick-fire idea generation technique. The gist of it is that each person gets a piece of paper folded into eight sections and has eight minutes to come up with eight ideas (really rough sketches). When time is up, it’s pens down and the rest of the team reviews each other’s ideas. In our sprint, we gave each person Post-it notes and paper, and set the timer for eight minutes. At the end of the activity, we put all the sketches on a wall (this is where the art gallery exercise comes in).

Mila, our data scientist, sketching intensely during Crazy 8s

A close-up of some sketches from the team

Art gallery/Silent critique

The art gallery is the place where all the sketches go. We give everyone dot stickers so they can vote and pull out key ideas from each sketch. This is done silently, as the ideas should be understood without needing explanation from the person who made them. At the end of it you’ve got a kind of heat map, and you can see the ideas that stand out the most. After this first round of voting, the authors of the sketches get to talk through their ideas, then another round of voting begins.

Mila putting sticky dots on some sketches

Bowie, our head of security/office dog, even took part in the sprint... kind of.

Usability testing and validation

The key part of a design sprint is validation. For one of our sprints we had two parts of our concept that needed validating. To test one part we conducted simple user tests with other members of Optimal Workshop (the feature was an internal tool). For the second part we needed to validate whether we had the data to continue with this project, so we had our data scientist run some numbers and predictions for us.

Our remote worker Rebecca dialed in to watch one of our user tests live
"I'm pretty bloody happy" — Actual feedback.
 Actual feedback

Challenges and outcomes

One of our key team members, Rebecca, was working remotely during the sprint. To make things easier for her, we set up two cameras: one pointed at the whiteboard, the other focused on the rest of the sprint team sitting at the table. Next to that, we set up a monitor so we could see Rebecca.

Engaging in workshop activities is a lot harder when working remotely. Rebecca got around this by completing the activities on her end and taking photos to send to us.

For more information, read this great Medium post about running design sprints remotely

Lessons

  • Lightning talks are a great way to have each person contribute up front and feel invested in the process.
  • Sprints are energy intensive. Make sure you’re in a good space with plenty of fresh air, comfortable chairs and a breakout area. We like to split the five days up so that we get a weekend break.
  • Give people plenty of notice to clear their schedules. Asking busy people to take five days from their schedule might not go down too well. Make sure they know why you’d like them there and what they should expect from the week. Send them an outline of the agenda. Ideally, have a chat in person and get them excited to be part of it.
  • Invite the right people. It’s important that you get the right kind of people from different parts of the company involved in your sprint. The role they play in day-to-day work doesn’t matter too much for this. We’re all mainly using pens and paper and the more types of brains in the room the better. Looking back, what we really needed on our team was a customer support team member. They have the experience and knowledge about our customers that we don’t have.
  • Choose the right sprint problem. The project we chose for our first sprint wasn’t really suited to a design sprint. We went in with a well-defined problem and a suggested solution from the team, instead of a project that needed fresh ideas. This made activities like ‘How Might We’ feel redundant. The challenge we decided to tackle ended up being more of a data prototype (spreadsheets!). We used the week to validate assumptions around how we can better use data and how we can write a script to automate some internal processes. We got the prototype working and tested, but due to the nature of the project we’ll have to run this experiment in the background for a few months before any building happens.

Overall, this design sprint was a great team bonding experience and we felt pleased with what we achieved in such a short amount of time. Naturally, here at Optimal Workshop, we're experimenters at heart and we will keep exploring new ways to work across teams and find a good middle ground.
