July 10, 2018

My journey running a design sprint

Recently, everyone in the design industry has been talking about design sprints. So, naturally, the team at Optimal Workshop wanted to see what all the fuss was about. I picked up a copy of The Sprint Book and suggested to the team that we try out the technique.

To keep momentum, we identified a current problem and decided to run the sprint just two weeks later. The short notice was a bit of a challenge, but in the end we made it work. Here’s a rundown of how things went, what worked, what didn’t, and the lessons we learned.

A sprint is an intensive, focused period of time for getting a product or feature designed and tested, with the goal of knowing whether the team should keep investing in the development of the idea. By the end of the sprint, the idea should be either validated or invalidated. This saves time and resources further down the track, because the team can pivot early if the idea doesn’t float.

If you’re following The Sprint Book, you might have a structured five-day plan that looks like this:

  • Day 1 - Understand: Discover the business opportunity, the audience, the competition, the value proposition and define metrics of success.
  • Day 2 - Diverge: Explore, develop and iterate creative ways of solving the problem, regardless of feasibility.
  • Day 3 - Converge: Identify ideas that fit the next product cycle and explore them in further detail through storyboarding.
  • Day 4 - Prototype: Design and prepare prototype(s) that can be tested with people.
  • Day 5 - Test: User testing with the product's primary target audience.
With a design sprint, a product doesn’t need to go full cycle for the team to learn about the opportunities and gather feedback.

When you’re running a design sprint, it’s important to have the right people in the room. It’s all about focus and working fast; you need the right people around you to do this without hitting blockers along the way. Team, stakeholder and expert buy-in is key: this is not a task just for a design team!

After getting buy-in and picking out the people who should be involved (developers, designers, product owner, customer success rep, marketing rep, user researcher), these were my next steps:

Pre-sprint

  1. Read the book
  2. Panic
  3. Send out invites
  4. Write the agenda
  5. Book a meeting room
  6. Organize food and coffee
  7. Get supplies (Post-its, paper, Sharpies, laptops, chargers, cameras)

Some fresh smoothies for the sprinters made by our juice technician

The sprint

Due to scheduling issues we had to split the sprint across the end of the week and the weekend. Sprint guidelines suggest holding it from Monday to Friday, which is a nice block of time, but we had to run Thursday to Thursday with the weekend off in between. That actually worked really well. We’re all self-confessed introverts and, to be honest, the thought of spending five solid days workshopping was daunting. About two days in we were exhausted, so we went away for the weekend and came back on Monday feeling sociable and recharged, ready to examine the work we’d done in the first two days with fresh eyes.

Design sprint activities

During our sprint we completed a range of different activities; here’s a list of some that worked well for us. You can find out more about how to run most of these over at The Sprint Book website, or check out some great resources at Design Sprint Kit.

Lightning talks

We kicked off our sprint by having each person give a quick five-minute talk on one of the topics in the list below. This gave us all an overview of the whole project, and since we each had to present, each of us became the expert in our area and engaged with the topic (rather than just listening to one person deliver all the information).

Our lightning talk topics included:

  • Product history - where we’ve come from, so the whole group understands who we are and why we’ve made the things we’ve made.
  • Vision and business goals - (from the product owner or CEO) a look ahead not just of the tools we provide but where we want the business to go in the future.
  • User feedback - what have users been saying so far about the idea we’ve chosen for our sprint. This information is collected by our User Research and Customer Success teams.
  • Technical review - an overview of our tech and anything we should be aware of (or a look at possible available tech). This is a good chance to get an engineering lead in to share technical opportunities.
  • Comparative research - what else is out there, how have other teams or products addressed this problem space?

Empathy exercise

I asked the sprinters to participate in an exercise so that we could gain empathy for the people using our tools. The task was to pretend we were one of our customers presenting a dendrogram to team members who aren’t involved in product development or user research. In this frame of mind, we had to talk through how we might begin to draw conclusions from the data in front of the stakeholders. We all came away with more empathy for what it’s like to be a researcher trying to use the graphs in our tools to gain insights.

How Might We

In the beginning, it’s important to be open to all ideas. One way we did this was to phrase questions in the format: “How might we…” At this stage (day two) we weren’t trying to come up with solutions; we were trying to work out what problems there were to solve. ‘We’ is a reminder that this is a team effort, and ‘might’ reminds us that it’s just one suggestion that may or may not work (and that’s OK). These questions then get voted on and moved into a workshop for generating ideas (see Crazy 8s). You can read more detailed instructions on how to run a ‘How might we’ session on the Design Sprint Kit website.

Crazy 8s

This activity is a super quick-fire idea-generation technique. The gist of it: each person gets a piece of paper folded into eight sections and has eight minutes to come up with eight ideas (really rough sketches). When time is up, it’s pens down, and the team reviews each other’s ideas. In our sprint, we gave each person Post-it notes and paper, and set the timer for eight minutes. At the end of the activity, we put all the sketches on a wall (this is where the art gallery exercise comes in).

Mila, our data scientist, sketching intensely during Crazy 8s

A close-up of some sketches from the team

Art gallery/Silent critique

The art gallery is the place where all the sketches go. We give everyone dot stickers so they can vote and pull out key ideas from each sketch. This is done silently, as the ideas should be understood without needing explanation from the person who made them. At the end of it you’ve got a kind of heat map, and you can see the ideas that stand out the most. After this first round of voting, the authors of the sketches get to talk through their ideas, then another round of voting begins.

Mila putting some sticky dots on some sketches

Bowie, our head of security/office dog, even took part in the sprint...kind of.

Usability testing and validation

The key part of a design sprint is validation. Our concept had two parts that needed validating. To test the first part, we conducted simple user tests with other members of Optimal Workshop (the feature was an internal tool). For the second part, we needed to validate whether we had the data to continue with the project, so we had our data scientist run some numbers and predictions for us.

Our remote worker Rebecca dialed in to watch one of our user tests live
"I'm pretty bloody happy" — Actual feedback.
 Actual feedback

Challenges and outcomes

One of our key team members, Rebecca, was working remotely during the sprint. To make things easier for her, we set up two cameras: one pointed at the whiteboard, the other focused on the rest of the sprint team sitting at the table. Next to those, we set up a monitor so we could see Rebecca.

Engaging in workshop activities is a lot harder when you’re working remotely. Rebecca got around this by completing the activities at her end and sending us photos of the results.

For more information, read this great Medium post about running design sprints remotely.

Lessons

  • Lightning talks are a great way to have each person contribute up front and feel invested in the process.
  • Sprints are energy-intensive. Make sure you’re in a good space with plenty of fresh air, comfortable chairs, and a breakout area. We like to split the five days up so that we get a weekend break.
  • Give people plenty of notice to clear their schedules. Asking busy people to take five days from their schedule might not go down too well. Make sure they know why you’d like them there and what they should expect from the week. Send them an outline of the agenda. Ideally, have a chat in person and get them excited to be part of it.
  • Invite the right people. It’s important that you get the right kind of people from different parts of the company involved in your sprint. The role they play in day-to-day work doesn’t matter too much here; we’re all mainly using pens and paper, and the more types of brains in the room, the better. Looking back, what we really needed on our team was a customer support team member: they have experience and knowledge about our customers that the rest of us don’t.
  • Choose the right sprint problem. The project we chose for our first sprint wasn’t really suited to a design sprint. We went in with a well-defined problem and a suggested solution from the team, rather than a project that needed fresh ideas. This made activities like ‘How Might We’ feel redundant. The challenge we decided to tackle ended up being more of a data prototype (spreadsheets!). We used the week to validate assumptions about how we could better use data, and to write a script to automate some internal processes. We got the prototype working and tested, but due to the nature of the project we’ll have to run the experiment in the background for a few months before any building happens.

Overall, this design sprint was a great team bonding experience and we felt pleased with what we achieved in such a short amount of time. Naturally, here at Optimal Workshop, we're experimenters at heart and we will keep exploring new ways to work across teams and find a good middle ground.

Related articles


Insights & AI Beta

As part of our beta release, you'll gain access to the newest enhancements to our Qualitative Insights tool (previously known as Reframer).

  1. Insights: A dedicated space to create, organize, and communicate your key takeaways. Create Insights on your own or with AI.
  2. AI capabilities: Optional AI-powered assistance to create Insights from your observations to accelerate your analysis process.

Our new Insights and AI functionality streamlines your qualitative analysis process, allowing you to quickly summarize, create, and organize key takeaways from your data.

Help documentation

  1. How AI Insights clustering works in Optimal
  2. Create an Insight with and without AI
  3. How AI-generated Insights work in Optimal
  4. Our AI guiding principles
  5. How to set your preferences for AI
  6. Analyzing & sharing your Insights
  7. Learn more about Qualitative Insights

Notes on AI Privacy & Security


Optimal uses Amazon Bedrock, AWS's fully managed service that makes large language models (LLMs) from Amazon and leading AI startups available through an API, to power AI generation for Qualitative Insights.
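
Optimal hasn't published its implementation details, but to make the setup concrete, here is a minimal, hypothetical sketch of what calling a Bedrock-hosted model through the API looks like with Python's boto3 client. The model ID, region, and prompt below are illustrative assumptions, not Optimal's actual configuration:

```python
import json
import boto3

# Hypothetical example of invoking an LLM hosted on Amazon Bedrock.
# The model ID, region, and prompt are illustrative only.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{
        "role": "user",
        "content": "Group these research observations into themed insights: ...",
    }],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])  # the model's generated text
```

Because Bedrock is a fully managed AWS service, requests like this stay within AWS infrastructure rather than going to a third-party endpoint, which is what underpins the privacy points below.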


Amazon Bedrock meets industry-leading standards for compliance, including ISO, SOC, and CSA STAR Level 2 certifications, and is GDPR-compliant and HIPAA-eligible. Learn more about Amazon Bedrock.

We take your privacy seriously. When you use AI Insights: 

  1. Your data stays within your organization
  2. We don't use it to train other AI models
  3. You control when to use AI for insights
  4. AI features can be turned on or off anytime

Questions & feedback

If you have any questions, please reach out to our support team via live chat.

We appreciate any feedback you have on improving your experience and invite you to share your thoughts through this feedback form at any time. Our product team will also be in touch in mid-to-late October to gather further insights about your experience.


AI Innovation + Human Validation: Why It Matters

AI creates beautiful designs, but only humans can validate if they work

Let's talk about something that's fundamentally reshaping product development: AI-generated designs. It's not just a trendy tool; it's a complete transformation of the design workflow as we know it.

Today's AI design tools aren't just creating mockups; they're generating entire design systems, producing variations at scale, and predicting user preferences before you've even finished your prompt. Instead of spending hours on iterations, designers are exploring dozens of directions in minutes.

This is where platforms like Lovable shine with their vibe-coding approach, generating design directions based on emotional and aesthetic inputs rather than just functional requirements. While this AI-powered innovation is impressive, it raises a critical question for everyone creating digital products: how do we ensure these AI-generated designs actually resonate with real people?

The Gap Between AI Efficiency and Human Connection

The design process has fundamentally shifted. Instead of building from scratch, designers are prompting and curating. Rather than crafting each pixel, they're directing AI to explore design spaces.

The whole interaction feels more experimental. Designers are using natural language to describe desired outcomes, and the AI responses feel like collaborative explorations rather than final deliverables.

This shift has major implications for product teams:

  • If you're a product manager, you need to balance AI efficiency with proven user validation methods to ensure designs solve actual user problems.
  • UX designers, you're now curating and refining AI outputs. When AI generates interfaces, will real users understand how to use them?
  • Visual designers, your expertise is evolving. You need to develop prompting skills while maintaining your critical eye for what actually works.
  • And UX researchers, there's an urgent need to validate AI-generated designs with real human feedback before implementation.

The Future of Design: AI Innovation + Human Validation

As AI design tools become more powerful, the teams that thrive will be those who balance technological innovation with human understanding. The winning approach isn't AI alone or human-only design; it's the thoughtful integration of both.

Why Human Validation Is Essential for AI-Generated Designs

AI is revolutionizing design creation, but it has inherent limitations that only human validation can address:

  • AI Lacks Contextual Understanding: While AI can generate visually impressive designs, it doesn't truly understand cultural nuances, emotional responses, or the lived experiences of your users. Only human feedback can verify whether an AI-generated interface feels intuitive rather than just looking good.
  • The "Uncanny Valley" of AI Design: AI-generated designs sometimes create an "almost right but slightly off" feeling, technically correct but missing the human touch. Real user testing helps identify these subtle disconnects that might otherwise go unnoticed by design teams.
  • AI Reinforces Patterns, Not Breakthroughs: AI models are trained on existing design patterns, meaning they excel at iteration but struggle with true innovation. Human validation helps identify when AI-generated designs feel derivative versus when they create genuine emotional connections with users.
  • Diverse User Needs Require Human Insight: AI may not account for accessibility considerations, cultural sensitivities, or edge cases without explicit prompting. Human validation ensures designs work for your entire audience, not just the statistical average.

The Multiplier Effect: Why AI + Human Validation Outperforms Either Approach Alone

The combination of AI-powered design and human validation creates a virtuous cycle that elevates both:

  • From Rapid Iteration to Deeper Insights: AI allows teams to test more design variations than ever before, gathering richer comparative data through human testing. This breadth of exploration was previously impossible with human-only design processes.
  • Continuous Learning Loop: Human validation of AI designs creates feedback that improves future AI prompts. Over time, this creates a compounding advantage where AI tools become increasingly aligned with real user preferences.
  • Scale + Depth: AI provides the scale to generate numerous options, while human validation provides the depth of understanding required to select the right ones. This combination addresses both the breadth and depth dimensions of effective design.

At Optimal, we're committed to helping you navigate this new landscape by providing the tools you need to ensure AI-generated designs truly resonate with the humans who will use them. Our human validation platform is the essential complement to AI's creative potential, turning promising designs into proven experiences.

Introducing the Optimal + Lovable Integration: Bridging AI Innovation with Human Validation

At Optimal, we've always believed in the power of human feedback to create truly effective designs. Now, with our new Lovable integration, we're making it easier than ever to validate AI-generated designs with real users.

Here's how our integrated approach works:

1. Generate Innovative Designs with Lovable

Lovable allows you to:

  • Explore emotional dimensions of design through AI prompting
  • Generate multiple design variations in minutes
  • Create interfaces that feel aligned with your brand's emotional targets

2. Validate Those Designs with Optimal

Interactive Prototype Testing: Our integration lets you import Lovable designs directly as interactive prototypes, allowing users to click, navigate, and experience your AI-generated interfaces in a realistic environment. This reveals critical insights about how users naturally interact with your design.

Ready to Transform Your Design Process?

Try our Optimal + Lovable integration today and experience the power of combining AI innovation with human validation. Your first study is on us! See firsthand how real user feedback can elevate your AI-generated designs from interesting to truly effective.

Try the Optimal + Lovable Integration today


When Personalization Gets Personal: Balancing AI with Human-Centered Design

AI-driven personalization is redefining digital experiences, allowing companies to tailor content, recommendations, and interfaces to individual users at an unprecedented scale. From e-commerce product suggestions to content feeds, streaming recommendations, and even customized user interfaces, personalization has become a cornerstone of modern digital strategy. The appeal is clear: research shows that effective personalization can increase engagement by 72%, boost conversion rates by up to 30%, and drive revenue growth of 10-15%.

However, the reality often falls short of these impressive statistics. Personalization can easily backfire, frustrating users instead of engaging them, creating experiences that feel invasive rather than helpful, and sometimes actively driving users away from the very content or products they might genuinely enjoy. Many organizations invest heavily in AI technology while underinvesting in understanding how these personalized experiences actually impact their users.

The Widening Gap Between Capability and Quality

The technical capability to personalize digital experiences has advanced rapidly, but the quality of these experiences hasn't always kept pace. According to a 2023 survey by Baymard Institute, 68% of users reported encountering personalization that felt "off-putting" or "frustrating" in the previous month, while only 34% could recall a personalized experience that genuinely improved their interaction with a digital product.

This disconnect stems from a fundamental misalignment: while AI excels at pattern recognition and prediction based on historical data, it often lacks the contextual understanding and nuance that make personalization truly valuable. The result? Technically sophisticated personalization regularly misses the mark on actual user needs and preferences.

The Pitfalls of AI-Driven Personalization

Many companies struggle with personalization due to several common pitfalls that undermine even the most sophisticated AI implementations:

Over-Personalization: When Helpful Becomes Restrictive

AI that assumes too much can make users feel restricted or trapped in a "filter bubble" of limited options. This phenomenon, often called "over-personalization," occurs when algorithms become too confident in their understanding of user preferences.

Signs of over-personalization include:

  • Content feeds that become increasingly homogeneous over time
  • Disappearing options that might interest users but don't match their history
  • User frustration at being unable to discover new content or products
  • Decreased engagement as experiences become predictable and stale

A study by researchers at the University of Minnesota found that highly personalized news feeds led to a 23% reduction in content diversity over time, even when users actively sought varied content. This "filter bubble" effect not only limits discovery but can leave users feeling manipulated or constrained.

Incorrect Assumptions: When Data Tells the Wrong Story

AI recommendations based on incomplete or misinterpreted data can lead to irrelevant, inappropriate, or even offensive suggestions. These incorrect assumptions often stem from:

  • Limited data points that don't capture the full context of user behavior
  • Misinterpreting casual interest as strong preference
  • Failing to distinguish between the user's behavior and actions taken on behalf of others
  • Not recognizing temporary or situational needs versus ongoing preferences

These misinterpretations can range from merely annoying (continuously recommending products similar to a one-time purchase) to deeply problematic (showing weight loss ads to users with eating disorders based on their browsing history).

A particularly striking example occurred when a major retailer's algorithm began sending pregnancy-related offers to a teenage girl before her family knew she was pregnant. While technically accurate in its prediction, this incident highlights how even "correct" personalization can fail to consider the broader human context and implications.

Lack of Transparency: The Black Box Problem

Users increasingly want to understand why they're being shown specific content or recommendations. When personalization happens behind a "black box" without explanation, it can create:

  • Distrust in the system and the brand behind it
  • Confusion about how to influence or improve recommendations
  • Feelings of being manipulated rather than assisted
  • Concerns about what personal data is being used and how

Research from the Pew Research Center shows that 74% of users consider it important to know why they are seeing certain recommendations, yet only 22% of personalization systems provide clear explanations for their suggestions.

Inconsistent Experiences Across Channels

Many organizations struggle to maintain consistent personalization across different touchpoints, creating disjointed experiences:

  • Product recommendations that vary wildly between web and mobile
  • Personalization that doesn't account for previous customer service interactions
  • Different personalization strategies across email, website, and app experiences
  • Recommendations that don't adapt to the user's current context or device

This inconsistency can make personalization feel random or arbitrary rather than thoughtfully tailored to the user's needs.

Neglecting Privacy Concerns and Control

As personalization becomes more sophisticated, user concerns about privacy intensify. Key issues include:

  • Collecting more data than necessary for effective personalization
  • Lack of user control over what information influences their experience
  • Unclear opt-out mechanisms for personalization features
  • Personalization that reveals sensitive information to others

A recent study found that 79% of users want control over what personal data influences their recommendations, but only 31% felt they had adequate control in their most-used digital products.

How Product Managers Can Leverage UX Insight for Better AI Personalization

To create a personalized experience that feels natural and helpful rather than creepy or restrictive, UX teams need to validate AI-driven decisions through systematic research with real users. Rather than treating personalization as a purely technical challenge, successful organizations recognize it as a human-centered design problem that requires continuous testing and refinement.

Understanding User Mental Models Through Card Sorting & Tree Testing

Card sorting and tree testing help structure content in a way that aligns with users' expectations and mental models, creating a foundation for personalization that feels intuitive rather than imposed:

  • Open and Closed Card Sorting – Helps understand how different user segments naturally categorize content, products, or features, providing a baseline for personalization strategies
  • Tree Testing – Validates whether personalized navigation structures work for different user types and contexts
  • Hybrid Approaches – Combining card sorting with interviews to understand not just how users categorize items, but why they do so

Case Study: A financial services company used card sorting with different customer segments to discover distinct mental models for organizing financial products. Rather than creating a one-size-fits-all personalization system, they developed segment-specific personalization frameworks that aligned with these different mental models, resulting in a 28% increase in product discovery and application rates.

Validating Interaction Patterns Through First-Click Testing

First-click testing ensures users interact with personalized experiences as intended across different contexts and scenarios:

  • Testing how users respond to personalized elements vs. standard content
  • Evaluating whether personalization cues (like "Recommended for you") influence click behavior
  • Comparing how different user segments respond to the same personalization approaches
  • Identifying potential confusion points in personalized interfaces

Research by the Nielsen Norman Group found that users whose first click is correct go on to complete tasks successfully about 87% of the time. For personalized experiences, this is even more critical, as users may abandon a site entirely if early personalized recommendations seem irrelevant or confusing.

Gathering Qualitative Insights Through User Interviews & Usability Testing

Direct observation and conversation with users provides critical context for personalization strategies:

  • Moderated Usability Testing – Reveals how users react to personalized elements in real-time
  • Think-Aloud Protocols – Help understand users' expectations and reactions to personalization
  • Longitudinal Studies – Track how perceptions of personalization change over time and repeated use
  • Contextual Inquiry – Observes how personalization fits into users' broader goals and environments

These qualitative approaches help answer critical questions like:

  • When does personalization feel helpful versus intrusive?
  • What level of explanation do users want for recommendations?
  • How do different user segments react to similar personalization strategies?
  • What control do users expect over their personalized experience?

Measuring Sentiment Through Surveys & User Feedback

Systematic feedback collection helps gauge users' comfort levels with AI-driven recommendations:

  • Targeted Microsurveys – Quick pulse checks after personalized interactions
  • Preference Centers – Direct input mechanisms for refining personalization
  • Satisfaction Tracking – Monitoring how personalization affects overall satisfaction metrics
  • Feature-Specific Feedback – Gathering input on specific personalization features

A streaming service discovered through targeted surveys that users were significantly more satisfied with content recommendations when they could see a clear explanation of why items were suggested (e.g., "Because you watched X"). Implementing these explanations increased content exploration by 34% and reduced account cancellations by 8%.

A/B Testing Personalization Approaches

Experimental validation ensures personalization actually improves key metrics:

  • Testing different levels of personalization intensity
  • Comparing explicit versus implicit personalization methods
  • Evaluating various approaches to explaining recommendations
  • Measuring the impact of personalization on both short and long-term engagement

Importantly, A/B testing should look beyond immediate conversion metrics to consider longer-term impacts on user satisfaction, trust, and retention.
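
To make the experimental side concrete, here is a minimal sketch (not tied to any particular analytics stack) of the kind of statistical check behind such a comparison: a two-sided, two-proportion z-test on conversion counts from a control and a personalized variant. The counts are invented for illustration.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)             # rate under H0: no difference
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                     # two-sided p-value
    return z, p_value

# Invented example: control converts 230/5000, personalized variant 285/5000
z, p = two_proportion_z_test(230, 5000, 285, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p suggests a real lift, not noise
```

A significant lift on clicks alone doesn't settle the question, though; as noted above, the same comparison should be run against retention and satisfaction metrics over a longer window.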

Building a User-Centered Personalization Strategy That Works

To implement personalization that truly enhances user experience, organizations should follow these research-backed principles:

1. Start with User Needs, Not Technical Capabilities

The most effective personalization addresses genuine user needs rather than showcasing algorithmic sophistication:

  • Identify specific pain points that personalization could solve
  • Understand which aspects of your product would benefit most from personalization
  • Determine where users already expect or desire personalized experiences
  • Recognize which elements should remain consistent for all users

2. Implement Transparent Personalization

Users increasingly expect to understand and control how their experiences are personalized:

  • Clearly communicate what aspects of the experience are personalized
  • Explain the primary factors influencing recommendations
  • Provide simple mechanisms for users to adjust or reset their personalization
  • Consider making personalization opt-in for sensitive domains

3. Design for Serendipity and Discovery

Effective personalization balances predictability with discovery:

  • Deliberately introduce variety into recommendations
  • Include "exploration" categories alongside highly targeted suggestions
  • Monitor and prevent increasing homogeneity in personalized feeds over time (one way to measure this is sketched after this list)
  • Allow users to easily branch out beyond their established patterns
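
As a deliberately simple illustration of monitoring homogeneity, one option is to track the Shannon entropy of the categories shown in each user's feed over time; falling entropy is a warning sign of a filter bubble. The category labels below are invented for the example.

```python
from collections import Counter
from math import log2

def category_entropy(feed):
    """Shannon entropy (in bits) of the category labels in one feed snapshot.
    A steady decline across snapshots signals a narrowing, homogeneous feed."""
    counts = Counter(feed)
    total = len(feed)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Invented weekly snapshots of the categories one user was shown
week_1 = ["tech", "sports", "cooking", "travel", "tech", "music"]
week_8 = ["tech", "tech", "tech", "gadgets", "tech", "tech"]
print(f"{category_entropy(week_1):.2f} bits")  # ~2.25: varied feed
print(f"{category_entropy(week_8):.2f} bits")  # ~0.65: homogeneous feed
```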

4. Apply Progressive Personalization

Rather than immediately implementing highly tailored experiences, consider a gradual approach (sketched in code after this list):

  • Begin with light personalization based on explicit user choices
  • Gradually introduce more sophisticated personalization as users engage
  • Calibrate personalization depth based on relationship strength and context
  • Adjust personalization based on user feedback and behavior
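
Sketched in code, a progressive scheme might gate personalization depth on consent and relationship strength. The tiers and thresholds below are invented purely to illustrate the shape of the idea, not a recommended configuration.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    opted_in: bool        # explicit consent for personalization
    sessions: int         # crude proxy for relationship strength
    gave_feedback: bool   # user has rated or adjusted recommendations

def personalization_level(user: UserContext) -> str:
    """Return how deeply to personalize for this user (illustrative tiers)."""
    if not user.opted_in:
        return "none"      # consistent default experience for everyone
    if user.sessions < 5:
        return "light"     # rank only by the user's explicit choices
    if user.sessions < 30 or not user.gave_feedback:
        return "moderate"  # blend explicit choices with behavioral signals
    return "full"          # behavioral models, still user-adjustable

# A new, opted-in user gets only light personalization
print(personalization_level(UserContext(opted_in=True, sessions=3, gave_feedback=False)))
```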

5. Establish Continuous Feedback Loops

Personalization should never be "set and forget":

  • Implement regular evaluation cycles for personalization effectiveness
  • Create easy feedback mechanisms for users to rate recommendations
  • Monitor for signs of over-personalization or filter bubbles
  • Regularly test personalization assumptions with diverse user groups

The Future of Personalization: Human-Centered AI

As AI capabilities continue to advance, the companies that will succeed with personalization won't necessarily be those with the most sophisticated algorithms, but those who best integrate human understanding into their approach. The future of personalization lies in creating systems that:

  • Learn from qualitative human feedback, not just behavioral data
  • Respect the nuance and complexity of human preferences
  • Maintain transparency in how personalization works
  • Empower users with appropriate control
  • Balance algorithm-driven efficiency with human-centered design principles

AI should learn from real people, not just data. UX research ensures that personalization enhances, rather than alienates, users by bringing human insight to algorithmic decisions.

By combining the pattern-recognition power of AI with the contextual understanding provided by UX research, organizations can create personalized experiences that feel less like surveillance and more like genuine understanding: experiences that don't just predict what users might click, but truly respond to what they need and value.
