
Selling your design recommendations to clients and colleagues

If you’ve ever presented design findings or recommendations to clients or colleagues, then perhaps you’ve heard them say:

  • “We don’t have the budget or resources for those improvements.”
  • “The new executive project has higher priority.”
  • “Let’s postpone that to Phase 2.”

As an information architect, I’ve presented recommendations many times. And I’ve crashed and burned more than once by doing a poor job of selling some promising ideas. Here are some things I’ve learned from getting it wrong.

Buyers prefer sellers they like and trust

You need to establish trust with peers, developers, executives and so on before you present your findings and recommendations. It sounds obvious, yet presentations often fail due to unfamiliarity, sloppiness or designer arrogance.

A year ago I ran an IA test on a large company website. The project schedule was typically “aggressive” and the client’s VPs were endlessly busy. So I launched the test without their feedback. Saved time, right? Wrong. The client ignored all my IA recommendations, and their VPs ultimately rewrote my site map from scratch.

I could have argued that they didn’t understand user-centered design. The truth is that I failed to establish credibility. I needed them to buy into the testing process, suggest test questions beforehand, or take the test as a control group. Anything that engaged them would have helped – turning stakeholders into collaborators is a great way to establish trust.

Techniques for presenting UX recommendations

Many presentation tactics can be borrowed from salespeople, but a single blog post can’t do justice to the entire sales profession. So I’d like to offer just a few ideas to consider. No Jedi mind tricks, though. Sincerity matters.

Emphasize product benefits, not product features

Beer commercials on TV don’t sell beer. They sell backyard parties and voluptuous strangers. Likewise, UX recommendations should emphasize product benefits rather than feature sets. This may be common marketing strategy, but the benefits must resonate with stakeholders, not just test participants. Stakeholders often don’t care about Joe End User. They care about ROI, a more flexible platform, a faster way to publish content – whatever metrics determine their job performance.

Several years ago, I researched call center data at a large corporation. To analyze the data, I eventually built a web dashboard that illustrated different types of customer calls by product. When I showed it to my co-workers, I presented the features and even the benefits of tracking usability issues this way.

However, I hadn’t researched the specific benefits to my fellow designers, which made the idea much, much harder to sell. I should have investigated how a dashboard would fit into their daily routines. I had neglected the question they silently asked: “What’s in it for me?”

Have a go at contrast selling

When selling your recommendations, consider presenting your dream plan first. If your stakeholders balk, introduce the practical solution next. The contrast in price will make the modest recommendation more palatable.

While working on an e-commerce UI, I once ran a usability test on a checkout flow. The test clearly suggested improvements to the payment page. Hoping to slip the work into an upcoming sprint, I asked my boss if we could make a few crucial fixes. They wouldn’t take much time. He said... no. In essence, my boss was comparing extra work to doing nothing. My mistake was compromising the proposal before even presenting it. I should have requested an entire package first: a full redesign of the shopping cart experience across all web properties. Then the comparison would have been a huge effort vs. a small effort.

Retailers take this approach every day. Car dealerships anchor buyers to lofty sticker prices, then offer cash back. Retailers like Amazon display strikethrough prices for similar effect. This works whenever buyers prefer justifying a purchase based on savings, not price.

Use the alternative choice close

Alternative Choice is a closing technique in which a buyer selects from two options. Cleverly, each answer implies a sale. Here are examples adapted for UX recommendations:

  • “Which website could we implement these changes on first, X or Y?”
  • “Which developer has more time available in the next sprint, Tom or Harry?”

This is better than simply asking, “Can we start on Website X?” or “Do we have any developers available?” Avoid any proposition that can be rejected with a direct “No.”

Convince with the embarrassment close

Buying decisions are emotional. When presenting recommendations to stakeholders, try appealing to their pride (remember, you’re not actually trying to embarrass someone). Again, sincerity is important. Some UX examples include:

  • “To be an industry leader, we need a best-of-breed design like Acme Co.”
  • “I know that you want your company to be the best. That’s why we’re recommending a full set of improvements instead of a quick fix.”

Techniques for answering objections once you’ve presented

Once you’ve done your best to present your design recommendations, you may still encounter resistance (surprise!). To keep it simple, I’ve classified objections using the three points of the project management triangle: Time, Price and Quality. Any project can deliver on only two. When presenting design research, you’re advocating Quality, i.e. design usability or enhancements. Pushback on Quality generally means that people disagree with your designs (a topic for another day).

Therefore, objections will likely be based on Time or Price instead. In a perfect world, all design recommendations yield ROI backed by quantitative data. But many don’t. When selling the intangibles of “user experience” or “usability” improvements, here are some responses to consider when you hear “We don’t have time” or “We can’t afford it.”

“We don’t have time” means your project team values Time over Quality

If possible, ask people to consider future repercussions. If your proposal isn’t implemented now, it may require even more time and money later. Product lines and features expand, and new websites and mobile apps get built. What will your design improvements cost across the board in 6 months? Opportunity costs also matter. If your design recommendations are postponed, then perhaps you’ll miss the holiday shopping season, or the launch of your latest software release. What is the cost of not approving your recommendations?

“We can’t afford it” means your project team values Price over Quality

Many project sponsors nix user testing to reduce the design price tag. But there’s always a long-term cost. A buggy product generates customer complaints. The flawed design must then be tested, redesigned, and recoded. So, which is cheaper: paying for a single usability test now, or the aggregate cost of user dissatisfaction and future rework? Explain the difference between price and cost to your team.

Parting Thoughts

I realize that this only scratches the surface of sales, negotiation, persuasion and influence. Entire books have been written on topics like body language alone. Two uncommon additions to a UX library might be “Influence: The Psychology of Persuasion” by Robert Cialdini and “Secrets of Closing the Sale” by Zig Ziglar. Feel free to share your own ideas or references as well.

Any time we present user research, we’re selling. Stakeholder mental models are just as relevant as user mental models.


When Personalization Gets Personal: Balancing AI with Human-Centered Design

AI-driven personalization is redefining digital experiences, allowing companies to tailor content, recommendations, and interfaces to individual users at an unprecedented scale. From e-commerce product suggestions to content feeds, streaming recommendations, and even customized user interfaces, personalization has become a cornerstone of modern digital strategy. The appeal is clear: research shows that effective personalization can increase engagement by 72%, boost conversion rates by up to 30%, and drive revenue growth of 10-15%.

However, the reality often falls short of these impressive statistics. Personalization can easily backfire, frustrating users instead of engaging them, creating experiences that feel invasive rather than helpful, and sometimes actively driving users away from the very content or products they might genuinely enjoy. Many organizations invest heavily in AI technology while underinvesting in understanding how these personalized experiences actually impact their users.

The Widening Gap Between Capability and Quality

The technical capability to personalize digital experiences has advanced rapidly, but the quality of these experiences hasn't always kept pace. According to a 2023 survey by Baymard Institute, 68% of users reported encountering personalization that felt "off-putting" or "frustrating" in the previous month, while only 34% could recall a personalized experience that genuinely improved their interaction with a digital product.

This disconnect stems from a fundamental misalignment: while AI excels at pattern recognition and prediction based on historical data, it often lacks the contextual understanding and nuance that make personalization truly valuable. The result? Technically sophisticated personalization regularly misses the mark on actual user needs and preferences.

The Pitfalls of AI-Driven Personalization

Many companies struggle with personalization due to several common pitfalls that undermine even the most sophisticated AI implementations:

Over-Personalization: When Helpful Becomes Restrictive

AI that assumes too much can make users feel restricted or trapped in a "filter bubble" of limited options. This phenomenon, often called "over-personalization," occurs when algorithms become too confident in their understanding of user preferences.

Signs of over-personalization include:

  • Content feeds that become increasingly homogeneous over time
  • Disappearing options that might interest users but don't match their history
  • User frustration at being unable to discover new content or products
  • Decreased engagement as experiences become predictable and stale

A study by researchers at the University of Minnesota found that highly personalized news feeds led to a 23% reduction in content diversity over time, even when users actively sought varied content. This “filter bubble” effect not only limits discovery but can leave users feeling manipulated or constrained.
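To make drift toward homogeneity measurable, a team could track the category diversity of each generated feed over time. Below is a minimal Python sketch using Shannon entropy over item categories; the `feed_diversity` helper and the sample feeds are hypothetical illustrations, not taken from the cited study:

```python
from collections import Counter
from math import log2

def feed_diversity(feed_categories):
    """Shannon entropy (in bits) of the category mix in a recommendation feed.

    0.0 means every item shares one category; higher values mean more variety.
    A steadily falling value across successive feeds is one signal of a
    filter bubble forming.
    """
    counts = Counter(feed_categories)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical feeds sampled a month apart for the same user
early = ["news", "sports", "cooking", "travel", "news", "music"]
later = ["news", "news", "news", "sports", "news", "news"]
print(feed_diversity(early), feed_diversity(later))
```

Logging a metric like this alongside engagement numbers lets the monitoring suggested above ("watch for increasing homogeneity") become an alert rather than an anecdote.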

Incorrect Assumptions: When Data Tells the Wrong Story

AI recommendations based on incomplete or misinterpreted data can lead to irrelevant, inappropriate, or even offensive suggestions. These incorrect assumptions often stem from:

  • Limited data points that don't capture the full context of user behavior
  • Misinterpreting casual interest as strong preference
  • Failing to distinguish between the user's behavior and actions taken on behalf of others
  • Not recognizing temporary or situational needs versus ongoing preferences

These misinterpretations can range from merely annoying (continuously recommending products similar to a one-time purchase) to deeply problematic (showing weight loss ads to users with eating disorders based on their browsing history).

A particularly striking example occurred when a major retailer's algorithm began sending pregnancy-related offers to a teenage girl before her family knew she was pregnant. While technically accurate in its prediction, this incident highlights how even "correct" personalization can fail to consider the broader human context and implications.

Lack of Transparency: The Black Box Problem

Users increasingly want to understand why they're being shown specific content or recommendations. When personalization happens behind a "black box" without explanation, it can create:

  • Distrust in the system and the brand behind it
  • Confusion about how to influence or improve recommendations
  • Feelings of being manipulated rather than assisted
  • Concerns about what personal data is being used and how

Research from the Pew Research Center shows that 74% of users consider it important to know why they are seeing certain recommendations, yet only 22% of personalization systems provide clear explanations for their suggestions.

Inconsistent Experiences Across Channels

Many organizations struggle to maintain consistent personalization across different touchpoints, creating disjointed experiences:

  • Product recommendations that vary wildly between web and mobile
  • Personalization that doesn't account for previous customer service interactions
  • Different personalization strategies across email, website, and app experiences
  • Recommendations that don't adapt to the user's current context or device

This inconsistency can make personalization feel random or arbitrary rather than thoughtfully tailored to the user's needs.

Neglecting Privacy Concerns and Control

As personalization becomes more sophisticated, user concerns about privacy intensify. Key issues include:

  • Collecting more data than necessary for effective personalization
  • Lack of user control over what information influences their experience
  • Unclear opt-out mechanisms for personalization features
  • Personalization that reveals sensitive information to others

A recent study found that 79% of users want control over what personal data influences their recommendations, but only 31% felt they had adequate control in their most-used digital products.

How Product Managers Can Leverage UX Insight for Better AI Personalization

To create a personalized experience that feels natural and helpful rather than creepy or restrictive, UX teams need to validate AI-driven decisions through systematic research with real users. Rather than treating personalization as a purely technical challenge, successful organizations recognize it as a human-centered design problem that requires continuous testing and refinement.

Understanding User Mental Models Through Card Sorting & Tree Testing

Card sorting and tree testing help structure content in a way that aligns with users' expectations and mental models, creating a foundation for personalization that feels intuitive rather than imposed:

  • Open and Closed Card Sorting – Helps understand how different user segments naturally categorize content, products, or features, providing a baseline for personalization strategies
  • Tree Testing – Validates whether personalized navigation structures work for different user types and contexts
  • Hybrid Approaches – Combining card sorting with interviews to understand not just how users categorize items, but why they do so

Case Study: A financial services company used card sorting with different customer segments to discover distinct mental models for organizing financial products. Rather than creating a one-size-fits-all personalization system, they developed segment-specific personalization frameworks that aligned with these different mental models, resulting in a 28% increase in product discovery and application rates.

Validating Interaction Patterns Through First-Click Testing

First-click testing ensures users interact with personalized experiences as intended across different contexts and scenarios:

  • Testing how users respond to personalized elements vs. standard content
  • Evaluating whether personalization cues (like "Recommended for you") influence click behavior
  • Comparing how different user segments respond to the same personalization approaches
  • Identifying potential confusion points in personalized interfaces

Research by the Nielsen Norman Group found that users who get the first click right complete their task successfully 87% of the time. For personalized experiences, this is even more critical, as users may abandon a site entirely if early personalized recommendations seem irrelevant or confusing.

Gathering Qualitative Insights Through User Interviews & Usability Testing

Direct observation and conversation with users provides critical context for personalization strategies:

  • Moderated Usability Testing – Reveals how users react to personalized elements in real-time
  • Think-Aloud Protocols – Help understand users' expectations and reactions to personalization
  • Longitudinal Studies – Track how perceptions of personalization change over time and repeated use
  • Contextual Inquiry – Observes how personalization fits into users' broader goals and environments

These qualitative approaches help answer critical questions like:

  • When does personalization feel helpful versus intrusive?
  • What level of explanation do users want for recommendations?
  • How do different user segments react to similar personalization strategies?
  • What control do users expect over their personalized experience?

Measuring Sentiment Through Surveys & User Feedback

Systematic feedback collection helps gauge users' comfort levels with AI-driven recommendations:

  • Targeted Microsurveys – Quick pulse checks after personalized interactions
  • Preference Centers – Direct input mechanisms for refining personalization
  • Satisfaction Tracking – Monitoring how personalization affects overall satisfaction metrics
  • Feature-Specific Feedback – Gathering input on specific personalization features

A streaming service discovered through targeted surveys that users were significantly more satisfied with content recommendations when they could see a clear explanation of why items were suggested (e.g., "Because you watched X"). Implementing these explanations increased content exploration by 34% and reduced account cancellations by 8%.

A/B Testing Personalization Approaches

Experimental validation ensures personalization actually improves key metrics:

  • Testing different levels of personalization intensity
  • Comparing explicit versus implicit personalization methods
  • Evaluating various approaches to explaining recommendations
  • Measuring the impact of personalization on both short and long-term engagement

Importantly, A/B testing should look beyond immediate conversion metrics to consider longer-term impacts on user satisfaction, trust, and retention.
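For the experimental comparison itself, one common approach is a two-proportion z-test on conversion counts from the two variants. The sketch below is a minimal illustration with made-up numbers; the function name and sample sizes are assumptions, not a prescription for any particular metric:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of two personalization variants.

    Returns (z, two-sided p-value). Assumes samples are large enough
    for the normal approximation to hold.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical run: variant B (heavier personalization) vs. variant A (control)
z, p = two_proportion_z_test(conv_a=120, n_a=2000, conv_b=160, n_b=2000)
print(f"z={z:.2f}, p={p:.4f}")
```

A significant short-term lift like this should still be weighed against the longer-term satisfaction and retention signals mentioned above before declaring a winner.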

Building a User-Centered Personalization Strategy That Works

To implement personalization that truly enhances user experience, organizations should follow these research-backed principles:

1. Start with User Needs, Not Technical Capabilities

The most effective personalization addresses genuine user needs rather than showcasing algorithmic sophistication:

  • Identify specific pain points that personalization could solve
  • Understand which aspects of your product would benefit most from personalization
  • Determine where users already expect or desire personalized experiences
  • Recognize which elements should remain consistent for all users

2. Implement Transparent Personalization

Users increasingly expect to understand and control how their experiences are personalized:

  • Clearly communicate what aspects of the experience are personalized
  • Explain the primary factors influencing recommendations
  • Provide simple mechanisms for users to adjust or reset their personalization
  • Consider making personalization opt-in for sensitive domains

3. Design for Serendipity and Discovery

Effective personalization balances predictability with discovery:

  • Deliberately introduce variety into recommendations
  • Include "exploration" categories alongside highly targeted suggestions
  • Monitor and prevent increasing homogeneity in personalized feeds over time
  • Allow users to easily branch out beyond their established patterns
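One simple way to implement deliberate variety is to substitute exploration items into a personalized ranking at a tunable rate, in the spirit of epsilon-greedy exploration. The sketch below is a hypothetical illustration; the function name and the `explore_rate` value are assumptions, not recommendations from the article:

```python
import random

def mix_in_exploration(personalized, exploration, explore_rate=0.2, seed=None):
    """Blend exploration items into a personalized ranking.

    Each slot keeps the personalized pick with probability 1 - explore_rate;
    otherwise an unused exploration item is substituted in its place.
    """
    rng = random.Random(seed)
    # only consider exploration items the ranking doesn't already contain
    pool = [item for item in exploration if item not in personalized]
    mixed = []
    for item in personalized:
        if pool and rng.random() < explore_rate:
            mixed.append(pool.pop(rng.randrange(len(pool))))
        else:
            mixed.append(item)
    return mixed

feed = mix_in_exploration(
    personalized=["thriller_1", "thriller_2", "thriller_3", "thriller_4"],
    exploration=["documentary_1", "comedy_1"],
    explore_rate=0.25,
    seed=7,
)
print(feed)
```

Tracking how users engage with the substituted items also provides a direct measurement of whether the "filter bubble" is costing discovery.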

4. Apply Progressive Personalization

Rather than immediately implementing highly tailored experiences, consider a gradual approach:

  • Begin with light personalization based on explicit user choices
  • Gradually introduce more sophisticated personalization as users engage
  • Calibrate personalization depth based on relationship strength and context
  • Adjust personalization based on user feedback and behavior

5. Establish Continuous Feedback Loops

Personalization should never be "set and forget":

  • Implement regular evaluation cycles for personalization effectiveness
  • Create easy feedback mechanisms for users to rate recommendations
  • Monitor for signs of over-personalization or filter bubbles
  • Regularly test personalization assumptions with diverse user groups

The Future of Personalization: Human-Centered AI

As AI capabilities continue to advance, the companies that will succeed with personalization won't necessarily be those with the most sophisticated algorithms, but those who best integrate human understanding into their approach. The future of personalization lies in creating systems that:

  • Learn from qualitative human feedback, not just behavioral data
  • Respect the nuance and complexity of human preferences
  • Maintain transparency in how personalization works
  • Empower users with appropriate control
  • Balance algorithm-driven efficiency with human-centered design principles

AI should learn from real people, not just data. UX research ensures that personalization enhances, rather than alienates, users by bringing human insight to algorithmic decisions.

By combining the pattern-recognition power of AI with the contextual understanding provided by UX research, organizations can create personalized experiences that feel less like surveillance and more like genuine understanding: experiences that don't just predict what users might click, but truly respond to what they need and value.
