5 Signs It's Time to Switch Your Research Platform

How to Know When Your Current Tool Is Holding You Back

Your research platform should accelerate insights, not create obstacles. Yet many enterprise research teams are discovering their tools weren't built for the scale, velocity, and quality standards that today’s product development demands.

If you're experiencing any of these five warning signs, it might be time to evaluate alternatives.

1. Your Research Team Is Creating Internal Queues

The Problem: When platforms limit concurrent studies, research becomes a first-come, first-served bottleneck, and urgent research gets delayed by scheduled projects. In fast-moving businesses, research velocity directly impacts competitiveness. Every queued study is a delayed product launch, a missed market opportunity, or a competitor gaining ground.

The Solution: Enterprise-grade research platforms allow unlimited concurrent studies. Multiple teams can research simultaneously without coordination overhead or artificial constraints. Organizations that remove study volume constraints report 3-4x increases in research velocity within the first quarter of switching platforms.

2. Pricing Has Become Unpredictable 

The Problem: When pricing gets too complicated, it becomes unpredictable. Some platforms charge per-participant fees and impose usage caps and seat limits, not to mention other hidden charges. Many pricing models weren't designed for enterprise-scale research; they were designed to maximize per-transaction revenue. When you can't predict research costs, you can't plan research roadmaps. Teams start rationing participants, avoiding "expensive" audiences, or excluding stakeholders from platform access to control costs.

The Solution: Transparent, scalable pricing with unlimited seats that grows with your needs.  Volume-based plans that reward research investment rather than penalizing growth. No hidden per-participant markups. 

3. Participant Quality Is Declining

The Problem: This is the most dangerous sign because it corrupts insights at the source. Low-quality participants create low-quality data, which creates poor product decisions.

Warning signs include:

  • Participants using AI assistance during moderated sessions
  • Bot-like response patterns in surveys
  • Participants who clearly don't meet screening criteria
  • Low-effort responses that provide no actionable insight
  • Increasing "throw away this response" rates in your analysis

Poor participant quality isn't just frustrating; it's expensive. Research with the wrong participants produces misleading insights that derail product strategy, waste development resources, and damage market positioning.

The Solution: Multi-layer fraud prevention systems. Behavioral verification. AI-response detection. Real-time quality monitoring. 100% quality guarantees backed by participant replacement policies. When product, design, and research teams work with platforms that guarantee participant quality, they can trust their research and make real business decisions from their insights.

4. You Can't Reach Your Actual Target Audience

The Problem: Limited panel reach forces compromises. Example: You need B2B software buyers but you get anyone who's used software. Research with "close enough" participants produces insights that don't apply to your actual market. Product decisions based on proxy audiences fail in real-world application.

The Solution: Tools like Optimal offer 10M+ participants across 150+ countries with genuine niche targeting capabilities. Proven Australian market coverage, from broad demographics to specialized B2B audiences. Advanced screening beyond basic demographics.

5. Your Platform Hasn't Evolved with Your Needs

The Problem: You chose your platform 3-5 years ago, when you were a smaller team with simpler needs. But your organization has grown, research has become more strategic, and your platform's limitations are now organizational constraints. When your tools can't support enterprise workflows, your research function can't deliver enterprise value.

The Solution: Complete research lifecycle support from recruitment to analysis. AI-powered insight generation. Enterprise-grade security and compliance. Dedicated support and onboarding. Integration ecosystems that connect research across your organization.

Why Enterprises Are Switching to Optimal

Leading product, design and research teams are moving to Optimal because it's specifically built to address the pain points outlined above:

  1. No Study Volume Constraints: Run unlimited concurrent studies across your entire organization
  2. Transparent, Scalable Pricing: Flexible plans with unlimited seats and predictable costs
  3. Verified Quality Guarantee: 10M+ participants with multi-layer fraud prevention and 100% quality guarantee
  4. Enterprise-Grade Platform: Complete research lifecycle tools, AI-powered insights, dedicated support

Next Steps 

If you're experiencing any of these five signs, it's worth exploring alternatives. The cost of continuing with inadequate tools (delayed launches, poor data quality, limited research capacity) far outweighs the effort of evaluation.

Start a Free Trial – Test Optimal with your real research projects

Compare Platforms – See detailed capability comparisons

Talk to Our Team – Discuss your specific research needs with Australian experts


5 Alternatives to Askable for User Research and Participant Recruitment

When evaluating tools for user testing and participant recruitment, Askable often appears on the shortlist, especially for teams based in Australia and New Zealand. But in 2025, many researchers are finding Askable’s limitations increasingly difficult to work around: restricted study volume, inconsistent participant quality, and new pricing that limits flexibility.

If you’re exploring Askable alternatives that offer more scalability, higher data quality, and global reach, here are five strong options.

1. Optimal: Best Overall Alternative for Scalable, AI-Powered Research 

Optimal is a comprehensive user insights platform supporting the full research lifecycle, from participant recruitment to analysis and reporting. Unlike Askable, which has historically focused on recruitment, Optimal unifies multiple research methods in one platform, including prototype testing, card sorting, tree testing, and AI-assisted interviews.

Why teams switch from Askable to Optimal

1. You can only run one study at a time in Askable

Optimal removes that bottleneck, letting you launch multiple concurrent studies across teams and research methods.

2. Askable’s new pricing limits flexibility 

Optimal offers scalable plans with unlimited seats, so teams only pay for what they need.

3. Askable’s participant quality has dropped

Optimal provides access to over 10 million verified participants worldwide, with strong fraud-prevention and screening systems that eliminate low-effort or AI-assisted responses.



Additional advantages

  • End-to-end research tools in one workspace
  • AI-powered insight generation that tags and summarizes automatically
  • Enterprise-grade reliability with decade-long market trust
  • Dedicated onboarding and SLA-backed support

Best for: Teams seeking an enterprise-ready, scalable research platform that eliminates the operational constraints of Askable.

2. UserTesting: Best for Video-Based Moderated Studies

UserTesting remains one of the most established platforms for moderated and unmoderated usability testing. It excels at gathering video feedback from participants in real time.

Pros:

  • Large participant pool with strong demographic filters
  • Supports moderated sessions and live interviews
  • Integrations with design tools like Figma and Miro


Cons:

  • Higher cost at enterprise scale
  • Less flexible for survey-driven or unmoderated studies compared with Optimal
  • The UI has become increasingly complex and buggy as UserTesting has expanded its platform through acquisitions such as UserZoom and Validately.


Best for: Companies prioritizing live, moderated usability sessions.

3. Maze: Best for Product Teams Using Figma Prototypes

Maze offers seamless Figma integration and focuses on automating prototype-testing workflows for product and design teams.

Pros:

  • Excellent Figma and Adobe XD integration
  • Automated reporting
  • Good fit for early-stage design validation

Cons:

  • Limited depth for qualitative research
  • Smaller participant pool

Best for: Design-first teams validating prototypes and navigation flows.

4. Lyssna (formerly UsabilityHub): Best for Fast Design Feedback

Lyssna focuses on quick-turn, unmoderated studies such as preference tests, first-click tests, and five-second tests.

Pros:

  • Fast turnaround
  • Simple, intuitive interface
  • Affordable for smaller teams

Cons:

  • Limited participant targeting options
  • Narrower study types than Askable

Best for: Designers and researchers running lightweight validation tests.

5. Dovetail: Best for Research Repository and Analysis

Dovetail is primarily a qualitative data repository rather than a testing platform. It’s useful for centralizing and analyzing insights from research studies conducted elsewhere.

Pros:

  • Strong tagging and note-taking features
  • Centralized research hub for large teams

Cons:

  • Doesn’t recruit participants or run studies
  • Requires manual uploads from other tools like Askable or UserTesting

Best for: Research teams centralizing insights from multiple sources.

Final Thoughts on Alternatives to Askable

If your goal is simply to recruit local participants, Askable can still meet basic needs. But if you’re looking to scale research in your organization, integrate testing and analysis, and automate insights, Optimal stands out as the best long-term investment. Its blend of global reach, AI-powered analysis, and proven enterprise support makes it the natural next step for growing research teams. You can try Optimal for free here.


Why User Interviews Haven't Evolved in 20 Years (And How We're Changing That)

Are we exaggerating when we say that the way researchers run and analyze user interviews hasn’t changed in 20 years? We don’t think so. When we talk to our customers to understand their current workflows, they look exactly the same as they did when we started this business 17 years ago: record, transcribe, analyze manually, create reports. See the problem?

Despite advances in technology across every industry, the fundamental process of conducting and analyzing user interviews has remained largely unchanged. While we've transformed how we design, develop, and deploy products, the way we understand our users is still trapped in workflows that would feel familiar to product, design, and research teams from decades ago.

The Same Old Interview Analysis Workflow 

For most researchers, even in the best-case scenario, interview analysis can take several hours spread over multiple days. Yet in that same timeframe, thanks in part to new and emerging AI tools, an engineering team can design, build, test, and deploy new features. That just doesn't make sense.

The problem isn't that researchers  lack tools. It's that they haven’t had the right ones. Most tools focus on transcription and storage, treating interviews like static documents rather than dynamic sources of intelligence. Testing with just 5 users can uncover 85% of usability problems, yet most teams struggle to complete even basic analysis in time to influence product decisions. Luckily, things are finally starting to change.
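For context, the widely cited "5 users, 85% of problems" figure traces back to Nielsen and Landauer's problem-discovery model. As a rough sketch, assuming their commonly quoted per-user discovery rate of about 31%, the chance that a given usability problem is seen by at least one of n testers is:

P(found) = 1 − (1 − 0.31)^n, which for n = 5 works out to roughly 0.85, or about 85%.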

When it comes to user research, three things are happening in the industry right now that are forcing a transformation:

  1. The rise of AI means UX research matters more than ever. With AI accelerating product development cycles, the cost of building the wrong thing has never been higher. Companies that invest in UX early cut development time by 33-50%, and with AI, that advantage compounds exponentially.
  2. We're drowning in data and have fewer resources. The need for UX research keeps increasing, while UX research teams are more resource-constrained than ever. Analyzing hours of video content to gather insights just isn't something teams have time for anymore.
  3. AI finally understands research. AI has evolved to a place where it can actually provide valuable insights. Not just transcription. Real research intelligence that recognizes patterns, emotions, and the gap between what users say and what they actually mean.

A Dirty Little Research Secret + A Solution 

We’re just going to say it: most user insights from interviews never make it past the recording stage. When it comes to talking to users, the challenge researchers in our audience raise most often is recruiting: finding enough participants who match their criteria. But on top of finding the right people to talk to, there’s another challenge that’s even worse: finding time to analyze what users tell us. What if you had a tool where, the moment you uploaded an interview video, AI surfaced key themes, pain points, and opportunities automatically? What if you could ask your interview footage questions and get back evidence-based answers with video citations?

This isn't about replacing human expertise; it's about augmenting it. AI-powered tools can process and categorize data within hours or days, significantly reducing workload. But more importantly, they can surface patterns and connections that human analysts might miss when rushing through analysis under deadline pressure. Thanks to AI, we're witnessing the beginning of a research renaissance, and a big part of that is reimagining the way we do user interviews.

Why AI for User Interviews Is a Game Changer

When interview analysis accelerates from weeks to hours, everything changes.

Product teams can validate ideas before building them. Design teams can test concepts in real time. Engineering teams can prioritize features based on actual user needs, not assumptions. Product, design, and research teams that embrace AI in these workflows will surface insights, generate evidence-backed recommendations, and influence product decisions at the speed of thought.

We know that 32% of all customers would stop doing business with a brand they loved after one bad experience. Talking to your users more often makes it possible to prevent these experiences by acting on user feedback before problems become critical. When every user insight comes with video evidence, when every recommendation links to supporting clips, when every user story includes the actual user telling it, research stops being opinion and becomes impossible to ignore. When you can more easily gather, analyze, and share the content from user interviews, those real user voices start to get referenced in executive meetings. Product decisions begin to include user clips. Engineering sprints start to reference actual user needs. Marketing messages reflect real user voices and language.

The best product, design and research teams are already looking for tools that can support this transformation. They know that when interviews become intelligent, the entire organization becomes more user-centric. At Optimal, we're focused on improving the traditional user interviews workflow by incorporating revolutionary AI features into our tools. Stay tuned for exciting updates on how we're reimagining user interviews.


Optimal vs. Great Question: Why Enterprise Teams Need Comprehensive Research Platforms

The decision between interview-focused research tools and comprehensive user insight platforms fundamentally shapes how teams generate, analyze, and act on user feedback. This choice affects not only immediate research capabilities but also long-term strategic planning and organizational impact. While Great Question focuses primarily on customer interviews and basic panel management with streamlined functionality, Optimal provides more robust capabilities, global participant reach, and advanced analytics infrastructure that the world's biggest brands rely on to build products users genuinely love. Optimal's platform enables teams to conduct sophisticated research, integrate insights across departments, and deliver actionable recommendations that drive meaningful business outcomes.

Why Choose Optimal over Great Question?

Strategic Research Capabilities vs. Interview-Centric Tools

Optimal's Research Leadership: Optimal delivers complete research capabilities spanning information architecture testing, prototype validation, card sorting, tree testing, first-click analysis, live site testing, and qualitative insights, all powered by AI-driven analysis and backed by 17 years of specialized research expertise that transforms user feedback into actionable business intelligence. Optimal's live site testing allows you to test actual websites and web apps without code, enabling continuous optimization and real-time insights post-launch.

Great Question's Limited Research Scope: In contrast, Great Question operates primarily as an interview scheduling and panel management tool with basic survey capabilities, lacking the comprehensive research methodologies and specialized testing tools that enterprise research programs require for strategic impact across the full product development lifecycle.

Enterprise-Ready Research Suite: Optimal serves Fortune 500 clients including Lego, Nike, and Netflix with SOC 2 compliance, enterprise security protocols, and a comprehensive research toolkit that scales with organizational growth and research sophistication.

Workflow Limitations: Great Question's interview-focused approach restricts teams to primarily qualitative methods, requiring additional tools for quantitative validation and specialized testing scenarios that modern product teams demand for comprehensive user understanding.

Participant Quality and Global Reach

Global Research Network: Optimal's 10M+ verified participants across 150+ countries enable sophisticated audience targeting, international market research, and reliable recruitment for any demographic or geographic requirement, from enterprise software buyers in Germany to mobile gamers in Southeast Asia.

Limited Panel Access: Great Question provides access to 3M+ participants with basic recruitment capabilities focused primarily on existing customer panels, limiting research scope for complex audience requirements and international market validation.

Advanced Participant Targeting: Optimal includes sophisticated recruitment filters, managed recruitment services, and quality assurance protocols that ensure research validity and participant engagement across diverse study requirements.

Basic Recruitment Features: Great Question focuses on CRM integration and existing customer recruitment without advanced screening capabilities or specialized audience targeting that complex research studies require.

Research Methodology Depth and Platform Capabilities

Complete Research Methodology Suite: Optimal provides full-spectrum research capabilities including advanced card sorting, tree testing, prototype validation, first-click testing, surveys, and qualitative insights with integrated AI analysis across all methodologies and specialized tools designed for specific research challenges.

Interview-Focused Limitations: Great Question offers elementary research capabilities centered on customer interviews and basic surveys, lacking the specialized testing tools enterprise teams need for information architecture, prototype validation, and quantitative user behavior analysis.

AI-Powered Research Operations: Optimal streamlines research workflows with automated analysis, AI-powered insights, advanced statistical reporting, and seamless collaboration tools that accelerate insight delivery while maintaining analytical rigor. Our new Interviews tool revolutionizes qualitative research: upload interview videos and let AI automatically surface key themes, generate smart highlight reels with timestamped evidence, and produce actionable insights in hours instead of weeks, eliminating the manual synthesis bottleneck.

Manual Analysis Dependencies: Great Question requires significant manual effort for insight synthesis beyond interview transcription, creating workflow inefficiencies that slow research velocity and limit the depth of analysis possible across large datasets.

Where Great Question Falls Short

Great Question may be a good choice for teams who are looking for:

  • Simple customer interview management without complex research requirements
  • Basic panel recruitment focused on existing customers
  • Streamlined workflows for small-scale qualitative studies
  • Budget-conscious solutions prioritizing low cost over comprehensive capabilities
  • Teams primarily focused on customer development rather than strategic UX research

When Optimal Delivers Strategic Value

Optimal becomes essential for:

  • Strategic Research Programs: When user insights drive business strategy, product decisions, and require diverse research methodologies beyond interviews
  • Information Architecture Excellence: Teams requiring specialized testing for navigation, content organization, and user mental models that directly impact product usability
  • Global Organizations: Requiring international research capabilities, market validation, and diverse participant recruitment across multiple regions
  • Quality-Critical Studies: Where participant verification, advanced analytics, statistical rigor, and research validity matter for strategic decision-making
  • Enterprise Compliance: Organizations with security, privacy, and regulatory requirements demanding SOC 2 compliance and enterprise-grade infrastructure
  • Advanced Research Operations: Teams requiring AI-powered insights, comprehensive analytics, specialized testing methodologies, and scalable research capabilities
  • Prototype and Design Validation: Product teams needing early-stage testing, iterative validation, and quantitative feedback on design concepts and user flows

Ready to see how leading brands including Lego, Netflix and Nike achieve better research outcomes? Experience how Optimal's platform delivers user insights that adapt to your team's growing needs and research sophistication.


The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 


UXDX Dublin 2024: Where Chocolate Meets UX Innovation

What happens when you mix New Zealand's finest chocolate with 870 of Europe's brightest UX minds? Pure magic, as we discovered at UXDX Dublin 2024!

A sweet start

Our UXDX journey began with pre-event drinks (courtesy of yours truly, Optimal Workshop) and a special treat from down under - a truckload of Whittaker's chocolate that quickly became the talk of the conference. Our impromptu card sorting exercise with different Whittaker's flavors revealed some interesting preferences, with Coconut Slab emerging as the clear favorite among attendees!

Cross-Functional Collaboration: More Than Just a Buzzword

The conference's core theme of breaking down silos between design, product, and engineering teams resonated deeply with our mission at Optimal Workshop. Andrew Birgiolas from Sephora delivered what I call a "magical performance" on collaboration as a product, complete with an unforgettable moment where he used his shoe to demonstrate communication scenarios (now that's what we call thinking on your feet!).

Purpose-driven design

Frank Gaine's session on organizational purpose was a standout moment, emphasizing the importance of alignment at three crucial levels:

  • Company purpose
  • Team purpose
  • Individual purpose

This multi-layered approach to purpose struck a chord with attendees, reminding us that effective UX research and design must be anchored in clear, meaningful objectives at every level.

The art of communication

One of the most practical takeaways came from Kelle Link's session on navigating enterprise ecosystems. Her candid discussion about the necessity of becoming proficient in deck creation sparked knowing laughter from the audience. As our CEO noted, it's a crucial skill for communicating with senior leadership, board members, and investors - even if it means becoming a "deck ninja" (to use a more family-friendly term).

Standardization meets innovation

Chris Grant's insights on standardization hit home: "You need to standardize everything so things are predictable for a team." This seemingly counterintuitive approach to fostering innovation resonated with our own experience at Optimal Workshop - when the basics are predictable, teams have more bandwidth for tackling the unpredictable challenges that drive real innovation.

Building impactful product teams

Matt Fenby-Taylor's discussion of the "pirate vs. worker bee" persona balance was particularly illuminating. Finding team members who can maintain that delicate equilibrium between creative disruption and methodical execution is crucial for building truly impactful product teams.

Research evolution

A key thread throughout the conference was the evolution of UX research methods. Nadine Piecha's "Beyond Interviews" session emphasized that research is truly a team sport, requiring involvement from designers, PMs, and other stakeholders. This aligns perfectly with our mission at Optimal Workshop to make research more accessible and actionable for everyone.

The AI conversation

The debate on AI's role in design and research between John Cleere and Kevin Hawkins sparked intense discussions. The consensus? AI will augment rather than replace human researchers, allowing us to focus more on strategic thinking and deeper insights - a perspective that aligns with our own approach to integrating AI capabilities.

Looking ahead

As we reflect on UXDX 2024, a few things are clear:

  1. The industry is evolving rapidly, but the fundamentals of human-centered design remain crucial
  2. Cross-functional collaboration isn't just nice to have - it's essential for delivering impactful products
  3. The future of UX research and design is bright, with teams becoming more integrated and methodologies more sophisticated

The power of community

Perhaps the most valuable aspect of UXDX wasn't just the formal sessions, but the connections made over coffee (which we were happy to provide!) and, yes, New Zealand chocolate. The mix of workshops, forums, and networking opportunities created an environment where ideas could flow freely and partnerships could form naturally.

What's next?

As we look forward to UXDX 2025, we're excited to see how these conversations evolve. Will AI transform how we approach UX research? How will cross-functional collaboration continue to develop? And most importantly, which Whittaker's chocolate flavor will reign supreme next year?

One thing's for certain - the UX community is more vibrant and collaborative than ever, and we're proud to be part of its evolution. I’ve said it before and I’ll say it again: the industry has a very bright future.

See you next year! We’ll remember to bring more Coconut Slab chocolate next time - it seems we've created quite a demand!

