
Optimal Blog

Articles and Podcasts on Customer Service, AI and Automation, Product, and more


Latest


A Look at Rally + Optimal: From Recruitment to Real Insights

A well-maintained participant panel is more than a time-saver. It sets your team up for better research from day one. The people you recruit, and how you track, segment, and manage those relationships over time, directly shape how reliable your findings are.

With the right setup in place, you can move forward with confidence, knowing that these participants meet your criteria, aren’t overused, and can bring fresh perspectives to each study.

Tools like Rally UXR bring that structure to participant recruitment. Rally helps you build and manage your own participant panel, keep track of consent and contact history, coordinate logistics, and stay on top of all the moving parts. You can also see things like incentive history and email engagement, making it much easier to decide who to invite and when.

But recruitment is just the starting point. The real value comes next: running the research and turning participant feedback into insights you can actually use.

Using Rally + Optimal together


Whether you’re running unmoderated studies to test designs, navigation, or content, or conducting moderated usability testing calls, having the right research tools in place is critical. If you’re already using Rally, pairing it with Optimal can connect the dots from recruitment through to insights, without adding friction to your workflow.

You can also use Optimal’s on-demand or custom managed recruitment services, though Rally’s strength lies in building your own custom panel and database.

Here’s how to combine Rally and Optimal into a smooth, efficient research workflow.

Start with intentional recruitment


Define your participant criteria in Rally, and use screening questions for more than qualification. Think beyond “does this person qualify?” and start building segments you can reuse: power users vs. casual users, returning users vs. first-timers, people familiar with the old design vs. the new. These segments make it easier to run focused studies and compare results over time.

Build your study in parallel


A simple shift that makes a big difference: don’t wait. Build your study in Optimal before sending invites from Rally. That way, the study link can be dropped into Rally and distribution can happen the moment participants are ready.

Use the strengths of each platform


Rally handles the relationship and profile management: who's been invited, who's confirmed, who needs a reminder, screener and survey history, consent forms, and more. Optimal handles the research: collecting quantitative and qualitative data, visualizing patterns, quantifying usability issues, automating insights with AI, and surfacing the metrics and insights that actually answer your research question.

With Optimal, you can immediately put Rally-recruited participants into studies including:

  • Prototype testing
  • Live site testing 
  • Card sorting
  • Tree testing
  • First-click testing
  • Surveys

Keep your insights in one place


When it comes to research, scattered insights = lost impact. Using Optimal as your central hub for results, recordings, and analysis makes it easier to share findings with your team and stakeholders, track progress over time, and back up decisions with real evidence. 

The tools you use for recruitment and the tools you use for research aren't just operational choices. They shape your research culture. When recruitment and research are both well-structured, everything runs more smoothly. Teams that invest in structure on both ends of the workflow tend to produce research that's faster, more credible, and more likely to influence decisions. 

Rally and Optimal are powerful on their own. Together, they create a workflow that’s scalable, insight-driven, and built for continuous discovery.

If you're not yet using Optimal, you can start a free trial or book a demo.


Ethical AI Integration in User Research

Artificial intelligence offers remarkable capabilities for UX research. It can process massive datasets, identify patterns humans might miss, and accelerate insights that traditionally took weeks to uncover. But as the adage goes: with great power comes great responsibility.

As research teams increasingly adopt AI-powered tools, we're facing critical questions about data privacy, algorithmic bias, and ethical use of user information. These aren't just philosophical concerns; they're practical challenges that every research team needs to address.

More data, more risk

AI thrives on data. The more information it can access, the better its pattern recognition and predictive capabilities become. For researchers, this creates a fundamental tension. To gain meaningful insights, you need comprehensive user data, but collecting and processing this data creates privacy risks that traditional research methods didn't face at the same scale.

Think about a typical AI-powered analysis:

  • User session recordings processed to identify usability issues
  • Behavioral data analyzed to understand user journeys
  • Interview transcripts processed for sentiment analysis and theme identification

Each of these activities involves handling sensitive user information. Each creates potential exposure points where data could be misused, breached, or processed in ways users didn't anticipate. The question isn't whether you should use AI but rather how to use it responsibly.

Building privacy into your AI research practice

Privacy can't be an afterthought. It needs to be foundational to how you approach AI-powered research.

Collect only the data you actually need. This seems obvious, but AI's hunger for information can encourage overcollection. Before implementing any AI tool, ask: what's the minimum data required to achieve our research goals? Just because you can collect comprehensive behavioral data doesn't mean you should. Be intentional about what you gather and why.

Data security basics become even more critical when AI is involved. Encryption, secure storage, access controls: these aren't optional. But security goes beyond technology. It includes policies around who can access data, how long it's retained, and what happens when a project concludes.

AI systems often retain data to improve their algorithms. Make sure you understand your tools' data retention policies and ensure they align with your privacy commitments. Some tools, like Optimal, offer PII redaction on user interviews to ensure data security and privacy.
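As a rough illustration of the general technique, here is a minimal, regex-based redaction sketch in Python. The patterns and function name are our own, not Optimal's implementation; production redaction handles far more PII types and edge cases than this.

```python
import re

# Minimal patterns for two common PII types. Real redaction pipelines
# cover names, addresses, IDs, and more; these are illustrative only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(transcript: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    transcript = EMAIL_RE.sub("[EMAIL]", transcript)
    transcript = PHONE_RE.sub("[PHONE]", transcript)
    return transcript

print(redact_pii("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
```

Running the last line replaces both the email address and the phone number with placeholders before the transcript goes anywhere near an AI model.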

Be transparent with users

Users deserve to know how their data is being used. This goes beyond the standard privacy policy checkbox. When conducting research with AI-powered tools, you need to clearly communicate:

  • What data you're collecting
  • How AI will process that data
  • What insights you're hoping to gain
  • How long you'll retain the information
  • Who else might have access to it

Give users meaningful control. If they're uncomfortable with AI analysis, offer alternatives. If they want their data deleted, make that process straightforward. Transparency builds trust. And trust is the foundation of good research.
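One lightweight way to keep these commitments explicit is to record them per study in a structured form. The sketch below is purely illustrative; the class and field names are ours, not part of any tool.

```python
from dataclasses import dataclass, field

@dataclass
class AIDataDisclosure:
    """One record per study: what participants are told before AI
    processing. Field names are illustrative, not from any tool."""
    data_collected: list          # what data you're collecting
    ai_processing: str            # how AI will process that data
    research_goal: str            # what insights you hope to gain
    retention_days: int           # how long you'll retain the data
    shared_with: list = field(default_factory=list)  # who else has access
    allows_opt_out: bool = True   # participants can decline AI analysis

disclosure = AIDataDisclosure(
    data_collected=["interview transcript"],
    ai_processing="theme and sentiment analysis",
    research_goal="understand onboarding friction",
    retention_days=90,
    shared_with=["research team"],
)
print(disclosure.retention_days, disclosure.allows_opt_out)
```

Writing this down per study forces the team to answer every question on the list above before recruitment starts, and gives participants a concrete document to consent to.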

The bias problem

Something all teams incorporating AI into their research practice need to be aware of: AI systems can perpetuate and amplify bias. Machine learning algorithms learn from training data. If that data contains biased patterns, and most data does, the AI will replicate those biases in its analysis. This can lead to research insights that systematically overlook certain user groups or misinterpret their needs.

For researchers, this creates a serious challenge. You're using AI to understand users, but the AI itself might have blind spots that skew your understanding. Eliminating bias entirely is probably impossible, but you can take concrete steps to minimize its impact.

  1. Diversify your training data. If you're building custom AI models, ensure your training data represents the full diversity of your user base. This includes obvious factors like demographics, but also less visible ones like technical proficiency, language preferences, and usage contexts.
  2. Use multiple analytical approaches. Don't rely solely on AI-generated insights. Combine algorithmic analysis with traditional qualitative methods. When AI flags a pattern, validate it through direct user research. When you see a trend in the data, talk to actual users to understand the context.
  3. Interrogate unexpected findings. When AI produces surprising insights, don't accept them at face value. This skepticism isn't about distrusting AI. It's about using it thoughtfully.
  4. Ensure diverse perspectives on your research team. Bias is easier to spot when you have people from different backgrounds reviewing the work. Build research teams that bring varied perspectives and life experiences. They'll be more likely to notice when AI-generated insights don't ring true for certain user segments.
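Parts of step 1 can be sanity-checked automatically. As a hedged sketch (the group labels, target shares, and tolerance threshold are all illustrative), a few lines of Python can flag groups that are underrepresented in a sample relative to your targets:

```python
from collections import Counter

def underrepresented(sample_labels, target_shares, tolerance=0.5):
    """Flag groups whose share in the sample falls below
    tolerance * target share. The 0.5 tolerance is illustrative."""
    counts = Counter(sample_labels)
    total = len(sample_labels)
    flags = []
    for group, target in target_shares.items():
        share = counts.get(group, 0) / total
        if share < tolerance * target:
            flags.append(group)
    return flags

# Hypothetical sample: novices make up 5% vs. a 30% target.
labels = ["expert"] * 45 + ["intermediate"] * 50 + ["novice"] * 5
targets = {"expert": 0.30, "intermediate": 0.40, "novice": 0.30}
print(underrepresented(labels, targets))  # prints ['novice']
```

A check like this won't catch subtle bias, but it makes the most obvious representation gaps visible before they skew your analysis.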

Navigating third-party AI tools

Most research teams don't build their own AI systems. They use third-party tools that come with built-in AI capabilities, which creates an additional layer of privacy and ethical considerations. Before adopting any AI-powered research tool, you need to understand the vendor's data practices. Not all vendors handle data the same way; choose partners who take privacy seriously.

Stay current with regulations

Data privacy regulations are evolving rapidly. GDPR, CCPA, and emerging laws around AI governance create complex compliance requirements. Ensure your AI-powered research practices align with relevant regulations in the jurisdictions where you operate. This isn't just about legal compliance; it's about respecting user rights.

The Most Important Ethical AI Component: Human Judgment

Here's what ties all of these considerations together: Human judgment must remain central to AI-powered research. AI can process data faster than any human, but it can't recognize when an algorithm is producing biased results or understand the ethical implications of a particular insight. These responsibilities fall to human researchers. And they can't be automated.

At Optimal, we believe AI should enhance research capabilities while respecting user privacy and maintaining ethical standards. That's why we're committed to transparent data practices, secure infrastructure, and tools that put researchers in control. Because the goal isn't just better insights. It's better insights achieved responsibly.


The Latest from Optimal Interviews: Automating Insights and Building a Research Repository

Since launching Optimal Interviews in December, we've been tracking closely as research, product, and design teams put it to the test. The tool is driving a real transformation in workflows, and we’re energized by the feedback so far.

  • “What took me manually 3 weeks to analyze 4 years ago, with the AI functionality, now took me less than 5 minutes. It’s crazy!”
  • “This changes everything for how we work with interview data.”
  • “The insights were spot on, and I was impressed by how well the tool understood the themes in the interview.”
  • "I tried it for the first time this week. I was impressed by the amount of insights." 

Optimal Interviews was built to remove the friction from one of research's most time-intensive steps: analyzing interview recordings. With automated transcription, AI-generated insights, highlight reels, summaries, and citations, the tool transforms hours of manual review into something that happens in minutes.

But we’re not done yet. We’re constantly building and evolving based on your feedback. With the latest releases like automatic recording, every session can now be captured and stored automatically, helping teams build a centralized user research repository and supporting continuous research.

Here’s a look at how teams are using Optimal Interviews, the latest work in this space, and where we’re headed.

How Teams Are Using Optimal Interviews

Researchers across industries are leveraging Optimal Interviews in a variety of ways. Here are just a few examples from current users:

  • Understanding customer interactions with voice assistants and AI to inform user experience and product development.

  • Studying habits, purchasing patterns, and customer frustrations to optimize experiences and conversions.

  • Evaluating how users navigate and interact with customer-facing websites to improve user experience.

  • Gathering feedback from employees about internal tools and systems to improve workplace efficiency and satisfaction.

Recent Enhancements: New Features for More Automation

It’s been a busy few months, and we’ve shipped several meaningful updates. Here’s what’s new:

1. Multilanguage Support for Global Research


Optimal Interviews now supports 13 languages, automatically detecting and transcribing interviews in their original languages. AI Chat is also ready to assist your team in these languages, ensuring a seamless experience no matter what language your team is using.

2. Video Conferencing Integrations


Sync Optimal Interviews with your Google Meet, Zoom, or Microsoft Teams account to automatically generate and attach meeting links to sessions scheduled with the Optimal scheduler.

3. Automatic Recording


You can choose to automatically record and upload sessions scheduled through Optimal, eliminating the need for manual uploads. Sessions can now be captured and stored automatically, enabling teams to conduct continuous research. Accumulate insights over time in a central repository, where they remain always accessible and ready to be explored further with AI Chat.

4. Custom Topics


Custom topics allow you to define specific areas of interest for AI to focus on for interview insights. As more recordings are added, the tool will automatically generate insights based on these topics, so you can easily filter and focus on the data that matters most to you.



What’s Next for Optimal Interviews


Our ultimate goal? To keep finding ways to reduce manual effort. Let Optimal streamline your research workflow, automate time-consuming tasks, and help you build out your qualitative research repository.

We have a number of significant additions in development, including:

Calendar Integrations


Sync your and your team’s calendars (Google and Microsoft) with Optimal Interviews so you can easily schedule around everyone’s interview availability. Avoid double bookings and get scheduled sessions automatically added to your calendar.

Enhanced Privacy & Messaging System


Interviewers and participants will be able to message each other directly through Optimal. This helps protect personal contact details (e.g., email addresses) and reduces unintended bias, such as revealing the study creator’s organization. Teams can coordinate, add clarifications, and follow up more efficiently without exposing personal information.



We'd Love to Hear From You


How are you using Optimal Interviews in your research? What's working well, and what would you like to see us build next?

And if you're just getting started, our Interviews 101 guide is a great place to begin.

Want to learn more about how to harness the full potential of Optimal Interviews and AI Chat? Register for this live training.

Optimal Interviews is updated continuously and shaped with feedback from users. Follow our release notes or share your thoughts via live chat or feature request form to give your feedback and stay in the loop.


From Interview Insights to Action: Using AI Chat to Deliver Findings into Notion, Jira, Linear, and Confluence

User interviews provide some of the richest insights a product team can uncover. But turning hours of recordings and transcripts into clear insights can often be slow and manual without the right tools.

With automated insights and AI Chat in Optimal Interviews, you can accelerate that entire workflow, from extracting insights from interview recordings to transforming them into outputs that fit directly into the tools your team already works in.

Instead of spending hours summarizing transcripts and translating research into stakeholder updates, AI Chat helps you quickly generate structured outputs for documentation, tickets, and decision-making.

Deliver Interview Insights Directly into the Tools Your Team Uses

AI Chat can surface key themes, quotes, and patterns across participant recordings. Once insights are identified, it can quickly transform them into formats your team already uses.

You can control the output by specifying tone, length, structure, and level of detail directly in your prompt. The more explicit you are about the format you want, the better the output.

Simply specify the details of the deliverable you want, and AI Chat can structure the output for documentation, planning, and product tools.

Here’s how teams can use AI Chat with some of the most common product, design, and research tools.

Notion

Notion is used by many teams for documentation, knowledge bases, product planning, and research repositories.

Example AI Chat prompts

  • Turn these interview insights into a structured Notion research summary with sections for Key Findings, Supporting Quotes, and Recommendations.
  • Create a Notion page outline summarizing onboarding interview insights with headings and bullet points.

Jira

Jira is a widely used issue tracking and project management platform that product and engineering teams rely on to manage work, track bugs, and plan development tasks.

Research insights often lead directly to product improvements, and AI Chat can translate insights into actionable tickets.

Example AI Chat prompts

  • Convert these interview insights into three Jira tickets including title, description, and acceptance criteria.
  • Turn this usability issue into a Jira bug ticket.
  • Create a Jira epic summarizing onboarding improvements suggested by interview feedback.
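If you want to take the output one step further and file tickets programmatically, Jira's REST API accepts new issues as a JSON payload (POST /rest/api/2/issue). The sketch below only builds that payload from an insight; the project key and issue type are placeholders, and authentication and the actual HTTP request are left out.

```python
import json

def insight_to_jira_payload(title, description, project_key="UX", issue_type="Task"):
    """Build the JSON body Jira's 'create issue' endpoint expects
    (POST /rest/api/2/issue). Key and type here are placeholders."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": title,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    }

payload = insight_to_jira_payload(
    "Users miss the save button during onboarding",
    "4 of 6 participants could not locate Save on step 2; see interview highlights.",
    issue_type="Bug",
)
print(json.dumps(payload, indent=2))
```

Pairing a payload builder like this with AI Chat's ticket-formatted output turns "convert these insights into tickets" into a copy, review, and submit workflow.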

Linear

Linear is a modern planning and issue tracking tool designed for fast-moving product teams. It’s often used for planning product work, managing projects and engineering tasks, and tracking product improvements.

AI Chat can quickly convert insights into structured Linear issues.

Example AI Chat prompts

  • Convert these insights into Linear.app issues with clear titles, descriptions, and priority levels.
  • Create a Linear.app issue summarizing the navigation problem identified in these interviews.
  • Generate a set of tasks for Linear.app addressing usability problems mentioned by participants.

Confluence

Confluence is a team collaboration and documentation platform used to share knowledge, publish research reports, and maintain internal documentation.

AI Chat can help transform research findings into polished documentation ready for stakeholders.

Example AI Chat prompts

  • Turn these interview insights into a Confluence page with sections for Background, Findings, and Recommendations.
  • Create a Confluence page explaining the usability issues uncovered in onboarding research.
  • Turn opportunities to improve into concise post-it notes, with one key point per note, written in simple, scannable language to use in a Confluence whiteboard.

Best practice tip: For cleaner, copy-and-paste-ready outputs, consider adding “Do not include citations.” to any of these suggested prompts.


Accelerate the Impact of User Research

By combining automated interview insights with AI Chat, teams can quickly move from recordings to structured insights, and share them in formats that resonate with internal teams and stakeholders.

This makes it easier to clearly communicate what users are saying, build alignment across product, design, and engineering, get buy-in, and turn research insights into decisions that teams are ready to support and action.


7 Ways to Use AI Chat to Boost Collaboration in Mural, FigJam, and Miro

Collaboration tools like Mural, FigJam, and Miro are staples of how modern teams brainstorm, map ideas, align on plans, and build together. But a canvas alone can't tell you if you're on the right track or guide you to what comes next when progress stalls. That's where Optimal AI Chat and user insights come in.

By bringing real user insights into the boards your team already works in, you can reduce ambiguity, ground discussions in research, and accelerate decision-making.

Here are 7 ways to use AI Chat alongside your collaboration boards.


1. Align on key objectives

Before your next planning session, use Optimal AI Chat to surface relevant insights from your interview recordings. Add a summary directly into your Mural, Miro, or FigJam board so everyone comes in with the same context and understanding of the objectives. Instead of starting with assumptions, your team can start with real user insights and clear trade-offs to discuss.

Try this prompt: "Summarize the key considerations for [decision topic] and flag any trade-offs we should discuss as a team."


2. Create a user journey map

AI Chat can analyze interview transcripts and video recordings and highlight common jobs to be done, behaviors, and friction points. You can then map those steps visually on your board and identify where the experience breaks down.

Try these prompts: “Summarize the typical jobs to be done for the people we interviewed.”
“For this job you identified [paste job details], detail the journey steps.” 


3. Turn pain points into design and product decisions

AI Chat can analyze recurring themes from your interview recordings and convert them into concrete opportunities your team can explore next. Adding these to your board gives the team a clear starting point rather than a vague list of problems.

Try this prompt:  "Based on these pain points [paste notes or themes], suggest three product improvements we could explore."


4. Sharpen your marketing messaging

Interview insights aren’t just valuable for product, research, and design teams. Marketing teams can also use AI Chat to quickly evaluate messaging, positioning, and customer perception.

When running preference or concept testing interviews, AI Chat can quickly analyze the feedback and suggest positioning directions you can workshop on your board.

Try this prompt: “Suggest positioning options based on the interview feedback.”


5. Facilitate workshops

Running workshops and brainstorming sessions with cross-functional teams can be challenging. Conversations drift, discussions stall, and teams sometimes struggle to focus on the most important issues. 

AI Chat can help you structure the conversation before the workshop even begins by generating discussion guides based on user insights from your interviews. Add the chat outputs directly to your board to guide the session.

Try this prompt: “Generate a structured discussion guide based on the pain points of the interviewees.”


6. Make brainstorming more focused

Open brainstorming can be valuable. It can also be chaotic without clear direction. By leveraging AI Chat, you can guide your brainstorming sessions with intelligent suggestions, topic generation, and idea organization.

Try this prompt: “Generate 10 brainstorm ideas based on these user insights and group them into themes we could explore.”


7. Map complex processes

Visualizing complex processes and systems is easier with tools like Miro, FigJam, and Mural. AI Chat can help break down a process step-by-step, highlighting decisions, dependencies, and potential friction points based on your interviews. Your team can then map these steps visually and identify opportunities for improvement.

Try this prompt: “Create a step-by-step process map for how users complete [task], including key decisions and potential friction points.”


Using Optimal AI Chat for seamless collaboration

The best collaboration happens when teams have the right information at the right time. 

Optimal AI Chat gives your team a jumpstart for your interview analysis: clearer inputs, faster synthesis, and smarter outputs that translate directly into what you're building on your boards.

Whether you're running a workshop, mapping a user journey, or planning a product launch, AI Chat helps you spend less time getting oriented and more time making decisions.

Ready to see what your team can do with it?
Learn more about best practices for AI Chat or book a demo


Speed, Quality, and Flexibility: Optimizing Your User Research Recruitment

Recruiting the right participants is one of the biggest challenges teams face when conducting user research. Poor-quality or disengaged testers can lead to unreliable data, while recruitment bottlenecks, such as long lead times and limited access, can delay studies, reduce research frequency, and slow product development.

Having flexible options helps you keep moving at pace. Whether you bring your own participants, use Optimal’s recruitment services, or leverage external panel providers, Optimal gives you the flexibility to recruit the right user testers consistently and efficiently so you can launch studies faster, run them more frequently, and quickly scale research across multiple projects.

Here’s a breakdown of your recruitment options with Optimal:

1. Invite Your Own Participants for Free

Optimal lets you invite your own participants with a study link, QR code, or intercept snippet at no extra cost, giving you full control over who takes part in your studies.

2. Use Any Panel Provider You Prefer

Optimal works seamlessly with any panel provider, such as User Interviews, Respondent, PureSpectrum, Prolific, Dynata, Askable, and Cint.

How it works:

  1. Create and publish an unmoderated study in Optimal, such as a live site test, prototype test, survey, first-click test, card sort or tree test.
  2. Specify your audience criteria in the panel platform.
  3. Add screener questions in your panel provider and/or Optimal.
  4. Add your Optimal study link into the panel provider platform.
  5. Panel provider recruits participants and manages incentives.
  6. See a participant list in Optimal and review participant metrics like completion rate, time taken, and location breakdown.
  7. Optional: Create segments in Optimal for more targeted insights.
  8. Review insights, results, and analytics in Optimal to make informed research decisions.

Certain panel providers, like User Interviews, offer additional benefits through direct integration with Optimal. You can automate participant tracking and see participant status in real time in your panel provider platform as user testers complete your studies.
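Many panel providers match completions back to the panel by appending a respondent identifier to your study link as a query parameter. As a small illustrative sketch (the URL and the `pid` parameter name are placeholders, not any provider's actual convention):

```python
from urllib.parse import urlencode, urlparse

def build_study_link(base_url: str, participant_id: str, param: str = "pid") -> str:
    """Append a participant identifier to a study link.
    Parameter names vary by provider; 'pid' is a placeholder."""
    sep = "&" if urlparse(base_url).query else "?"
    return f"{base_url}{sep}{urlencode({param: participant_id})}"

link = build_study_link("https://example.com/study/abc123", "R_12345")
print(link)  # prints https://example.com/study/abc123?pid=R_12345
```

Check your panel provider's documentation for the exact parameter name it expects; most platforms substitute each respondent's ID into the link automatically at distribution time.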

3. Use Optimal’s Managed Recruitment Services

For teams that want expert support, Optimal’s Managed Recruitment services tap into multiple panels to access over 20 million participants across 150 countries. Whether you're looking for a broad audience or something highly specific, we can help you find the right people to take part in your study.

Optimal handles the panel selection, incentive management, and criteria refinement. We’ll even review and optimize your screener questions. Get started by submitting your criteria.

4. Use Optimal’s On-Demand Panel

Looking for another quick recruitment solution? You can order user testers instantly inside the Optimal platform. It’s ideal for B2C research and studies with basic demographic requirements, and Optimal takes care of incentives for you.

Recruitment Flexibility and Quality

You’re never locked into a single approach with Optimal. Instead, you can adapt your recruitment strategy to each study, balancing speed, quality, budget, and scale, while using the same research and user insights platform.

From shareable study links to easy panel workflows and expert support when you need it, you can spend less time managing recruitment and more time gathering actionable user insights.



Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.