Learn hub

Get expert-level resources on running research, discovery, and building
an insights-driven culture.


A Look at Rally + Optimal: From Recruitment to Real Insights

A well-maintained participant panel is more than a time-saver. It sets your team up for better research from day one. The people you recruit, and how you track, segment, and manage those relationships over time, directly shape how reliable your findings are.

With the right setup in place, you can move forward with confidence, knowing that these participants meet your criteria, aren’t overused, and can bring fresh perspectives to each study.

A tool like Rally UXR brings that structure to participant recruitment. It helps you build and manage your own participant panel, keep track of consent and contact history, coordinate logistics, and stay on top of all the moving parts. You can also see things like incentive history and email engagement, making it much easier to decide who to invite and when.

But recruitment is just the starting point. The real value comes next: running the research and turning participant feedback into insights you can actually use.

Using Rally + Optimal together


Whether you’re running unmoderated studies, testing designs, navigation, or content, or conducting usability testing calls, having the right research tools in place is critical. If you’re already using Rally, pairing it with Optimal can connect the dots from recruitment through to insights, without adding friction to your workflow.

You can also use Optimal’s on-demand or custom managed recruitment services, though Rally’s strength lies in building your own custom panel and database.

Here’s how to combine Rally and Optimal into a smooth, efficient research workflow.

Start with intentional recruitment


Define your participant criteria in Rally, and use screening questions for more than qualification. Think beyond “does this person qualify?” and start building segments you can reuse: power users vs. casual users, returning users vs. first-timers, people familiar with the old design vs. the new. These segments make it easier to run focused studies and compare results over time.

Build your study in parallel


A simple shift that makes a big difference: don’t wait. Build your study in Optimal before sending invites from Rally. That way, the moment participants are ready, you can drop the study link into Rally and start distribution immediately.

Use the strengths of each platform


Rally handles the relationship and profile management: who's been invited, who's confirmed, who needs a reminder, screener and survey history, consent forms, and more. Optimal handles the research: collecting quantitative and qualitative data, visualising patterns, quantifying usability issues, automating insights with AI, and surfacing the metrics and insights that actually answer your research question. 

With Optimal, you can immediately put Rally-recruited participants into studies including:

  • Prototype testing
  • Live site testing 
  • Card sorting
  • Tree testing
  • First-click testing
  • Surveys

Keep your insights in one place


When it comes to research, scattered insights = lost impact. Using Optimal as your central hub for results, recordings, and analysis makes it easier to share findings with your team and stakeholders, track progress over time, and back up decisions with real evidence. 

The tools you use for recruitment and the tools you use for research aren't just operational choices. They shape your research culture. When recruitment and research are both well-structured, everything runs more smoothly. Teams that invest in structure on both ends of the workflow tend to produce research that's faster, more credible, and more likely to influence decisions. 

Rally and Optimal are powerful on their own. Together, they create a workflow that’s scalable, insight-driven, and built for continuous discovery.

If you're not yet using Optimal, you can start a free trial or book a demo.


Ethical AI Integration in User Research

Artificial intelligence offers remarkable capabilities for UX research. It can process massive datasets, identify patterns humans might miss, and accelerate insights that traditionally took weeks to uncover. But as the adage goes: with great power comes great responsibility.

As research teams increasingly adopt AI-powered tools, we're facing critical questions about data privacy, algorithmic bias, and ethical use of user information. These aren't just philosophical concerns; they're practical challenges that every research team needs to address.

More data, more risk

AI thrives on data. The more information it can access, the better its pattern recognition and predictive capabilities become. For researchers, this creates a fundamental tension. To gain meaningful insights, you need comprehensive user data, but collecting and processing this data creates privacy risks that traditional research methods didn't face at the same scale.

Think about a typical AI-powered analysis:

  • User session recordings processed to identify usability issues
  • Behavioral data analyzed to understand user journeys
  • Interview transcripts processed for sentiment analysis and theme identification

Each of these activities involves handling sensitive user information. Each creates potential exposure points where data could be misused, breached, or processed in ways users didn't anticipate. The question isn't whether you should use AI but rather how to use it responsibly.

Building privacy into your AI research practice

Privacy can't be an afterthought. It needs to be foundational to how you approach AI-powered research.

Collect only the data you actually need. This seems obvious, but AI's hunger for information can encourage overcollection. Before implementing any AI tool, ask: What's the minimum data required to achieve our research goals? Just because you can collect comprehensive behavioral data doesn't mean you should. Be intentional about what you gather and why.

Data security basics become even more critical when AI is involved. Encryption, secure storage, access controls: these aren't optional. But security goes beyond technology. It includes policies around who can access data, how long it's retained, and what happens when a project concludes. AI systems often retain data to improve their algorithms. Make sure you understand your tools' data retention policies and ensure they align with your privacy commitments. Some tools, like Optimal, offer PII redaction on user interviews to help ensure data security and privacy.
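To make the idea of PII redaction concrete, here is a minimal, purely illustrative sketch in Python. This is not Optimal's implementation; production redaction typically pairs patterns like these with named-entity recognition to catch names, addresses, and other identifiers, but simple pattern masking shows the basic shape of the technique:

```python
import re

# Illustrative patterns only: emails and simple US-style phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(transcript: str) -> str:
    """Mask obvious PII before a transcript is stored or analyzed."""
    transcript = EMAIL.sub("[EMAIL]", transcript)
    transcript = PHONE.sub("[PHONE]", transcript)
    return transcript

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
# prints: Reach me at [EMAIL] or [PHONE].
```

Running redaction before data ever reaches an AI model, rather than after, is what keeps the original identifiers out of training and retention pipelines.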

Be transparent with users

Users deserve to know how their data is being used. This goes beyond the standard privacy policy checkbox. When conducting research with AI-powered tools, you need to clearly communicate:

  • What data you're collecting
  • How AI will process that data
  • What insights you're hoping to gain
  • How long you'll retain the information
  • Who else might have access to it

Give users meaningful control. If they're uncomfortable with AI analysis, offer alternatives. If they want their data deleted, make that process straightforward. Transparency builds trust. And trust is the foundation of good research.

The bias problem

Every team that incorporates AI into its research practice needs to be aware that AI systems can perpetuate and amplify bias. Machine learning algorithms learn from training data. If that data contains biased patterns, and most data does, the AI will replicate those biases in its analysis. This can lead to research insights that systematically overlook certain user groups or misinterpret their needs. For researchers, this creates a serious challenge: you're using AI to understand users, but the AI itself might have blind spots that skew your understanding.

Eliminating bias entirely is probably impossible, but you can take concrete steps to minimize its impact.

  1. Diversify your training data. If you're building custom AI models, ensure your training data represents the full diversity of your user base. This includes obvious factors like demographics, but also less visible ones like technical proficiency, language preferences, and usage contexts.
  2. Use multiple analytical approaches. Don't rely solely on AI-generated insights. Combine algorithmic analysis with traditional qualitative methods. When AI flags a pattern, validate it through direct user research. When you see a trend in the data, talk to actual users to understand the context.
  3. Interrogate unexpected findings. When AI produces surprising insights, don't accept them at face value. This skepticism isn't about distrusting AI. It's about using it thoughtfully.
  4. Ensure diverse perspectives on your research team. Bias is easier to spot when you have people from different backgrounds reviewing the work. Build research teams that bring varied perspectives and life experiences. They'll be more likely to notice when AI-generated insights don't ring true for certain user segments.

Navigating third-party AI tools

Most research teams don't build their own AI systems. They use third-party tools that come with built-in AI capabilities. This creates an additional layer of privacy and ethical considerations. Before adopting any AI-powered research tool you need to understand the vendor's data practices. Not all vendors handle data the same way. Choose partners who take privacy seriously.

Stay current with regulations

Data privacy regulations are evolving rapidly. GDPR, CCPA, and emerging laws around AI governance create complex compliance requirements. Ensure your AI-powered research practices align with relevant regulations in the jurisdictions where you operate. This isn't just about legal compliance; it's about respecting user rights.

The Most Important Ethical AI Component: Human Judgment

Here's what ties all of these considerations together: Human judgment must remain central to AI-powered research. AI can process data faster than any human, but it can't recognize when an algorithm is producing biased results or understand the ethical implications of a particular insight. These responsibilities fall to human researchers. And they can't be automated.

At Optimal, we believe AI should enhance research capabilities while respecting user privacy and maintaining ethical standards. That's why we're committed to transparent data practices, secure infrastructure, and tools that put researchers in control. Because the goal isn't just better insights. It's better insights achieved responsibly.


The Latest from Optimal Interviews: Automating Insights and Building a Research Repository

Since launching Optimal Interviews in December, we've been tracking closely as research, product, and design teams put it to the test. The tool is driving a real transformation in workflows, and we’re energized by the feedback so far.

  • “What took me manually 3 weeks to analyze 4 years ago, with the AI functionality, now took me less than 5 minutes. It’s crazy!”
  • “This changes everything for how we work with interview data.”
  • “The insights were spot on, and I was impressed by how well the tool understood the themes in the interview.”
  • "I tried it for the first time this week. I was impressed by the amount of insights." 

Optimal Interviews was built to remove the friction from one of research's most time-intensive steps: analyzing interview recordings. With automated transcription, AI-generated insights, highlight reels, summaries, and citations, the tool transforms hours of manual review into something that happens in minutes.

But we’re not done yet. We’re constantly building and evolving based on your feedback. With the latest releases like automatic recording, every session can now be captured and stored automatically, helping teams build a centralized user research repository and supporting continuous research.

Here’s a look at how teams are using Optimal Interviews, the latest work in this space, and where we’re headed.

How Teams Are Using Optimal Interviews

Researchers across industries are leveraging Optimal Interviews in a variety of ways. Here are just a few examples from current users:

  • Understanding customer interactions with voice assistants and AI to inform user experience and product development.

  • Studying habits, purchasing patterns, and customer frustrations to optimize experiences and conversions.

  • Evaluating how users navigate and interact with customer-facing websites to improve user experience.

  • Gathering feedback from employees about internal tools and systems to improve workplace efficiency and satisfaction.

Recent Enhancements: New Features for More Automation

It’s been a busy few months, and we’ve shipped several meaningful updates. Here’s what’s new:

1. Multilanguage Support for Global Research


Optimal Interviews now supports 13 languages, automatically detecting and transcribing interviews in their original languages. AI Chat is also ready to assist in these languages, ensuring a seamless experience no matter what language your team works in.

2. Video Conferencing Integrations


Sync Optimal Interviews with your Google Meet, Zoom, or Microsoft Teams account to automatically generate and attach meeting links to sessions scheduled with the Optimal scheduler.

3. Automatic Recording


You can choose to automatically record and upload sessions scheduled through Optimal, eliminating the need for manual uploads. This makes continuous research practical: insights accumulate over time in a central repository, where they remain accessible and ready to be explored further with AI Chat.

4. Custom Topics


Custom topics allow you to define specific areas of interest for AI to focus on for interview insights. As more recordings are added, the tool will automatically generate insights based on these topics, so you can easily filter and focus on the data that matters most to you.



What’s Next for Optimal Interviews


Our ultimate goal? To keep finding ways to reduce manual effort. Let Optimal streamline your research workflow, automate time-consuming tasks, and help you build out your qualitative research repository.

We have a number of significant additions in development, including:

Calendar Integrations


Sync your and your team’s calendars (Google and Microsoft) with Optimal Interviews so you can easily schedule sessions around everyone’s interview availability. Avoid double bookings, and have scheduled sessions automatically added to your calendar.

Enhanced Privacy & Messaging System


Interviewers and participants will be able to message each other directly through Optimal. This helps protect personal contact details (e.g., email addresses) and reduces unintended bias, such as revealing the study creator’s organization. Teams can coordinate, add clarifications, and follow up more efficiently without exposing personal information.



We'd Love to Hear From You


How are you using Optimal Interviews in your research? What's working well, and what would you like to see us build next?

And if you're just getting started, our Interviews 101 guide is a great place to begin.

Want to learn more about how to harness the full potential of Optimal Interviews and AI Chat? Register for this live training.

Optimal Interviews is updated continuously and shaped by feedback from users. Follow our release notes to stay in the loop, or share your thoughts via live chat or the feature request form.

Seeing is believing

Explore our tools and see how Optimal makes gathering insights simple, powerful, and impactful.