Optimal Blog
Articles and Podcasts on Customer Service, AI and Automation, Product, and more

At Optimal, we know the reality of user research: you've just wrapped up a fantastic interview session, your head is buzzing with insights, and then... you're staring at hours of video footage that somehow needs to become actionable recommendations for your team.
User interviews and usability sessions are treasure troves of insight, but reviewing hours of raw footage is time-consuming and tedious, and important details are easy to overlook. Too often, valuable user stories never make it past the recording stage.
That's why we’re excited to announce the launch of early access for Interviews, a brand-new tool that saves you time with AI and automation, turns real user moments into actionable recommendations, and provides the evidence you need to shape decisions, bring stakeholders on board, and inspire action.
Interviews, Reimagined
What once took hours of video review now takes minutes. With Interviews, you get:
- Instant clarity: Upload your interviews and let AI automatically surface themes, pain points, opportunities, and other key insights.
- Deeper exploration: Ask follow-up questions, or anything else, via AI chat. Every insight comes with supporting video evidence, so you can back up recommendations with real user feedback.
- Automatic highlight reels: Generate clips and compilations that spotlight the takeaways that matter.
- Real user voices: Turn insight into impact with user feedback clips and videos. Share insights and download clips to drive product and stakeholder decisions.

Groundbreaking AI at Your Service
This tool is powered by AI designed for researchers, product owners, and designers. This isn't just transcription or summarization; it's intelligence tailored to surface the insights that matter most. It's like having a personal AI research assistant, accelerating analysis and automating your workflow without compromising quality. No more endless footage scrolling.
The AI behind Interviews, like all AI in Optimal, is backed by Amazon Bedrock on AWS, so your AI insights are supported by industry-leading protection and compliance.

What’s Next: The Future of Moderated Interviews in Optimal
This new tool is just the beginning. Soon, you’ll be able to manage the entire moderated interview process inside Optimal, from recruitment to scheduling to analysis and sharing.
Here’s what’s coming:
- Recruit users using Optimal’s managed recruitment services.
- View your scheduled sessions directly within Optimal. Link up with your own calendar.
- Connect seamlessly with Zoom, Google Meet, or Teams.
Imagine running your full end-to-end interview workflow, all in one platform. That’s where we’re heading, and Interviews is our first step.
Ready to Explore?
Interviews is available now on our latest Optimal plans with study limits. Start transforming hours of footage into minutes of clarity, and bring your users' voices to the center of every decision. We can't wait to see what you uncover.
Want to learn more and see it in action? Join us for our upcoming webinar on Oct 21st at 12 PM PST.

The Great Debate: Speed vs. Rigor in Modern UX Research
Most product teams treat UX research as something that happens to them: a necessary evil that slows things down or a luxury they can't afford. The best product teams flip this narrative completely. Their research doesn't interrupt their roadmap; it powers it.
"We need insights by Friday."
"Proper research takes at least three weeks."
This conversation happens in product teams everywhere, creating an eternal tension between the need for speed and the demands of rigor. But what if this debate is based on a false choice?
Research that Moves at the Speed of Product
Product development has accelerated dramatically. Two-week sprints are standard. Daily deployment is common. Feature flags allow instant iterations. In this environment, a four-week research study feels like asking a Formula 1 race car to wait for a horse-drawn carriage.
The pressure is real. Product teams make dozens of decisions per sprint, about features, designs, priorities, and trade-offs. Waiting weeks for research on each decision simply isn't viable. So teams face an impossible choice: make decisions without insights or slow down dramatically.
As a result, most teams choose speed. They make educated guesses, rely on assumptions, and hope for the best. Then they wonder why features flop and users churn.
The False Dichotomy
The framing of "speed vs. rigor" assumes these are opposing forces. But the best research teams have learned they're not mutually exclusive; they simply require different approaches for different situations.
We think about research in three buckets, each serving a different strategic purpose:
Discovery: You're exploring a space, building foundational knowledge, understanding the landscape before you commit to a direction. This is where you uncover the problems worth solving and identify opportunities that weren't obvious from inside your product bubble.
Fine-Tuning: You have a direction but need to nail the specifics. What exactly should this feature do? How should it work? What's the minimum viable version that still delivers value? This research turns broad opportunities into concrete solutions.
Delivery: You're close to shipping and need to iron out the final details: copy, flows, edge cases. This isn't about validating whether you should build it; it's about making sure you build it right.
Every week, our product, design, research and engineering leads review the roadmap together. We look at what's coming and decide which type of research goes where. The principle is simple: If something's already well-shaped, move fast. If it's risky and hard to reverse, invest in deeper research.
How Fast Can Good Research Be?
The answer is: surprisingly fast, when structured correctly!
For our teams, how deep we go isn't about how much time we have: it's about how much it would hurt to get it wrong. This is a strategic choice that most teams get backwards.
Go deep when the stakes are high: foundational decisions that affect your entire product architecture, things that would be expensive to reverse, moments where you need multiple stakeholders aligned around a shared understanding of the problem.
Move fast when you can afford to be wrong: incremental improvements to existing flows, things you can change easily based on user feedback, places where you want to ship-learn-adjust in tight loops.
Think of it as portfolio management for your research investment. Save your "big research bets" for the decisions that could set you back months, not days. Use lightweight validation for everything else.
And while good research can be fast, speed isn't always the answer. Some situations genuinely call for deep research that takes time. Save those moments for high-stakes investments like repositioning your entire product, entering new markets, or pivoting your business model. But be cautious of research perfectionism, a real risk with deep research. Perfection is the enemy of progress. Your research team shouldn't be asking "Is this research perfect?" but instead "Is this insight sufficient for the decision at hand?"
The research goal should always be appropriate confidence, not perfect certainty.
The Real Trade-Off
The real choice isn't between speed and rigor; it's between:
- Research that matters (timely, actionable, sufficient confidence)
- Research that doesn't (perfect methodology, late arrival, irrelevant to decisions)
The best research teams have learned to be ruthlessly pragmatic. They match research effort to decision impact. They deliver "good enough" insights quickly for small decisions and comprehensive insights thoughtfully for big ones.
Speed and rigor aren't enemies. They're partners in a portfolio approach where each decision gets the right level of research investment. The teams winning aren't choosing between speed and rigor; they're choosing the appropriate blend for each situation.

UX Masterclass: The Convergence of Product, Design, and Research Workflows
The traditional product development process is a linear one. Research discovers insights, passes the baton to design, who creates solutions and hands off to product management, who delivers requirements to engineering. Clean. Orderly. Completely unrealistic in today's product development lifecycle.
Beyond the Linear Workflow
The old workflow assumed each team had distinct phases that happened in sequence: research first (discover user problems), then design (create the solutions), then product (define the specifications), then engineering (build it). Unfortunately, this linear approach added weeks to timelines and created information loss at every handoff.
Smart product teams are starting to approach this differently, collapsing these phases into integrated workflows:
- Collaborative Discovery. Instead of researchers conducting studies alone, the product trio (PM, designer, researcher) participates together. When engineers join user interviews, they understand context that no requirement document could capture.
- Live Design Validation. Rather than waiting for research reports, designers test concepts weekly. Quick iterations based on immediate feedback replace month-long design cycles.
- Integrated Tooling. Teams use platforms where research data and insights live in one place across the product development lifecycle, from ideation to optimization, eliminating information silos and ensuring insights are shared across teams.
What Collaborative Workflows Look Like in Practice
- Discovery Happens Weekly. Instead of quarterly research projects, teams run continuous user conversations where the whole team participates.
- Design Evolves Daily. There are no waterfall designs handed off to developers, but iterative prototypes tested immediately with users.
- Products Ship Incrementally. Instead of big-bang releases after months of development, product releases small iterations validated every sprint.
- Insights Flow Constantly. Teams don’t wait for learnings at the end of projects, but access real-time feedback loops that give insights immediately.
In leading organizations, these collaborative workflows are already the norm, and we're seeing them more and more across our customer base. The teams managing the shift best are making these changes intentional, rather than letting them happen chaotically.
As product development accelerates, the teams winning aren't those with the best researchers, designers, or product managers in isolation. They're organizations where these teams work together, where expertise is shared, and where the entire team owns the user experience.

Information Architecture vs Navigation: A Practical UX Guide
When we first think of a beautiful website or app design, we rarely think of content structures, labels, and categories. But that's exactly where great design and seamless user experiences begin. Beneath the fancy fonts, layouts, colors, and animations are the real heroes of user-centric design: information architecture and navigation.
Information architecture (IA) is the blueprint of your website or app: it defines how content is organized and arranged to create seamless interactions. And as useful as your information may be, if your navigation is flawed, users won't be able to find it. They'll simply leave your site and look elsewhere.
So, how do information architecture and navigation complement each other to create seamless user experiences?
Understanding Information Architecture (IA)
Information architecture refers to the practice of organizing, structuring, and labeling content and information to enhance the user's understanding and navigation of a website or application. It involves designing an intuitive, user-friendly, and efficient system to help users find and access the information they need easily. Good IA is essential for delivering a positive user experience and ensuring that your users can achieve their goals effectively.
IA is often confused with navigation structure. Navigation is a part of IA, and it refers to the way users move through a website or application. IA involves more than navigation; it encompasses the overall organization, labeling, and structure of content and information.
Three Key Components of IA
There are three key components of IA:
- Organizational structure: Defines how information is organized, including the categories, subcategories, and relationships between them.
- Content structure: The way information is arranged and presented, including the hierarchy of information and the types of content used.
- Navigation structure: Outlines the pathways and components used for navigating through the information, such as menus, links, and search functions.
Navigation: A Vital Element of Information Architecture
Navigation refers to the process of providing users with a means of moving through a website or application to access the information they need. Navigation is an integral part of IA, as it guides users through the organizational structure and content structure of a site, allowing them to find and access the information they require efficiently.
There are several types of navigation, including utility navigation and content navigation. Utility navigation refers to the elements that help users perform specific actions, such as logging in, creating an account, subscribing, or sharing content. Content navigation, on the other hand, refers to the elements used to guide users through the site's content, such as menus, links, and buttons.
Both types of navigation provide users with a roadmap of how the site is organized and how they can access/interact with the information they need. Effective navigation structures are designed to be intuitive and easy to use. The goal is to minimize the time and effort required for users to find and access the information they need.
Key Elements of Effective Navigation
The key elements of effective navigation include clear labeling, logical grouping, and consistency across the site.
- Clear labeling helps users understand what information they can expect to find under each navigation element.
- Logical grouping ensures that related content is grouped together, making it easier for users to find what they need.
- Consistency ensures that users can predict how the site is organized and can find the information they need quickly and easily.
Designing Navigation for a Better User Experience
Since navigation structures need to be intuitive and easy to use, usability testing is central to determining what counts as 'intuitive' in the first place. What you deem intuitive may not be intuitive to your target user.
We’ve discussed how clear labeling, logical grouping, and consistency are key elements for designing navigation, but can they be tested and confirmed? One common usability test is called card sorting. Card sorting is a user research technique that helps you discover how people understand, label and categorize information. It involves asking users to sort various pieces of information or content into categories. Researchers use card sorting to inform decisions about product categorization, menu items, and navigation structures. Remember, researching these underlying structures also informs your information architecture - a key factor in determining good website design.
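Card-sort analysis often starts with a co-occurrence (or similarity) matrix: for every pair of cards, count how many participants placed them in the same group. Here's a minimal sketch in Python; the cards, category labels, and participant data are hypothetical, invented purely for illustration:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card sort results: each participant's groupings,
# mapping their own category label to the cards they placed in it.
sorts = [
    {"Money in": ["Deposits", "Salary"], "Money out": ["Bills", "Transfers"]},
    {"Income": ["Deposits", "Salary"], "Payments": ["Bills", "Transfers"]},
    {"Banking": ["Deposits", "Bills"], "Other": ["Salary", "Transfers"]},
]

# Count how often each pair of cards was sorted into the same group.
# Sorting each pair gives a canonical key regardless of group order.
pair_counts = Counter()
for participant in sorts:
    for group in participant.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Agreement: the fraction of participants who grouped a pair together.
n = len(sorts)
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: {count}/{n} participants")
```

Pairs with high agreement are strong candidates for living under the same category in your IA; pairs that split participants evenly flag labels worth probing further.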
Tree testing is another invaluable research tool for creating intuitive, easy-to-use navigation structures. Tree testing examines how easy it is for your users to find information using a stripped-back, text-only representation of your website - almost like a sitemap. Rather than asking users to sort information, they are asked to perform a navigation task, for example, "where would you find XYZ product on our site?". How easy or difficult users find these tasks gives you a great indication of the strengths and weaknesses of your underlying site structure, which then informs your navigation design.
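At its core, tree-test analysis boils down to a couple of simple per-task metrics: success rate (how many participants reached the correct node) and directness (how many got there without detours). Here's a minimal sketch in Python over hypothetical tree-test records; the task, paths, and the minimal-path-length assumption are all invented for illustration:

```python
# Hypothetical tree-test results: one record per participant per task.
# "path" is the sequence of tree nodes visited; "success" records
# whether the participant ended on the correct node.
results = [
    {"task": "Find XYZ product", "path": ["Home", "Products", "XYZ"], "success": True},
    {"task": "Find XYZ product", "path": ["Home", "Support", "Products", "XYZ"], "success": True},
    {"task": "Find XYZ product", "path": ["Home", "Support", "FAQ"], "success": False},
]

def task_metrics(records, minimal_path_len=3):
    """Success rate: share of participants who found the target.
    Directness: share who succeeded without detours, approximated
    here as taking a path of minimal length (an assumption)."""
    total = len(records)
    successes = [r for r in records if r["success"]]
    direct = [r for r in successes if len(r["path"]) == minimal_path_len]
    return {
        "success_rate": len(successes) / total,
        "directness": len(direct) / total,
    }

print(task_metrics(results))
```

A task with high success but low directness suggests users eventually find the content but the labels lead them down wrong branches first, which is exactly the kind of signal that should feed back into your IA.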
Combine usability testing and the following tips to nail your next navigation design:
- Keep it simple: Simple navigation structures are easier for users to understand and use. Limit the number of navigation links and group related content together to make it easier for users to find what they need.
- Use clear and descriptive labels: Navigation labels should be clear and descriptive, accurately reflecting the content they lead to. Avoid using vague or confusing labels that could confuse users.
- Make it consistent: Consistency across the navigation structure makes it easier for users to understand how the site is organized and find the information they need. Use consistent labeling, grouping, and placement of navigation elements throughout the site.
- Test and refine: Usability testing is essential for identifying and refining navigation issues. Regular testing can help designers make improvements and ensure the navigation structure remains effective and user-friendly.
Best Practices for Information Architecture and Navigation
Both information architecture and navigation design contribute to great user experience (UX) design by making it easier for users to find the information they need quickly and efficiently. Information architecture helps users understand the relationships between different types of content and how to access them, while navigation design guides users through the content logically and intuitively.
In addition to making it easier for users to find information, great information architecture and navigation design can also help improve engagement and satisfaction. When users can find what they're looking for quickly and easily, they're more likely to stay on your website or application and explore more content. By contrast, poor information architecture and navigation design can lead to frustration, confusion, and disengagement.
So, when it comes to information architecture vs navigation, what are the best practices for design? Great navigation structure generally considers two factors: (1) what you want your users to do and (2) what your users want to do. Strike a balance between the two, but ultimately your navigation system should focus on the needs of your users. Be sure to use simple language and remember to nest content into user-friendly categories.
Since great navigation design is typically a result of great IA design, it should come as no surprise that the key design principles of IA focus on similar principles. Dan Brown’s eight design principles lay out the best practices of IA design:
- The principle of objects: Content should be treated as a living, breathing thing. It has lifecycles, behaviors, and attributes.
- The principle of choices: Less is more. Keep the number of choices to a minimum.
- The principle of disclosure: Show a preview of information that will help users understand what kind of information is hidden if they dig deeper.
- The principle of examples: Show examples of content when describing the content of the categories.
- The principle of front doors: Assume that at least 50% of users will use a different entry point than the home page.
- The principle of multiple classifications: Offer users several different classification schemes to browse the site’s content.
- The principle of focused navigation: Keep navigation simple and never mix different things.
- The principle of growth: Assume that the content on the website will grow. Make sure the website is scalable.
Summary: How User-Centered Research Elevates Your Information Architecture and Navigation
Information architecture and navigation are the unsung heroes of website design that work in synchrony to create seamless user experiences. Information architecture refers to the practice of organizing and structuring content and information, while navigation guides users through the site's structure and content. Both are integral to creating intuitive user experiences.
In many ways, navigation and information architecture share the same traits necessary for success. Both require a clear, logical structure, as well as clear labeling and categorization. How well they deliver on these traits often determines how well a website or application meets your users' needs. Of course, IA and navigation designs should be anchored by user research and usability testing, like card sorting and tree testing, to ensure user experiences are as intuitive as possible!
That’s where Optimal comes in. As the world’s most loved user insights platform, Optimal empowers teams across design, product, research, and content to uncover how users think, organize, and navigate information. Tools like Card Sorting and Tree Testing help you validate and refine your IA and navigation structures with real users, so you can move from guesswork to confidence. Ready to turn user behavior into better navigation? Try Optimal for free.

5 Signs It's Time to Switch Your Research Platform
How to Know When Your Current Tool Is Holding You Back
Your research platform should accelerate insights, not create obstacles. Yet many enterprise research teams are discovering their tools weren't built for the scale, velocity, and quality standards that today’s product development demands.
If you're experiencing any of these five warning signs, it might be time to evaluate alternatives.
1. Your Research Team Is Creating Internal Queues
The Problem: When platforms limit concurrent studies, research becomes a first-come-first-served bottleneck and urgent research gets delayed by scheduled projects. In fast-moving businesses, research velocity directly impacts competitiveness. Every queued study is a delayed product launch, a missed market opportunity, or a competitor gaining ground.
The Solution: Enterprise-grade research platforms allow unlimited concurrent studies. Multiple teams can research simultaneously without coordination overhead or artificial constraints. Organizations that remove study volume constraints report 3-4x increases in research velocity within the first quarter of switching platforms.
2. Pricing Has Become Unpredictable
The Problem: When pricing gets too complicated, it becomes unpredictable. Some platforms charge per-participant fees and impose usage caps and seat limits, not to mention other hidden charges. Many pricing models weren't designed for enterprise-scale research; they were designed to maximize per-transaction revenue. When you can't predict research costs, you can't plan research roadmaps. Teams start rationing participants, avoiding "expensive" audiences, or excluding stakeholders from platform access to control costs.
The Solution: Transparent, scalable pricing with unlimited seats that grows with your needs. Volume-based plans that reward research investment rather than penalizing growth. No hidden per-participant markups.
3. Participant Quality Is Declining
The Problem: This is the most dangerous sign because it corrupts insights at the source. Low-quality participants create low-quality data, which creates poor product decisions.
Warning signs include:
- Participants using AI assistance during moderated sessions
- Bot-like response patterns in surveys
- Participants who clearly don't meet screening criteria
- Low-effort responses that provide no actionable insight
- Increasing "throw away this response" rates in your analysis
Poor participant quality isn't just frustrating, it's expensive. Research with the wrong participants produces misleading insights that derail product strategy, waste development resources, and damage market positioning.
The Solution: Multi-layer fraud prevention systems. Behavioral verification. AI-response detection. Real-time quality monitoring. 100% quality guarantees backed by participant replacement policies. When product, design and research teams work with brands that offer 100% participant quality guarantees, they know that they can trust their research and make real business decisions from their insights.
4. You Can't Reach Your Actual Target Audience
The Problem: Limited panel reach forces compromises. Example: You need B2B software buyers but you get anyone who's used software. Research with "close enough" participants produces insights that don't apply to your actual market. Product decisions based on proxy audiences fail in real-world application.
The Solution: Tools like Optimal that offer 10M+ participants across 150+ countries with genuine niche targeting capabilities. Proven Australian market coverage from broad demographics to specialized B2B audiences. Advanced screening beyond basic demographics.
5. Your Platform Hasn't Evolved with Your Needs
The Problem: You chose your platform 3-5 years ago when you were a smaller team with simpler needs. But your organization has grown, research has become more strategic, and your platform's limitations are now organizational constraints. When your tools can't support enterprise workflows, your research function can't deliver enterprise value.
The Solution: Complete research lifecycle support from recruitment to analysis. AI-powered insight generation. Enterprise-grade security and compliance. Dedicated support and onboarding. Integration ecosystems that connect research across your organization.
Why Enterprises Are Switching to Optimal
Leading product, design and research teams are moving to Optimal because it's specifically built to address the pain points outlined above:
- No Study Volume Constraints: Run unlimited concurrent studies across your entire organization
- Transparent, Scalable Pricing: Flexible plans with unlimited seats and predictable costs
- Verified Quality Guarantee: 10M+ participants with multi-layer fraud prevention and 100% quality guarantee
- Enterprise-Grade Platform: Complete research lifecycle tools, AI-powered insights, dedicated support
Next Steps
If you're experiencing any of these five signs, it's worth exploring alternatives. The costs of continuing with inadequate tools (delayed launches, poor data quality, limited research capacity) far outweigh the effort of evaluation.
Start a Free Trial – Test Optimal with your real research projects
Compare Platforms – See detailed capability comparisons
Talk to Our Team – Discuss your specific research needs with Australian experts

5 Alternatives to Askable for User Research and Participant Recruitment
When evaluating tools for user testing and participant recruitment, Askable often appears on the shortlist, especially for teams based in Australia and New Zealand. But in 2025, many researchers are finding Askable’s limitations increasingly difficult to work around: restricted study volume, inconsistent participant quality, and new pricing that limits flexibility.
If you’re exploring Askable alternatives that offer more scalability, higher data quality, and global reach, here are five strong options.
1. Optimal: Best Overall Alternative for Scalable, AI-Powered Research
Optimal is a comprehensive user insights platform supporting the full research lifecycle, from participant recruitment to analysis and reporting. Unlike Askable, which has historically focused on recruitment, Optimal unifies multiple research methods in one platform, including prototype testing, card sorting, tree testing, and AI-assisted interviews.
Why teams switch from Askable to Optimal
1. You can only run one study at a time in Askable
Optimal removes that bottleneck, letting you launch multiple concurrent studies across teams and research methods.
2. Askable’s new pricing limits flexibility
Optimal offers scalable plans with unlimited seats, so teams only pay for what they need.
3. Askable’s participant quality has dropped
Optimal provides access to over 10 million verified participants worldwide, with strong fraud-prevention and screening systems that eliminate low-effort or AI-assisted responses.
Additional advantages
- End-to-end research tools in one workspace
- AI-powered insight generation that tags and summarizes automatically
- Enterprise-grade reliability with decade-long market trust
- Dedicated onboarding and SLA-backed support
Best for: Teams seeking an enterprise-ready, scalable research platform that eliminates the operational constraints of Askable.
2. UserTesting: Best for Video-Based Moderated Studies
UserTesting remains one of the most established platforms for moderated and unmoderated usability testing. It excels at gathering video feedback from participants in real time.
Pros:
- Large participant pool with strong demographic filters
- Supports moderated sessions and live interviews
- Integrations with design tools like Figma and Miro
Cons:
- Higher cost at enterprise scale
- Less flexible for survey-driven or unmoderated studies compared with Optimal
- The UI has become increasingly complex and buggy as UserTesting has expanded its platform through acquisitions such as UserZoom and Validately.
Best for: Companies prioritizing live, moderated usability sessions.
3. Maze: Best for Product Teams Using Figma Prototypes
Maze offers seamless Figma integration and focuses on automating prototype-testing workflows for product and design teams.
Pros:
- Excellent Figma and Adobe XD integration
- Automated reporting
- Good fit for early-stage design validation
Cons:
- Limited depth for qualitative research
- Smaller participant pool
Best for: Design-first teams validating prototypes and navigation flows.
4. Lyssna (formerly UsabilityHub): Best for Fast Design Feedback
Lyssna focuses on quick-turn, unmoderated studies such as preference tests, first-click tests, and five-second tests.
Pros:
- Fast turnaround
- Simple, intuitive interface
- Affordable for smaller teams
Cons:
- Limited participant targeting options
- Narrower study types than Askable
Best for: Designers and researchers running lightweight validation tests.
5. Dovetail: Best for Research Repository and Analysis
Dovetail is primarily a qualitative data repository rather than a testing platform. It’s useful for centralizing and analyzing insights from research studies conducted elsewhere.
Pros:
- Strong tagging and note-taking features
- Centralized research hub for large teams
Cons:
- Doesn’t recruit participants or run studies
- Requires manual uploads from other tools like Askable or UserTesting
Best for: Research teams centralizing insights from multiple sources.
Final Thoughts on Alternatives to Askable
If your goal is simply to recruit local participants, Askable can still meet basic needs. But if you’re looking to scale research in your organization, integrate testing and analysis, and automate insights, Optimal stands out as the best long-term investment. Its blend of global reach, AI-powered analysis, and proven enterprise support makes it the natural next step for growing research teams. You can try Optimal for free here.

A beginner’s guide to qualitative and quantitative research
In the field of user research, every method is either qualitative, quantitative, or both. Understandably, there's some confusion around these two approaches and where the different methods are applicable. This article provides a handy breakdown of the different terms and where and why you'd want to use qualitative or quantitative research methods.
Qualitative research
Let’s start with qualitative research, an approach that’s all about the ‘why’. It’s exploratory and not about numbers, instead focusing on reasons, motivations, behaviors and opinions – it’s best at helping you gain insight and delve deep into a particular problem. This type of data typically comes from conversations, interviews and responses to open questions. The real value of qualitative research is in its ability to give you a human perspective on a research question. Unlike quantitative research, this approach will help you understand some of the more intangible factors – things like behaviors, habits and past experiences – whose effects may not always be readily apparent when you’re conducting quantitative research. A qualitative research question could be investigating why people switch between different banks, for example.
When to use qualitative research
Qualitative research is best suited to identifying how people think about problems, how they interact with products and services, and what encourages them to behave a certain way. For example, you could run a study to better understand how people feel about a product they use, or why people have trouble filling out your sign up form. Qualitative research can be very exploratory (e.g., user interviews) as well as more closely tied to evaluating designs (e.g., usability testing). Good qualitative research questions to ask include:
- Why do customers never add items to their wishlist on our website?
- How do new customers find out about our services?
- What are the main reasons people don’t sign up for our newsletter?
How to gather qualitative data
There’s no shortage of methods to gather qualitative data, which commonly takes the form of interview transcripts, notes and audio and video recordings. Here are some of the most widely-used qualitative research methods:
- Usability test – Test a product with people by observing them as they attempt to complete various tasks.
- User interview – Sit down with a user to learn more about their background, motivations and pain points.
- Contextual inquiry – Learn more about your users in their own environment by asking them questions before moving on to an observation activity.
- Focus group – Gather 6 to 10 people for a forum-like session to get feedback on a product.
How many participants will you need?
You don’t often need large numbers of participants for qualitative research – the typical range is somewhere between 5 and 10 people. You’ll likely require more if you're focusing your work on specific personas, in which case you may need to study 5-10 people for each persona. While this may seem quite low, consider the research methods you’ll be using. Carrying out large numbers of in-person research sessions requires a significant time investment in terms of planning, actually hosting the sessions and analyzing your findings.
Quantitative research
On the other side of the coin you’ve got quantitative research. This type of research is focused on numbers and measurement, gathering data that can be transformed into statistics. Given that quantitative research is all about generating data that can be expressed in numbers, there are multiple ways you can make use of it. Statistical analysis means you can pull useful facts from your quantitative data – for example trends, demographic information and differences between groups. It’s an excellent way to get a snapshot of your users. A quantitative research question could involve investigating the number of people that upgrade from a free plan to a paid plan.
When to use quantitative research
Quantitative research is ideal for understanding behaviors and usage. In many cases it's a lot less resource-heavy than qualitative research, because you don't need to pay incentives or spend time scheduling sessions. With that in mind, you might do some quantitative research early on to better understand the problem space, for example by running a survey on your users. Here are some examples of good quantitative research questions to ask:
- How many customers view our pricing page before making a purchase decision?
- How many customers search versus navigate to find products on our website?
- How often do visitors on our website change their password?
How to gather quantitative data
Commonly, quantitative data takes the form of numbers and statistics.
Here are some of the most popular quantitative research methods:
- Card sorts – Find out how people categorize and sort information on your website.
- First-click tests – See where people click first when tasked with completing an action.
- A/B tests – Compare 2 versions of a design in order to work out which is more effective.
- Clickstream analysis – Analyze aggregate data about website visits.
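To make the A/B testing idea concrete, here is a minimal sketch of how you might check whether two design variants really differ in conversion rate, using a standard two-proportion z-test. It uses only Python's standard library; the conversion counts are hypothetical numbers chosen for illustration, not data from any real study.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a / conv_b: number of conversions in each variant
    n_a / n_b: number of participants shown each variant
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: variant B converts 120/1000 visitors vs. A's 100/1000.
z, p = two_proportion_ztest(100, 1000, 120, 1000)
```

With these made-up numbers the difference (10% vs. 12%) is not statistically significant at the usual 5% threshold, which is exactly why quantitative methods tend to need large samples – small, real differences are easy to miss with too few participants.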
How many participants will you need?
While you only need a small number of participants for qualitative research, you need significantly more for quantitative research – it’s all about quantity. With more participants, you can generate more reliable data to analyze, and in turn you’ll have a clearer understanding of your research problem. This means quantitative research can involve gathering data from thousands of participants through an A/B test, or from around 30 through a card sort. Read more about the right number of participants to gather for your research.
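Where do figures like "thousands of participants" come from? The back-of-the-envelope calculation below sketches one common approach: the standard normal-approximation sample-size formula for comparing two proportions at 95% confidence and 80% power. The baseline rate and lift are hypothetical values chosen for illustration.

```python
import math

def sample_size_per_group(p_base, lift, z_alpha=1.96, z_power=0.84):
    """Approximate participants needed per variant to detect an absolute
    lift in conversion rate, at 95% confidence and 80% power
    (normal-approximation formula for two proportions)."""
    p_new = p_base + lift
    p_avg = (p_base + p_new) / 2
    n = ((z_alpha * math.sqrt(2 * p_avg * (1 - p_avg))
          + z_power * math.sqrt(p_base * (1 - p_base) + p_new * (1 - p_new))) ** 2
         / lift ** 2)
    return math.ceil(n)

# Hypothetical: detect a 2-point lift on a 10% baseline conversion rate.
n = sample_size_per_group(0.10, 0.02)
```

For this example the formula lands in the high thousands per variant – which is why A/B tests routinely run on traffic volumes far beyond what any interview study could recruit.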
Mixed methods research
While there are certainly times when you’d only want to focus on qualitative or quantitative data to get answers, there’s significant value in utilizing both methods on the same research project. Interestingly, a number of research methods will generate both quantitative and qualitative data. Take surveys as an example: a survey could include questions that require written answers from participants as well as questions that require participants to select from multiple choices.
Looking back at the earlier example of how people move from a free plan to a paid plan, applying both research approaches to the question will yield a more robust, holistic answer. You’ll know why people upgrade to the paid plan in addition to how many. You can read more about mixed methods research in our article on the topic.
Where to from here?
Now that you know the difference between qualitative and quantitative research, the best way to build confidence is to start testing. Hands-on experience is the fastest path to deeper insight. At Optimal, we make it easy to run your first study, no matter your role or research experience.
- Explore our 101 guides to user research
- Start a free trial and discover insights that drive real business impact
- How to encourage people to participate in your study – Finding willing participants can seem like one of the hardest parts of conducting research. It’s actually not that difficult.