Ella Stoner: A three-step tool to help designers break down the barriers of technical jargon

Designing in teams with different stakeholders can be incredibly complex. Each person looks at projects through their own lens, and can potentially introduce jargon and concepts that are confusing to others. Simplicity advocate Ella Stoner knows this scenario all too well. It’s what led her to create an easy three-step tool for recognizing problems and developing solutions. By getting everyone on the same page and creating an understanding of what the simplest solution is, designers can create products with customer needs in mind.

Ella’s background

Ella Stoner is a CX Designer at Spark in New Zealand. She is a creative thought leader and a talented designer who has facilitated over 50 Human-Centered Design workshops. Ella and her team have developed a cloud product that enables businesses to connect with public cloud services such as Amazon, Google, and Azure in a human-centric way. She brings a simplicity-first approach to her work that is reflected in her UX New Zealand talk: cutting out complex details to establish an agreed starting point that is easily understood by all team members.

Contact Details:

You can find Ella on LinkedIn.

Improving creative confidence 🤠

Ella is confident that she is not the only designer who has felt overwhelmed by technical and industry-specific jargon in product meetings. For example, on Ella’s first day as a designer at Spark, she attended a meeting about an HSNS (High Speed Network Services) tool. Ella used context clues to try to work out what HSNS could mean, but as the meeting went on, the technical and industry-specific terms piled on top of each other and she struggled to follow what was being said. At one point Ella asked the team to clarify the mysterious term:

“What’s an HSNS and why would the customer use it?” she asked. Much to her surprise, the room went completely silent. The team struggled to answer a basic question about a term that had appeared to be common knowledge during the meeting. There’s a saying: “Why do something simply when you can make it as complicated as possible?” This happens all too often: people and teams struggle to communicate with each other, and the result is projects and products that customers don’t understand and can’t use. Ella’s In A Nutshell tool is designed to cut through all that. It creates a base-level starting point that’s understood by all, cuts out jargon, and puts the focus squarely on the customer. It:

  • condenses language and jargon down to its simplest form
  • translates everything into common language
  • flips it back to the people who’ll be using it.

Here’s how it works:

First, you complete this phrase as it pertains to your work: “In a nutshell, (project/topic) is (describe what the project or topic is in a few words), that (state what the project/topic does) for (indicate key customers/users and why).” For this method to work, each of the four gaps you fill in must be simple and understandable. All acronyms, complex language, and technical jargon must be avoided. In a literal sense, anyone reading the statement should be able to understand what is being said “in a nutshell.” When you’ve done this, you’ll have a statement that can act as a guide for the goals your project aims to achieve.
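To make the fill-in-the-gaps structure concrete, here is a minimal sketch in Python (purely illustrative, not part of Ella’s toolkit) that assembles the statement from the four gaps and flags anything that looks like an acronym so the team can reword it. The HSNS description below is made up for the example:

```python
import re

def in_a_nutshell(topic, what_it_is, what_it_does, who_for):
    """Assemble the 'In a Nutshell' statement from its four gaps."""
    statement = (
        f"In a nutshell, {topic} is {what_it_is}, "
        f"that {what_it_does} for {who_for}."
    )
    # Flag runs of two or more capital letters as likely acronyms/jargon
    # so the team can replace them with plain language.
    jargon = re.findall(r"\b[A-Z]{2,}\b", statement)
    if jargon:
        print("Possible jargon to reword:", ", ".join(jargon))
    return statement

print(in_a_nutshell(
    "HSNS",
    "a high-speed network connection",
    "moves large amounts of business data quickly and reliably",
    "companies whose day-to-day work depends on a fast, stable network",
))
```

Running it flags “HSNS” itself, which is exactly the point: the topic name is often the first piece of jargon to translate.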

Why it matters 🤔

Applying the “In A Nutshell” tool doesn’t take long. However, it's important to write this statement as a team. Ideally, it’s best to write the statement at the start of a project, but you can also write it in the middle if you need to create a reference point, or any time you feel technical jargon creeping in.

Here’s what you’ll need to get started:

  • People with three or more role types (this accommodates varying perspectives to ensure it’s as relevant as possible)
  • A way to capture text (e.g. a whiteboard, Slack channel, or Miro board)
  • An easy voting system (e.g. a thumbs up in a chat)

Before you start, you may need to pitch the idea to someone in a technical role. If you’re feeling lost or confused, chances are someone else will be too. Breaking down the technical concepts into easy-to-understand and digestible language is of utmost importance:

  1. Explain the formula to the team.
  2. Individually brainstorm possible answers for each gap for three minutes.
  3. Put every idea up on the board or channel and vote on the best one.

Use the most popular answers as your final “In a Nutshell” statement.
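If the brainstorm happens in a chat channel or on a Miro board, tallying the votes is simple enough to automate. Here is a minimal sketch, with invented options and vote counts, that picks the most popular answer for each gap:

```python
# Invented example: thumbs-up counts for the candidate answers
# to each gap of the "In a Nutshell" statement.
votes = {
    "what it is": {
        "a high-speed network connection": 9,
        "an enterprise connectivity product": 4,
    },
    "what it does": {
        "moves business data quickly and reliably": 11,
        "provides high-speed network services": 3,
    },
    "who it's for": {
        "companies that depend on a fast, stable network": 8,
        "large enterprise customers": 6,
    },
}

for gap, options in votes.items():
    winner = max(options, key=options.get)
    print(f"{gap}: {winner} ({options[winner]} votes)")
```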

Side note: Keep all the options that come through the brainstorm. They can still be useful in the design process to help form a full picture of what you’re working on, what it should do, who it should be for, and so on.


Our latest feature, session replay, has landed 🥳

What is session replay?

Session replay allows you to record participants completing a card sort without the need for plug-ins or integrations. This great new feature captures each participant’s interactions and creates a recording that you can view in your own time. It’s a great way to identify where users may have struggled to categorize information and to correlate those moments with the insights you find in your data.

Watch the video 📹 👀

How does session replay work?

  • Session replay records interactions with the study and nothing else. It does not include audio or face recording in the first release, but we’re working on that for the future.
  • There is no set-up or plug-in required; you control the use of session replay in the card sort settings.
  • For enterprise customers, the account admin will be required to turn this feature on for teams to access.
  • Session replay is currently only available on card sort, but it’s coming soon to other study types.

Help article 🩼


Guide to using session replay

How do you activate session replay?

To activate session replay, create a card sort or open an existing card sort that has not yet been launched. Click on ‘set up,’ then ‘settings’; here you will see the option to turn on session replay for your card sort. This feature is off by default, and you must turn it on for each card sort.

How do I view a session replay?

To view a session replay of a card sort, go to Results > Participants > Select a participant > Session replay. 

I can't see session replay in the card sort settings 👀

If this is the case, you will need to reach out to your organization’s account admin and ask for the feature to be activated at an organizational level. The organization admin can enable or disable session replay by navigating to Settings > Features > Session Replay, where it can be toggled on or off.


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. They would seem to be almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple sketch using pen and paper to simulate an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

Image: Different ways of designing paper prototypes, using OptimalSort as an example

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any distinct judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast: pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it: paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity: both from the product teams participating in their design and from the users. They require users to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure: paper prototypes and user-centered design go hand in hand. Introducing real people into your design process as early as possible can help verify whether you are on the right track, and generate feedback that gives you a good idea of whether your idea is likely to succeed.

Disadvantages 😬

  • They’re not as polished as interactive prototypes: if executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited: digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
  • They require facilitation: with an interactive prototype you can assign users tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator to communicate next steps and ensure participants understand the task at hand.
  • Their results have to be interpreted carefully: paper prototypes can’t emulate the final experience entirely, so it is important to interpret findings with their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives: first, to benchmark the current experience on laptops and tablets and identify ways to improve the current interface; and second, to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices. Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 per cent of our users utilize the app.

We decided to start from the ground up and test an entirely new design using paper prototypes. In the spirit of testing early and often, we jumped right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set.

The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with using paper prototypes was determined by the specific setup of our testing sessions, or simply by their pure usefulness as a research technique is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of what exactly a card sort required. Therefore there is no guarantee that we would have achieved the same level of success in testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure, we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.



Behind the scenes of UX work on Trade Me's CRM system

We love getting stuck into scary, hairy problems to make things better here at Trade Me. One challenge for us in particular is how best to navigate customer reaction to any change we make to the site, the app, the terms and conditions, and so on. Our customers are passionate both about the service we provide — an online auction and marketplace — and its place in their lives, and are rightly forthcoming when they're displeased or frustrated. We therefore rely on our Customer Service (CS) team to give customers a voice, and to respond with patience and skill to customer problems ranging from incorrectly listed items to reports of abusive behavior.

The CS team uses a Customer Relationship Management (CRM) system, Trade Me Admin, to monitor support requests and manage customer accounts. As the spectrum of Trade Me's services and the complexity of the public website have grown rapidly, the CRM system has, to be blunt, been updated in ways which have not always been the prettiest. Links for new tools and reports have simply been added to existing pages, and old tools for services we no longer operate have not always been removed. Thus, our latest focus has been to improve the user experience of the CRM system for our CS team.

And though on the surface it looks like we're working on a product with only 90 internal users, our changes will have flow-on effects to tens of thousands of our members at any given time (from a total number of around 3.6 million members).

The challenges of designing customer service systems

We face unique challenges designing customer service systems. Robert Schumacher from GfK summarizes these problems well. I’ve paraphrased him here and added an issue of my own:

1. Customer service centres are high volume environments — Our CS team has thousands of customer interactions every day, and each team member travels similar paths in the CRM system.

2. Wrong turns are amplified — With so many similar interactions, a system change that adds a minute more to processing customer queries could slow down the whole team and result in delays for customers.

3. Two people relying on the same system — When the CS team takes a phone call from a customer, the CRM system is serving both people: the CS person who is interacting with it, and the caller who directs the interaction. Trouble is, the caller can't see the paths the system is forcing the CS person to take. For example, in a previous job a client’s CS team would always ask callers two or three extra security questions — not to confirm identities, but to cover up the delay between answering the call and the right page loading in the system.

4. Desktop clutter — As a result of the plethora of tools and reports and systems, the desktop of the average CS team member is crowded with open windows and tabs. They have to remember where things are and also how to interact with the different tools and reports, all of which may have been created independently (i.e. they work differently). This presents quite the cognitive load.

5. CS team members are expert users — They use the system every day, and will all have their own techniques for interacting with it quickly and accurately. They've also probably come up with their own solutions to system problems, which they might be very comfortable with. As Schumacher says, 'A critical mistake is to discount the expert and design for the novice. In contact centers, novices become experts very quickly.'

6. Co-design is risky — Co-design workshops, where the users become the designers, are all the rage, and are usually pretty effective at getting great ideas quickly into systems. But expert users almost always end up regurgitating the system they're familiar with, as they've been trained by repeated use of systems to think in fixed ways.

7. Training is expensive — Complex systems require more training, so if your call centre has high churn (ours doesn’t – most staff stick around for years) then you’ll be spending a lot of money.

…and the one I’ve added:

8. Powerful does not mean easy to learn — The ‘it must be easy to use and intuitive’ design rationale is often the cause of badly designed CRM systems. Designers mistakenly design something simple when they should be designing something powerful. Powerful is complicated, dense, and often less easy to learn, but once mastered lets staff really motor.

Our project focus

Our improvement of Trade Me Admin is focused on fixing the shattered IA and restructuring the key pages to make them perform even better, bringing them into a new code framework. We're not redesigning the reports, tools, code or even the interaction for most of the reports, as this will be many years of effort. Watching our own staff use Trade Me Admin is like watching someone juggling six or seven things.

The system requires them to visit multiple pages, hold multiple facts in their head, pattern- and problem-match across those pages, and follow their professional intuition to get to the heart of a problem. Where the system works well is on some key, densely detailed hub pages. Where it works badly, staff have to navigate click farms with arbitrary link names, type directly into the URL to get to hidden reports, and generally expend more effort on finding the answer than on comprehending the answer.

Groundwork

The first thing that we did was to sit with CS and watch them work and get to know the common actions they perform. The random nature of the IA and the plethora of dead links and superseded reports became apparent. We surveyed teams, providing them with screen printouts and three highlighter pens to colour things as green (use heaps), orange (use sometimes) and red (never use). From this, we were able to immediately remove a lot of noise from the new IA. We also saw that specific teams used certain links but that everyone used a core set. Initially focussing on the core set, we set about understanding the tasks under those links.
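The highlighter exercise boils down to a simple tally per link. As a rough illustration (the link names and votes below are invented, not Trade Me's actual data), the logic for deciding what to drop from the new IA might look like this:

```python
from collections import Counter

# Invented ratings from the highlighter exercise: one colour per
# respondent per link (green = use heaps, orange = sometimes, red = never).
ratings = {
    "Member lookup":       ["green", "green", "orange", "green"],
    "SQL Lookup":          ["orange", "red", "green", "orange"],
    "Legacy billing tool": ["red", "red", "red", "red"],
}

for link, colours in ratings.items():
    tally = Counter(colours)
    if tally["green"] == 0 and tally["orange"] == 0:
        verdict = "remove from the new IA"
    elif tally["green"] >= len(colours) / 2:
        verdict = "core set - focus here first"
    else:
        verdict = "used by specific teams - keep, lower priority"
    print(f"{link}: {dict(tally)} -> {verdict}")
```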

The complexity of the job soon became apparent: with a complex system like Trade Me Admin, it is possible to do the same thing in many different ways. Most CRM systems are complex and detailed enough for there to be more than one way to achieve the same end, and often it’s not possible to get a definitive answer, only to ‘build a picture’. There’s no one-to-one mapping of task to link. Links were also often arbitrarily named, ‘SQL Lookup’ being one example. The highly trained user base is dependent on muscle memory to find these links. This meant that when asked something like “What and where is the policing enquiry function?”, many couldn’t tell us what or where it was, but when they needed the report it contained, they found it straight away.

Sort of difficult

Therefore, it came as little surprise that staff found the subsequent card sort task quite hard. We renamed the links to better describe their associated actions, and of course, they weren't in the same location as in Trade Me Admin. So instead of taking the predicted 20 minutes, the sort was taking upwards of 40 minutes. Not great when staff are supposed to be answering customer enquiries!

We noticed some strong trends in the results, with links clustering around some of the key pages and tasks (like 'member', 'listing', 'review member financials', and so on). The results also confirmed something that we had observed — that there is a strong split between two types of information: emails/tickets/notes and member info/listing info/reports.

We built and tested two IAs

Screenshot: pietree results from the tree tests

After card sorting, we created two new IAs, and then customized one of them for each of the three CS teams, giving us a set of IAs to test. Each team was then asked to complete two tree tests, with 50% doing one first and 50% doing the other first. At first glance, the results of the tree tests were okay — around 61% — but ‘could try harder’. We saw very little overall difference between the success of the two structures, but definitely some differences in task success. We also came across an interesting quirk in the results.

Closer analysis of the pie charts with an expert in Trade Me Admin showed that some ‘wrong’ answers would give part of the picture required — in some cases so much so that I reclassified them as ‘correct’, since they were more right than wrong. Typically, in a real-world situation, staff might check several reports in order to build a picture. This ambiguous nature is hard to replicate in a tree test, which wants definitive yes-or-no answers. Keeping the tasks both simple to follow and comprehensive proved harder than we expected.

For example, we set a task that asked participants to investigate whether two customers had been bidding on each other's auctions. When we looked at the pietree (see the screenshot above), we noticed some participants had clicked on 'Search Members', thinking they needed to locate the customer accounts, when the task had presumed that the customers had already been found. This is a useful insight into writing more comprehensive tasks that we can take with us into our next tests.

What’s clear from the analysis is that although it’s possible to provide definitive answers for a typical site’s IA, for a CRM like Trade Me Admin this is a lot harder. Devising and testing the structure of a CRM has proved a challenge for our highly trained audience, who are used to the current system and naturally find it difficult to see and do things differently. Once we had reclassified some of the answers as ‘correct’, one of the two trees was a clear winner — it had gone from 61% to 69%. The other tree had only improved slightly, from 61% to 63%.
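The effect of reclassifying answers is easy to see as a quick calculation. Here is a sketch with invented node labels, but using the same headline numbers as our winning tree (61 of 100 participants strictly correct, rising to 69 once partially correct destinations are accepted):

```python
def success_rate(chosen_nodes, accepted):
    """Share of participants whose chosen node counts as a success."""
    hits = sum(1 for node in chosen_nodes if node in accepted)
    return hits / len(chosen_nodes)

# Invented destinations for 100 participants on one task.
chosen = (["Member > Financial summary"] * 61
          + ["Member > Account history"] * 8
          + ["Search Members"] * 31)

strictly_correct = {"Member > Financial summary"}
# After review with an expert, 'Account history' also gives part of the
# picture, so it is reclassified as an acceptable answer.
also_acceptable = strictly_correct | {"Member > Account history"}

print(f"Original scoring:    {success_rate(chosen, strictly_correct):.0%}")
print(f"After reclassifying: {success_rate(chosen, also_acceptable):.0%}")
```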

There were still elements of the winning structure that performed sub-optimally, though. Generally, the problems were to do with labelling: in some cases we had attempted to disambiguate those ‘SQL lookup’-type labels but, in the process, confused the team. We were left with a dilemma: go with the new labels and make the system initially harder to use for existing staff but easier to learn for new staff, or stick with the old labels, which are harder to learn. My view is that any new system is going to see an initial performance dip, so we might as well change the labels now and make it better.

This highlighted the importance of carefully structuring questions in a tree test, particularly in light of the ‘start anywhere/go anywhere’ nature of a CRM. Because a CRM is diffuse but powerful, tree test answer options need careful consideration, in order to decide how close to a ‘100% correct answer’ you want to get.

Development work has begun, so watch this space

It's great to see that our research is influencing the next stage of the CRM system, and we're looking forward to seeing it go live. Of course, our work isn't over — and nor would we want it to be! Alongside the redevelopment of the IA, I've been redesigning the key pages from Trade Me Admin, and continuing to conduct user research, including first-click testing using Chalkmark.

This project has been governed by a steadily developing set of design principles, focused on complex CRM systems and the specific needs of their audience. Two of these principles are to reduce navigation and to design for experts, not novices, which means creating dense, detailed pages. It's intense, complex, and rewarding design work, and we'll be exploring this exciting space in more depth in upcoming posts.


Optimal vs Dovetail: Why Smart Product Teams Choose Unified Research Workflows

Research teams increasingly struggle with tool proliferation, using one platform for surveys, another for usability testing, a third for participant recruitment, then importing everything into analysis tools like Dovetail. This disjointed approach creates data, reporting, and workflow inefficiencies for teams looking to do quick, effective user research. Optimal eliminates these pain points by consolidating recruitment, testing, and analysis into one seamless, easy-to-use platform.

Why Choose Optimal over Dovetail? 

Fragmented Workflow vs. Unified Research Operations

  • Dovetail's Tool Chain Complexity: Dovetail requires teams to coordinate multiple platforms—one for recruitment, another for surveys, a third for usability testing—then import everything for analysis, creating workflow bottlenecks and coordination overhead.
  • Optimal's Streamlined Workflow: Optimal eliminates tool chain management by providing recruitment, testing, and analysis in one platform, enabling researchers to move seamlessly from study design to actionable insights.
  • Context Switching Inefficiency: Dovetail users constantly switch between different tools with different interfaces, learning curves, and data formats, fragmenting focus and slowing research velocity.
  • Focused Research Flow: Optimal's unified interface keeps researchers in flow state, moving efficiently through research phases without context switching or tool coordination.

Data Silos vs. Integrated Intelligence

  • Fragmented Data Sources: Dovetail aggregates data from multiple external sources, but this fragmentation can create inconsistencies, data quality issues, and gaps in analysis that compromise insight reliability.
  • Consistent Data Standards: Optimal's unified platform ensures consistent data collection standards, formatting, and quality controls across all research methods, delivering reliable insights from integrated data sources.
  • Manual Data Coordination: Dovetail teams spend significant time importing, formatting, and reconciling data from different tools before analysis can begin, delaying insight delivery and increasing error risk.
  • Automated Data Integration: Optimal automatically captures and integrates data across all research activities, enabling real-time analysis and immediate insight generation without manual data management.

Limited Data Collection vs. Global Research Capabilities

  • No Native Recruitment: Dovetail's beta participant recruitment add-on lacks the scale and reliability enterprise teams need, forcing dependence on external recruitment services with additional costs and complexity.
  • Global Participant Network: Optimal's 100+ million verified participants across 150+ countries provide comprehensive recruitment capabilities with advanced targeting and quality assurance for any research requirement.
  • Analysis-Only Value: Dovetail's value depends entirely on research volume from external sources, making ROI uncertain for teams with moderate research needs or budget constraints.
  • Complete Research ROI: Optimal delivers immediate value through integrated data collection and analysis capabilities, ensuring consistent ROI regardless of external research dependencies.

Dovetail Challenges

Dovetail may slow teams because of challenges with: 

  • Multi-tool coordination requiring significant project management overhead
  • Data fragmentation creating inconsistencies and quality control challenges
  • Context switching between platforms disrupting research flow and focus
  • Manual data import and formatting delaying insight delivery
  • Complex tool chain management requiring specialized technical knowledge

When Optimal is the Right Choice

Optimal becomes essential for:

  • Streamlined Workflows: Teams needing efficient research operations without tool coordination overhead
  • Research Velocity: Projects requiring rapid iteration from hypothesis to validated insights
  • Data Consistency: Studies where integrated data standards ensure reliable analysis and conclusions
  • Focus and Flow: Researchers who need to maintain deep focus without platform switching
  • Immediate Insights: Teams requiring real-time analysis and instant insight generation
  • Resource Efficiency: Organizations wanting to maximize researcher productivity and minimize tool management

Ready to move beyond fragmented tools to a unified research workflow? Experience how Optimal’s all-in-one platform takes teams from recruitment to actionable insights in one place.


Optimal vs Ballpark: Why Research Depth Matters More Than Surface-Level Simplicity

For many smaller teams, new research tools like Ballpark look appealing with promises of ease of use and quick feedback, but for larger teams, meaningful research that impacts product strategy requires a platform that delivers actionable insights, not just basic metrics. While Ballpark offers surface-level testing, Optimal provides the research depth, analytical rigor, and strategic intelligence that teams need when product decisions truly matter.

Why Choose Optimal over Ballpark?

Surface-Level Feedback vs. Strategic Research Intelligence

  • Ballpark's Shallow Analysis: Ballpark focuses on collecting quick feedback through basic surveys and simple preference tests, but lacks the analytical depth needed to understand why users behave as they do or what actions to take based on findings.
  • Optimal's Strategic Insights: Optimal transforms user feedback into strategic intelligence through advanced analytics, behavioral analysis, and AI-powered insights that reveal not just what happened, but why it happened and what to do about it.
  • Limited Research Methodology: Ballpark's toolset centers on simple feedback collection without comprehensive research methods like advanced card sorting, tree testing, or sophisticated user journey analysis.
  • Complete Research Arsenal: Optimal provides the full spectrum of research methodologies needed to understand complex user behaviors, validate design decisions, and guide strategic product development.

Quick Metrics vs. Actionable Intelligence

  • Basic Data Collection: Ballpark provides simple metrics and basic reporting that tell you what happened but leave teams to figure out the 'why' and 'what next' on their own.
  • Intelligent Analysis: Optimal's AI-powered analysis doesn't just collect data—it identifies patterns, predicts user behavior, and provides specific recommendations that guide product decisions.
  • Limited Participant Insights: Ballpark's 3 million participant panel provides basic demographic targeting but lacks the sophisticated segmentation and behavioral profiling needed for nuanced research.
  • Deep User Understanding: Optimal's 100+ million verified participants across 150+ countries enable precise targeting and comprehensive user profiling that reveals deep behavioral insights and cultural nuances.

Startup Risk vs. Enterprise Reliability

  • Unproven Stability: As a recently founded startup with limited funding transparency, Ballpark presents platform stability risks and uncertain long-term viability for enterprise research investments.
  • Proven Enterprise Reliability: Optimal has successfully launched over 100,000 studies with 99.9% uptime guarantee, providing the reliability and stability enterprise organizations require.
  • Limited Support Infrastructure: Ballpark's small team and basic support options cannot match the dedicated account management and enterprise support that strategic research programs demand.
  • Enterprise Support Excellence: Optimal provides dedicated account managers, 24/7 enterprise support, and comprehensive onboarding that ensures research program success.

When to Choose Optimal

Optimal is the best choice for teams looking for: 

  • Actionable Intelligence: When teams need insights that directly inform product strategy and design decisions
  • Behavioral Understanding: Projects requiring deep analysis of why users behave as they do
  • Complex Research Questions: Studies that demand sophisticated methodologies and advanced analytics
  • Strategic Product Decisions: When research insights drive major feature development and business direction
  • Comprehensive User Insights: Teams needing complete user understanding beyond basic preference testing
  • Competitive Advantage: Organizations using research intelligence to outperform competitors

Ready to move beyond basic feedback to strategic research intelligence? Experience how Optimal's analytical depth and comprehensive insights drive product decisions that create competitive advantage.

