Optimal Blog
Articles and Podcasts on Customer Service, AI and Automation, Product, and more

A year ago, we looked at the user research market and made a decision.
We saw product teams shipping faster than ever while research tools stayed stuck in time. We saw researchers drowning in manual work, waiting on vendor emails, stitching together fragmented tools. We heard "should we test this?" followed by "never mind, we already shipped."
The dominant platforms got comfortable. We didn't.
Today, we're excited to announce Optimal 3.0, the result of refusing to accept the status quo and building the fresh alternative teams have been asking for.
The Problem: Research Platforms Haven't Evolved
The gap between product velocity and research velocity has never been wider. The situation isn't sustainable. And it's not the researcher's fault. The tools are the problem. They’re:
- Built for specialists only - Complex interfaces that gatekeep research from the rest of the team
- Fragmented ecosystems - Separate tools for recruitment, testing, and analysis that don't talk to each other
- Data in silos - Insights trapped study-by-study with no way to search across everything
- Zero integration - Platforms that force you to abandon your workflow instead of fitting into it
These platforms haven't changed because they don't have to, so we set out to challenge them.
Our Answer: A Complete Ecosystem for Research Velocity
Optimal 3.0 isn't an incremental update to the old way of doing things. It's a fundamental rethinking of what a research platform should be.
Research For All, Not Just Researchers.
For 18 years, we've believed research should be accessible to everyone, not just specialists. Optimal 3.0 takes that principle further.
Unlimited seats. Zero gatekeeping.
Designers can validate concepts without waiting for research bandwidth. PMs can test assumptions without learning specialist tools. Marketers can gather feedback without procurement nightmares. Research shouldn't be rationed by licenses or complexity. It should be a shared capability across your entire team.
A Complete Ecosystem in One Place.
Stop stitching together point solutions. Optimal 3.0 gives you everything you need in one platform:
Recruitment Built In
Access millions of verified participants worldwide without the vendor tag. Target by demographics, behaviors, and custom screeners. Launch studies in minutes, not days. No endless email chains. No procurement delays.
Testing That Adapts to You
- Live Site Testing: Test any URL (your production site, staging, or even competitors) without code or developer dependencies
- Prototype Testing: Connect Figma and go from design to insights in minutes
- Mobile Testing: Native screen recordings that capture the real user experience
- Enhanced Traditional Methods: Card sorting, tree testing, first-click tests, the methodologically sound foundations we built our reputation on
Learn more about Live Site Testing
AI-Powered Analysis (With Control)
Interview analysis used to take weeks. We've reduced it to minutes.
Our AI automatically identifies themes, surfaces key quotes, and generates summaries, while you maintain full control over the analysis.
As one researcher told us: "What took me 4 weeks to manually analyze now took me 5 minutes."
This isn't about replacing researcher judgment. It's about amplifying it. The AI handles the busywork, tagging, organizing, timestamping. You handle the strategic thinking and judgment calls. That's where your value actually lives.
Learn more about Optimal Interviews
Chat Across All Your Data
Your research data is now conversational.
Ask questions and get answers instantly, backed by actual video evidence from your studies. Query across multiple Interview studies at once. Share findings with stakeholders complete with supporting clips.
Every insight comes with the receipts. Because stakeholders don't just need insights, they need proof.
A Dashboard Built for Velocity
See all your studies, all your data, in one place. Track progress across your entire team. Jump from question to insight in seconds. Research velocity starts with knowing what you have.
Integration Layer
Optimal 3.0 fits your workflow. It doesn't dominate it. We integrate with the tools you already use (Figma, Slack, your existing tech stack) because research shouldn't force you to abandon how you work.
What Didn't Change: Methodological Rigor
Here's what we didn't do: abandon the foundations that made teams trust us.
Card sorting, tree testing, first-click tests, surveys: the methodologically sound tools that Amazon, Google, Netflix, and HSBC have relied on for years are all still here, better than ever.
We didn't replace our roots. We built on them.
18 years of research methodology, amplified by modern AI and unified in a complete ecosystem.
Why This Matters Now
Product development isn't slowing down. AI is accelerating everything. Competitors are moving faster. Customer expectations are higher than ever.
Research can either be a bottleneck or an accelerator.
The difference is having a platform that:
- Makes research accessible to everyone (not just specialists)
- Provides a complete ecosystem (not fragmented point solutions)
- Amplifies judgment with AI (instead of replacing it)
- Integrates with workflows (instead of forcing new ones)
- Lets you search across all your data (not trapped in silos)
Optimal 3.0 is built for research that arrives before the decision is made. Research that shapes products, not just documents them. Research that helps teams ship confidently because they asked users first.
A Fresh Alternative
We're not trying to be the biggest platform in the market.
We're trying to be the best alternative to the clunky tools that have dominated for years.
Amazon, Google, Netflix, Uber, Apple, and Workday didn't choose us because we're the incumbent. They chose us because we make research accessible, fast, and actionable.
"Overall, each release feels like the platform is getting better." — Lead Product Designer at Flo
"The one research platform I keep coming back to." — G2 Review
What's Next
This launch represents our biggest transformation, but it's not the end. It's a new beginning.
We're continuing to invest in:
- AI capabilities that amplify (not replace) researcher judgment
- Platform integrations that fit your workflow
- Methodological innovations that maintain rigor while increasing speed
- Features that make research accessible to everyone
Our goal is simple: make user research so fast and accessible that it becomes impossible not to include users in every decision.
See What We've Built
If you're evaluating research platforms and tired of the same old clunky tools, we'd love to show you the alternative.
Book a demo or start a free trial
The platform that turns "should we?" into "we did."
Welcome to Optimal 3.0.
5 ways to increase user research in your organization
Co-authored by Brandon Dorn, UX designer at Viget. As user experience designers, making sure that websites and tools are usable is a critical component of our work, and conducting user research enables us to assess whether we're achieving that goal. Even if we want to incorporate research, however, certain constraints may stand in our way.
A few years ago, we realized that we were facing this issue at Viget, a digital design agency, and we decided to make an effort to prioritize user research. Almost two years ago, we shared initial thoughts on our progress in this blog post. We’ve continued to learn and grow as researchers since then and hope that what we’ve learned along the way can help your clients and coworkers understand the value of research and become better practitioners. Below are some of those lessons.
Make research a priority for your organization
Before you can do more research, it needs to be prioritized across your entire organization — not just within your design team. To that end, you should:
- Know what you’re trying to achieve. By defining specific goals, you can share a clear message with the broader organization about what you’re after, how you can achieve those goals, and how you will measure success. At Viget, we shared our research goals with everyone at the company. In addition, we talked to the business development and project management teams in more depth about specific ways that they could help us achieve our goals, since they have the greatest impact on our ability to do more research.
- Track your progress. Once you’ve made research a priority, make sure to review your goals on an ongoing basis to ensure that you’re making progress and share your findings with the organization. Six months after the research group at Viget started working on our goals, we held a retrospective to figure out what was working — and what wasn’t.
- Adjust your approach as needed. You won’t achieve your goals overnight. As you put different tactics into action, adjust your approach if something isn’t helping you achieve your goals. Be willing to experiment and don’t feel bad if a specific tactic isn’t successful.
Educate your colleagues and clients
If you want people within your organization to get excited about doing more research, they need to understand what research means. To educate your colleagues and clients, you should:
- Explain the fundamentals of research. If someone has not conducted research before, they may not be familiar or feel comfortable with the vernacular. Provide an overview of the fundamental terminology to establish a basic level of understanding. In a blog post, Speaking the Same Language About Research, we outline how we established a common vocabulary at Viget.
- Help others understand the landscape of research methods. As designers, we feel comfortable talking about different methodologies and forget that that information will be new to many people. Look for opportunities to increase understanding by sharing your knowledge. At Viget, we make this happen in several ways. Internally, we give presentations to the company, organize group viewing sessions for webinars about user research, and lead focused workshops to help people put new skills into practice. Externally, we talk about our services and share knowledge through our blog posts. We are even hosting a webinar about conducting user interviews in November and we'd love for you to join us.
- Incorporate others into the research process. Don't just tell people what research is and why it's important — show them. Look for opportunities to bring more people into the research process. Invite people to observe sessions so they can experience research firsthand or have them take on the role of the notetaker. Another simple way to make people feel involved is to share findings on an ongoing basis rather than providing a report at the end of the process.
Broaden your perspective while refining your skill set
Our commitment to testing assumptions led us to challenge ourselves to do research on every project. While we're dogmatic about this goal, we're decidedly un-dogmatic about the form our research takes from one project to another. To pursue this goal, we seek to:
- Expand our understanding. To instill a culture of research at Viget, we've found it necessary to question our assumptions about what research looks like. Books like Erika Hall’s Just Enough Research teach us the range of possible approaches for getting useful user input at any stage of a project, and at any scale. Reflect on any methodological biases that have become well-worn paths in your approach to research. Maybe your organization is meticulous about metrics and quantitative data, and could benefit from a series of qualitative studies. Maybe you have plenty of anecdotal and qualitative evidence about your product that could be better grounded in objective analysis. Aim to establish a balanced perspective on your product through a diverse set of research lenses, filling in gaps as you learn about new approaches.
- Adjust our approach to project constraints. We've found that the only way to consistently incorporate research in our work is to adjust our approach to the context and constraints of any given project. Client expectations, project type, business goals, timelines, budget, and access to participants all influence the type, frequency, and output of our research. Iterative prototype testing of an email editor, for example, looks very different than post-launch qualitative studies for an editorial website. While some projects are research-intensive, short studies can also be worthwhile.
- Reflect on successes and shortcomings. We have a longstanding practice of holding post-project team retrospectives to reflect on and document lessons for future work. Research has naturally come up in these conversations, and many of the lessons we've discussed in them appear in this post. As an agency with a diverse set of clients, it's been important for us to understand what types of research work for which clients, and when. Make sure to take time to ask these questions after projects. Mid-project retrospectives can be beneficial too, especially on long engagements, though it's hard to see the forest when you're in the weeds.
Streamline qualitative research processes 🚄
Learning to be more efficient at planning, conducting, and analyzing research has helped us overturn the idea that some projects merit research while others don't. Remote moderated usability tests are one of our preferred methods, yet, in our experience, the biggest obstacle to incorporating these tests isn't the actual moderating or analyzing, but the overhead of acquiring and scheduling participants. While some agencies contract out the work of recruiting, we've found it less expensive and more reliable to collaborate with our clients to find the right people for our tests. That said, here are some recommendations for holding efficient qualitative tests:
- Know your tools ahead of time. We use a number of tools to plan, schedule, annotate, and analyze qualitative tests (we're inveterate spreadsheet users). Learn your tools beforehand, especially if you're trying something new. Tools should fade into the background during tests, which Reframer does nicely.
- Establish a recruiting process. When working with clients to find participants, we'll often provide an email template tailored to the project for them to send to existing or potential users of their product. This introductory email will contain a screener that asks a few project-related demographic or usage questions, and provides us with participant email addresses which we use to follow-up with a link to a scheduling tool. Once this process is established, the project manager will ensure that the UX designer on the team has a regular flow of participants. The recruiting process doesn't take care of itself – participants cancel, or reschedule, or sometimes don't respond at all – yet establishing an approach ahead of time allows you, the researcher, to focus on the research in the midst of the project.
- Start recruiting early. Don't wait until you've finished writing a testing script to begin recruiting participants. Once you determine the aim and focal points of your study, recruit accordingly. Scripts can be revised and approved in the meantime.
Be proactive about making research happen 🤸
As a generalist design agency, we work with clients whose industries and products vary significantly. While some clients come to us with clear research priorities in mind, others treat it as an afterthought. Rare, however, is the client who is actively opposed to researching their product. More often than not, budget and timelines are the limiting factors. So we try not to make research an ordeal, but instead treat it as part of our normal process even if a client hasn't explicitly asked for it. Common-sense perspectives like Jakob Nielsen’s classic “Discount Usability for the Web” remind us that some research is always better than none, and that some can still be meaningfully pursued. We aren’t pushy about research, of course, but instead try to find a way to make it happen when it isn't a definite priority.
World Usability Day is coming up on November 9, so now is a great time to stop and reflect on how you approach research and to brainstorm ways to improve your process. The tips above reflect some of the lessons we’ve learned at Viget as we’ve tried to improve our own process. We’d love to hear about approaches you’ve used as well.

How we created a content strategy without realizing it
Tania Hockings is the Senior Digital Content Advisor at ACC and has a passion for creating content that caters to users (not the business). Co-presenter and co-author Amy Stoks is a Senior Experience Designer at PwC’s Experience Centre and possesses a love of empathy and collaboration. Ahead of their presentation at UX New Zealand 2017, Tania and Amy share their experience creating a content strategy while working on a project for the Accident Compensation Corporation (ACC’s no-fault scheme helps pay the cost of care for everyone in New Zealand if they’re injured in an accident).
It’s a truth universally acknowledged that before you start writing content you’re supposed to have a content strategy. Three months out from launch of the new acc.co.nz beta site, we did not have one. Nor did we have a lot of time, much resource, or a completed content audit. However, we did have:
- Some pretty good design principles, based on user research
- A list of the 37 priority tasks that our users needed to do on the acc.co.nz website
- One content writer (Tania) and one part-time content consultant (Amy)
- A deadline
- Freedom to do whatever we needed to do to get content live for the beta launch.
Here’s a quick look into how we created a content strategy for acc.co.nz without actually realizing it.
Content principles are a great starting point
We needed more direction than those 37 tasks to get writing, so inspired by our design principles, we wrote some equivalent principles for the content. We decided to start with the tried and tested principles already in play by Govt.nz and GOV.UK — we didn’t have time to reinvent the wheel. We ended up with eight principles for how we would write our content:
- Do less: only create what is needed and what we can maintain
- Always have an evidence-based user need: we know why a piece of content is needed and have the evidence to back it up
- Ask for constant feedback: we use feedback and analytics to understand what people want and need to know, as well as what they don’t
- Provide a consistent customer experience: our voice is the same across all platforms and our content is accessible for every kind of user
- Create seamless customer journeys: no dead ends or broken links, no unnecessary information
- Improve how ACC writes: we build good relationships with content contributors and proactively offer content support and training
- Ensure transparent ownership for all content: every piece of content has an owner, a business group and a digital advisor assigned
- Accept that not everything has to live on acc.co.nz: other channels share ACC content, so if ACC isn’t the source of truth for information we don’t put it on acc.co.nz
We made a checklist of what would and wouldn’t live on acc.co.nz according to the principles...and that was pretty much it. We really didn’t have time to do much else because the design of the site was running ahead of us. We also needed to get some content in front of users at our weekly testing sessions.
Sometimes you’ve just gotta get writing
We got stuck into writing those 37 tasks using our pair writing approach, which was also an experiment, but more on that in our UX New Zealand talk. While we wrote, we were living and breathing the content principles: we introduced them to our internal subject experts while we were writing and constantly referred back to the principles to help structure the content.
After the beta launch, we had a few more content writers on the team and a bit of time to breathe (but not much!). We actually wrote the principles down and put them into a visual briefing pack to give to the subject experts ahead of our pair writing sessions. This pack covered:
- our principles
- the goal of the project
- the process
- the usual best practice webby stuff.
As we wrote more content, the briefing pack and our process evolved based on what we learned and feedback from our subject experts about what was and wasn’t working.
During the same brief intermission, we also had a chance to revisit the content strategy. However, in practice we just did a brainstorm on a whiteboard of what the strategy might be. It looked like this:

And it stayed like that for another six months. We can’t remember if we ever looked at it much, but we felt good knowing it was there.
Seriously, we really need a content strategy...don’t we?
We finally got to the end of the project. The launch date was looming, but still no content strategy. So we booked a room. Three of us agreed to meet to nut it out and finally write our formal content strategy. We talked for a bit, going around in circles, until we realized we’d already done it. The briefing pack was the content strategy. Less a formal document and more a living set of principles of how we had and would continue to work.
Would we do it again?
Yeah, we would. In fact, the ACC digital team is already following the same approach on a new project. Content principles are key: they’re simple, practical to socialize, and easy to stick to. We found it really valuable to evolve the strategy as we learned more from user research and subject matter expertise.
Of course, it wasn’t all rosy — these projects never are! Some governance and muscle behind what we were doing would have really helped. We found ourselves in some intense stakeholder meetings where we were the first line of defence for the content principles. Unsurprisingly, not everybody agrees with doing less! But we’re pretty sure that having a longer strategy formalized in an official document still wouldn’t have helped us much.
The next piece of work for the digital team at ACC is defining that governance and building a collaborative process to design digital products within the organization. The plan is to run informal, regular catch-ups with internal stakeholders to make sure the content strategy is still relevant and working for ACC’s users and the organization itself.
If you remember anything from this blog post, remember this:
Treat your content strategy less like a formal document and more like a working hypothesis that changes and grows as you learn. A formal document might make you feel good, but it’s likely no one is reading it.
Whatever format you choose for your content strategy/working hypothesis, make sure you get it signed off and endorsed by the people who matter. You’ll need backup in those inevitably tense project meetings!
The acc.co.nz content strategy looks awesome these days — very visual and easy to read. Tania always has a copy in her notebook and carries it with her everywhere. If you’re lucky enough to run into her on the streets of Wellington, she might just show it to you.
Want to hear more? Come to UX New Zealand!
If you'd like to hear more about designing content, plus a bunch of other cool UX-related talks, head along to UX New Zealand 2017 hosted by Optimal Workshop. The conference runs from 11-13 October including a day of fantastic workshops, and you can get your tickets here.
How to do a content audit (and why)
Ah, content audits! If you need to work on a website redesign, information architecture revamp, or a site migration, one of the first things you’ll need to do is a content audit.
Most likely a website redesign project will need some amount of re-organization because users can't find anything on the site. First, you’ll need to know who your users are. You’ll find out what it is that the users of your website are doing or looking for. You’ll conduct some preliminary interviews to ask your project stakeholders what they think their users are looking for. Maybe you’ll get to ask actual users of your website. Then you’ll realize that you could also find out if the website correlates with what your users expect to be able to find. If you only knew what was on your website in the first place...
You’ll need to do the content audit. You can’t avoid it now. The spreadsheet comes out. You start your inventory, add the metadata, and add columns to make sense of cells. It was arduous, and it was worth it. Once you do the content audit, you’ll be able to conduct card sorting and affinity mapping to find out what your users are looking for.
It reads like a children’s book, doesn’t it? “And then content lived happily ever after...”
But content audits are iterative, not a once-in-a-project-cycle activity. Like a routine checkup or a yearly exam, a content audit is essential to keep content relevant and valuable to your users. Conducting a content audit is one of the first steps in putting together a card sort. You need words on a card, right? Your content audit is where you’ll find them.
Incidentally, “how to do a content audit” articles are plentiful, but this one focuses on how a card sort makes use of a content audit.
What is a content audit?
Content audits start off as inventories. They tend to be massive spreadsheets that contain, among other things, metadata about the content you’re keeping track of. For example, an inventory may start as a list of companies, occupations, or cities. Then someone may come along and start collecting empirical metadata around these things. The list becomes a list of the top global brand companies, the best occupations for 2017, or the most dangerous cities in the world.
Maybe you’re keeping track of what books you have, what you’ve read, what you haven’t read. Maybe you’re keeping an inventory of your kitchen pantry. Maybe you’re collecting a list of movies and films you should watch or keeping a bucket list of places you want to visit.
Consider the scope of these three inventories:
- A full content inventory. A complete listing of all site content, including pages, images, videos, and PDFs. If you consider a kitchen taxonomy, this includes everything in the kitchen (including books, recipe binders, kitchen equipment, and refrigerated items).
- A partial content inventory. A subset listing of content slicing across the site. For example, most popular, site hierarchy, or items used within a defined period of time. A partial kitchen inventory would cover everything used in the past 6 months.
- A content sample. A listing of example content from the site. For instance, a specific category or location. A content sample of kitchen inventory could cover the pantry or the spice cabinet.
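For teams who keep their inventory in code rather than a spreadsheet, the three scopes above can be thought of as filters over one master list. Here is an illustrative sketch only: the record fields, sample rows, audit date, and six-month cutoff are all assumptions, not part of any tool the article describes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ContentItem:
    url: str
    content_type: str   # "page", "image", "video", "pdf", ...
    category: str       # which section of the site (or kitchen) it lives in
    last_used: date

# A toy master inventory (three rows standing in for thousands).
inventory = [
    ContentItem("/recipes/bread", "page", "recipes", date(2025, 6, 1)),
    ContentItem("/equipment/mixer-manual.pdf", "pdf", "equipment", date(2023, 1, 15)),
    ContentItem("/pantry/spices", "page", "pantry", date(2025, 9, 3)),
]

audit_date = date(2025, 10, 1)  # fixed "today" so the example is reproducible

# Full content inventory: every item, regardless of type or age.
full = list(inventory)

# Partial content inventory: a slice across the site,
# here everything used in the past six months.
cutoff = audit_date - timedelta(days=182)
partial = [item for item in inventory if item.last_used >= cutoff]

# Content sample: a single category only, e.g. just the pantry.
sample = [item for item in inventory if item.category == "pantry"]
```

The point of the sketch is that the three scopes are not three documents: they are three views over the same master list, which is why keeping one canonical inventory pays off.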
Quantitative content inventories
Content inventories are the quantitative kind. The purpose of this list is to know how much content you have, and how many of each different kind. There are 12 countries I want to visit in Asia. I’ve already visited 10 of the 50 states in the United States. There are over 800 pages in the five websites that I’m consolidating.
Content inventories give you numbers to work with and provide a current state of affairs before you go making changes. You’ll be able to refer to this when you talk to content owners.
Qualitative content inventories
Content audits focus on the qualitative. You add an evaluation of the content (which is qualitative) to that initial simple inventory. The most common is ROT analysis: redundant, obsolete, and trivial. Another type of analysis looks at tone and voice. Kristina Halvorson, in her book Content Strategy for the Web, chunks qualitative data into six groups: usability, knowledge level, findability, actionability, audience, and accuracy.
The information you collect in your audit depends on what you want to know. If “easy to understand” is a KPI that you or the business wants to measure, then you’ll want to include readability scores. Every content audit is custom-fit for the purposes of each project.
In essence, content audits are lists of things you want to track and your assessment of that thing — whatever that thing is. Things can be physical content as well as digital. Are those spices too old and should they be thrown out? Is that travel destination in the midst of political turmoil and should it be taken off the list (for now)? Is that movie now available for streaming? How many of those 800 pages are worth keeping or updating, and how many could we archive and take offline?
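A ROT pass can be expressed as a simple tagging step over the inventory rows. This is a hypothetical sketch: the field names, the two-year staleness cutoff, and the pageview threshold are invented for illustration, not taken from any standard audit template.

```python
from datetime import date

def rot_tag(page, seen_titles, today=date(2025, 10, 1)):
    """Return the ROT flags for one audit row (illustrative thresholds)."""
    tags = set()
    if page["title"] in seen_titles:       # duplicates content seen elsewhere
        tags.add("redundant")
    age_days = (today - page["last_updated"]).days
    if age_days > 2 * 365:                 # untouched for over two years
        tags.add("obsolete")
    if page["pageviews"] < 10:             # almost nobody reads it
        tags.add("trivial")
    return tags

# Two toy audit rows: a live contact page and a stale duplicate of it.
audit = [
    {"title": "Contact us", "last_updated": date(2025, 8, 1), "pageviews": 5000},
    {"title": "Contact us", "last_updated": date(2020, 2, 1), "pageviews": 3},
]

seen = set()
for row in audit:
    row["rot"] = rot_tag(row, seen)
    seen.add(row["title"])
```

In practice the thresholds are exactly the part you negotiate with content owners; the value of writing them down (in code or a spreadsheet formula) is that the same rule gets applied to all 800 pages consistently.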
Content audits are pivotal documents
Content audits are living documents. They need to be updated on a regular basis to maintain a certain level of content quality and relevance. They are pivotal documents, shared across various disciplines and used for various purposes.
Search engine optimization (SEO) tools create site crawls that capture page titles, URLs, page elements, and position within a site hierarchy. They are spreadsheets that look and feel deceptively like content inventories. And they essentially are.
Content audits are converging:
- SEO specialists conduct SEO content audits to identify thin content, accessibility, indexability, duplicate content and such.
- Content strategists and information architects conduct inventories and audits to determine what content exists, where it lives, when it was last updated, and who owns it.
- Taxonomists mine content inventories for categories and content terminology.
- Search analysts collect keywords to supplement site search.
Content audits are pivotal documents that have many different uses. Someone adds site analytics to the document, then readability scores, then BOOM! There are now even more ways to pivot the table — top landing pages, top pageviews, highest bounce rate, high word count, low word count, oldest content, newest content — where do you want to start?
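Once analytics and readability data are merged into the audit, each of those "pivots" is just a different sort over the same rows. A minimal stdlib-only sketch, with column names and values invented for illustration:

```python
# A merged content audit: inventory rows enriched with analytics columns.
# URLs and numbers are made up for the example.
audit = [
    {"url": "/pricing", "pageviews": 9200, "bounce_rate": 0.31, "word_count": 450},
    {"url": "/blog/old-post", "pageviews": 40, "bounce_rate": 0.88, "word_count": 2100},
    {"url": "/docs/setup", "pageviews": 3100, "bounce_rate": 0.22, "word_count": 1300},
]

# Each "pivot" is the same data under a different ordering.
top_pageviews = sorted(audit, key=lambda r: r["pageviews"], reverse=True)
highest_bounce = sorted(audit, key=lambda r: r["bounce_rate"], reverse=True)
lowest_word_count = sorted(audit, key=lambda r: r["word_count"])
```

The same idea holds whether the "table" lives in Python, a spreadsheet pivot table, or a BI tool: one shared document, many orderings, each answering a different stakeholder's question.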
Designing for conversational user interfaces: A Q&A with Stratis Valachis
Stratis Valachis, senior user experience designer at Aviva’s Digital Innovation Garage, took some time out of his busy schedule to answer some questions about designing for conversational user interfaces (CUIs). Learn more about his processes for research and design for CUIs, what he thinks the future will look like, and some of the biggest challenges he’s faced while designing for them.
Stratis will be speaking at MUXL2017, the third annual conference on mobile user experience in London, on the 10th of November at City, University of London. Using case studies through talks and workshops, the conference will cover core UX principles as well as emerging topics such as AI (chatbots), VR/AR, and IoT.
What does the research and design process for conversational interfaces look like?
Like any design project, you should always start by identifying user needs and real problems. Research how users solve that problem currently, and then evaluate for which use cases you can remove friction and enhance the experience by utilizing a conversational interface. Don't try to chat-ify or voice-ify your product just because it's a cool trend. In many ways conversational interfaces (CUIs), both voice and visual, have more usability constraints than a traditional GUI. For example, it’s hard to interrupt the conversation to recover from errors, you can't easily skim through information, progress is linear, and you very often need to rely on recall.
Users make conscious compromises about which type of interface they want to use. This means that a solution utilizing a CUI needs to offer an obvious benefit for your chosen use case, otherwise users won't use your product. That's why special emphasis should be placed on early research about the context in which users will use your product and on why a CUI could provide a better experience.
When you begin the design phase, a good practice is to craft a personality for your interface. Studies have shown that because humans are empathetic, they will assign human character attributes to your CUI anyway, so it's better to make sure this is defined through design. This works really well for platforms like Google Home and Facebook Messenger, which make it clear to the user that each product built on them is a different entity from the default assistant. Some channels, like Alexa, don't make that distinction clear. In these cases, you need to make sure that the character of your CUI doesn’t significantly deviate from the personality of the default assistant, otherwise you'll mess with users' mental models and create confusion. For example, when you're ordering an Uber with Alexa, it’s Alexa that speaks back to you: "Alexa, ask Uber for a ride." "Sure, there's an Uber less than a minute away, would you like me to order it?" On Google Home, by contrast, the Google Assistant makes it clear that it passes you over to Uber: "Hi, I'm Uber, how can I help?"
After you define the personality, start drafting out the core experience of your product. If you're working on a visual CUI, type the conversation down like a screenplay. If you're working with voice, act the dialogue out with your colleagues and use voice simulators to see how it feels in the channel you're designing for. This will make it easier to decide the direction you'd like to follow and will also help you initiate conversations with stakeholders.
At this stage, you will be ready to start designing your user flows to define the functionality at a granular level. Again, understanding context is crucial. Make sure you think of the different scenarios in which users will interact with your product and the ways they're likely to phrase their input. User testing is key for this.
What are some of the biggest challenges you've faced designing for CUI?
Setting the right expectations for users. That applies to both visual and voice interfaces. There's a gap between the mental model users have of what most AI products with conversational interfaces can do, and what they're actually capable of doing. That's a common pattern I've seen in user testing sessions, even with users who had previous experience with the conversational channel being tested. As a designer, your challenge is to make the affordances and constraints clear in a way that feels like a natural part of the conversation and mitigates disappointment from unrealistic expectations.

Another challenge is trying to cater for all the different ways people will phrase their requests. The key here is to invest time and resources in user research and NLP (natural language processing) services. If that's out of scope for your project, consider limiting the options for your users, because trying to guide them to say things in a certain way won't work. A good example of this is Facebook Messenger, which now allows developers to remove the input field entirely from the experience in order to prevent users from making requests that can't be supported.
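As a concrete illustration of limiting input, here's a hedged Python sketch that builds a Messenger-style quick-reply payload, offering the user a fixed set of buttons instead of free text. The payload shape follows the Messenger Send API's `quick_replies` format, but the recipient ID, prompt text, and options below are purely illustrative assumptions.

```python
# Sketch: constrain user input by offering quick-reply buttons rather than
# free text. Payload shape follows the Messenger Send API's quick_replies
# format; all concrete values here are illustrative.
def quick_reply_message(recipient_id, text, options):
    """Build a Send API payload offering a fixed set of reply buttons."""
    return {
        "recipient": {"id": recipient_id},
        "message": {
            "text": text,
            "quick_replies": [
                {"content_type": "text", "title": opt, "payload": opt.upper()}
                for opt in options
            ],
        },
    }

msg = quick_reply_message(
    "12345",  # hypothetical recipient ID
    "What would you like to do?",
    ["Track order", "Talk to support"],
)
```

Because every tap maps to a known `payload`, the bot never has to parse an utterance it can't support, which is exactly the expectation-setting trade-off described above.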
How do you think CUI is going to change the way designers and researchers do their work?
It might require designers and researchers to slightly alter some of the techniques they use (thinking aloud during user testing, for example, doesn't work with voice interfaces), but the fundamentals will stay the same. You still need to focus on understanding the problem, explore different solutions through divergent thinking, then converge, develop, and continuously iterate based on user feedback. The exciting thing is that these new technologies significantly expand our toolbox and offer interesting new ways to solve problems for our users.
What improvements to this kind of technology do you wish to see? How would you like this technology to progress in the future?
I would like to see a more widespread integration of voice interfaces with visuals and GUI interaction patterns. A good example of the benefits of this approach is Amazon's Fire TV. Users can converse with the system via voice when it's more efficient than the alternative interaction options (for example, searching for a movie) but use their remote control to interact with visual UI elements for tasks that would be tedious to perform through voice. For example, selecting a movie cover to reveal descriptive text and then skimming through it helps you gauge whether the plot is interesting faster than if you had to consume this information through a conversation. This hybrid approach combines the best of both worlds to create a stronger experience. I think we will see this type of interface a lot more in the future. Think of Iron Man and J.A.R.V.I.S.
Any advice for young designers and researchers hoping to get into this part of the industry?
Invest time in learning best practices for crafting good dialogue. It's a crucial skill for designers in this field, and Google and Amazon's design guidelines are a good starting point. This doesn't mean you should skip training and improving your knowledge of usability for traditional interfaces. Most of those principles are time-proof and channel-agnostic, and will help you greatly with conversational interfaces.

Another thing you should make sure you do is stay up to date with the latest trends. The technology evolves very fast, so you need to stay ahead of the curve. Attend meetups, work on personal projects, and participate in hackathons to practice and learn from the experts.

As long as you're really passionate about the field, there will be plenty of opportunities for you to get involved and contribute. We're still in the early stages of mainstream adoption of the technology, so we have the chance to make a significant impact on the evolution of the field and shape best practices for years to come, which is really exciting!
New to Reframer: Tag groups and custom colors
We’ve been working hard over the last few weeks on some nifty little changes to the way you can use, create and manage tags in Reframer. Today, we’re happy to announce that all these changes are now live. Yay!

Now you can group your tags and add custom colors. Learn a little bit about how these new features work and how you can use them below.
More flexibility with your tags
We made these changes to simplify the notetaking and analysis stages of your research projects. Previously, Reframer tags were all the same shade of blue and organized alphabetically — not always the easiest way to find the tags you’re looking for. Now you can add as many tag groups as you like to your Reframer sessions, making your tags much quicker and easier to manage and find. Even better, you can also apply colors to these groups. Pick from our selected palette of colors, or choose a custom one via its hex color code. Grouping your tags gives you the freedom to organize your research observations in the way that makes sense for your project. Here are some of the many ways you can group and color your research observations:
- Based on sentiment (happy, sad, angry, confused etc.)
- Based on products
- Based on tasks — for example, testing a certain section or feature of a website
- Based on devices used
- Based on defining personas
- Based on period of time/time of day

These new tag features give you a lot of flexibility — each group doesn’t have to contain only one color. You can add multiple colors within one group (for example, within a ‘sentiment’ group you could select red for ‘angry’, green for ‘happy’, etc.).

For a bit more guidance on how to write effective research observations in Reframer, check out this article we wrote for UX Mastery.

We hope you like these new tag features for Reframer. As always, if you have any feedback, please let us know at support@optimalworkshop.com. Alternatively, comment below with some of the ways you’ve used these new features in your research.

13 time-saving tips and tools for conducting great user interviews
User interviews are a great research method for gaining qualitative data about your users and understanding what they think and feel. But they can be quite time consuming, which can sometimes put people off doing them altogether.

They can also be a bit of a logistical nightmare to organize. You need to recruit participants, nail down a time and place, bring your gear, and come up with a Plan B if people don’t show up. All of this can take a fair bit of back and forth between your research team and other people, and it’s a real headache when you have a deadline to work to.

So, how can you reap the great rewards and insights that user interviews provide, while spending less time planning and organizing them? Here are 13 tips and tools to help get you started.
Preparation
1) Come up with a checklist
Checklists can be lifesavers, especially when your brain is running 100 miles an hour and you’re wondering if you’ve forgotten to even introduce yourself to your participant. Whether you’re doing your research remotely or in person, it always helps to have a list of all the tasks you need to do so you can check them off one by one. A great checklist should include:
- the items you need to bring to your sessions (notebooks, laptop, pens, water, and do NOT forget your laptop charger!)
- any links you need to send to your interviewee if speaking to them remotely (Google Hangouts, Webex, etc.)
- a reminder to get consent to record your interview session
- a reminder to hit the record button
Scripts are also useful for cutting down time. Instead of “umm-ing” and “ahh-ing” your way through your interview, you’ll have a general idea of what you’ll talk about. Scripts will likely change between projects, but having a loose template that you can chop and change easily will save you time in the future. Some basic things you’ll want to include in your script:
- an introduction of yourself, and some ice-breaker questions to build a rapport with your participant
- your research goals and objectives — what/who you’re doing this research for and why
- how your research will be used
- the questions you’re going to ask
- tying up loose ends — answering questions from your participant and thanking them very much for their time.
2) Build up a network of participants to choose from
This is another tip that requires a bit of legwork at the start, but saves lots of hassle later on. If you build up a great network of people willing to take part in your research, recruiting becomes much easier. Perhaps you can set up a research panel that people can opt into through your website (something we’ve done here at Optimal Workshop that has been a huge help). If you’re working internally and need to interview users at your own company, you can do a similar thing. Reach out to managers or team leaders to get employees on board, get creative with incentives, reward people with thanks or cakes in public — there are loads of ideas.
3) Do your interviews remotely
Remote user research is great. It allows you to talk to all types of people anywhere in the world, without having to spend time and money traveling to reach them. There are many different tools you can use to conduct your user interviews remotely. Some easy-to-use, free options are Google Hangouts and Skype. As a bonus, it’s likely your participants will already have one of these installed, saving them time and hassle — just don’t forget to record your session. Here are a few recording tools you can use:
- QuickTime
- iShowU HD
- Pamela for Skype
4) Rehearse, rehearse, rehearse
Make sure you’re not wasting any precious research time and rehearse your interview with a colleague or friend. This will help you figure out anything you’ve missed, or what could potentially go wrong that could cause you time delays and headaches on the day.
- Do your questions make sense, and are they the right kinds of questions?
- Test your responses — are you making sure you stay neutral so you don’t lead your participants along?
- Does your script flow naturally? Or does it sound too scripty?
- Are there any areas that technology could become a hindrance, and how can you make sure you avoid this?
5) Use scheduling tools to book sessions for you
Setting up meetings with colleagues can be difficult, but when you’re reaching out to participants who are volunteering their precious time, it can be a nightmare. Make it easier for everyone involved and use a scheduling tool to take care of most of the hard work. Simply enter a few times when you’re free to host sessions, and your participants can select the ones that work for them. Here are a few tools to get you started:
- Calendly
- NeedtoMeet
- Boomerang Calendar
- ScheduleOnce
Don’t forget to automate the reminder emails to save yourself some time. Some of the above tools can sort that out for you!
In-session
6) Avoid talking about yourself — stick to your script!
When you’re trying to build a rapport with your participant, it’s easy to go overboard, get off track and waste precious research time. Avoid talking about yourself too much, and focus on asking about your participant, how they feel, and what they think. Make sure you keep your script handy so you know if you’re heading in the wrong direction.
7) Record interviews, transcribe later
In many user interview scenarios, you’ll have a notetaker to jot down key observations as your session goes on. But if you don’t have the luxury of a notetaker, you’ll likely be relying on yourself to take notes. This can be really distracting when you’re interviewing someone, and will also take up precious research time. Instead, record your interview and only note down timestamps when you come across a key observation.
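If you want something a little more structured than scribbled timestamps, a tiny helper like the hypothetical one below can log elapsed time per observation so you can jump straight to key moments when transcribing. It's just a sketch, not a feature of any tool mentioned in this article.

```python
# Hypothetical sketch: log "minutes:seconds" timestamps for key moments
# during a recorded interview session.
import time

class InterviewLog:
    def __init__(self):
        self.start = time.monotonic()  # session start, for elapsed time
        self.marks = []                # (seconds_elapsed, note) pairs

    def mark(self, note):
        """Record seconds elapsed since the session began, with a short note."""
        elapsed = int(time.monotonic() - self.start)
        self.marks.append((elapsed, note))
        return f"{elapsed // 60:02d}:{elapsed % 60:02d} {note}"

log = InterviewLog()
print(log.mark("Struggled to find the search bar"))
```

After the session, the `marks` list doubles as an index into the recording, so you only transcribe the passages worth transcribing.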
8) Don’t interrupt
Ever had something to say and started to explain it to someone, only to get interrupted then lose your train of thought? This can happen to your participants if you’re not careful, which can mean delays with getting the information you need. Stay quiet, and give your participant a few seconds before asking what they’re thinking.
9) Don’t get interrupted
If you’re hosting your interview at your office, let your coworkers know so they don’t interrupt you. Hang a sign up on the door of your meeting room and make sure you close the door. If you’re going out of your office, pick a location that’s quiet and secluded like a meeting room at a library, or a quiet corner in a cafe.
10) Take photos of the environment
If you’re interviewing users in their own environment, there are many little details that can help you with your research. But you could spend ages taking note of all these details in your session. You can get a good idea of what your participant’s day is like by snapping some images of their workstations, tech they use, and the office as a whole. Use your phone and pop these into Evernote or Dropbox to analyze later.
Analysis
11) Use Reframer to analyze your data
Qualitative research produces very powerful data, but it also produces a lot of it. It can take you and your team hours, even days, to go through it all. Use a qualitative research tool such as Reframer to tag your observations so you can easily build themes and find patterns in your data while saving hours of analysis. Tags might relate to a particular subject you’re discussing with a participant, a really valuable quote, or certain problems your participants have encountered — it all depends on your project.
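To make the idea concrete, here's a minimal Python sketch of the kind of tallying such a tool automates: counting tags across observations to surface recurring themes. The observations and tags below are invented for illustration and aren't Reframer's actual data model.

```python
# Sketch: tally tags across tagged observations to surface recurring themes.
# Observations and tags are illustrative, not a real data model.
from collections import Counter

observations = [
    {"note": "Couldn't find the pricing page", "tags": ["navigation", "frustrated"]},
    {"note": "Loved the onboarding tour", "tags": ["onboarding", "happy"]},
    {"note": "Got lost in the settings menu", "tags": ["navigation", "confused"]},
]

tag_counts = Counter(tag for obs in observations for tag in obs["tags"])
top_theme = tag_counts.most_common(1)[0]  # the most frequent tag and its count
```

Even this toy tally shows how a theme ("navigation" appearing twice) falls out of consistent tagging, which is why doing the tagging during the session pays off at analysis time.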
12) Make collaboration simple
Instead of spending hours writing up some of your findings on Post-it notes and sticking them up on a wall to discuss with your teammates, you can quickly and easily do this online with Trello or MURAL. This is definitely a big timesaver if you’ve got some team members who work remotely.
13) Make your findings easy to read
Presenting your findings to stakeholders can be difficult, and extremely time consuming if you need to explain it all in easy-to-understand terms. Save time and make it easier for your stakeholders by compiling your findings into an infographic, engaging data visualization, or slideshow presentation. Just make sure you bring all the stats you need to answer any questions from stakeholders.

For more actionable tips and tricks from UX professionals all over the world, check out our latest ebook. Download and print out templates and checklists, and become a pro for your next user interview. Get our new ebook.
Related reading
- "Individual interviews" - An article from Usability.gov explaining how, when, and why you should conduct user interviews with individuals.
- "Interviewing users" - A guide from Nielsen Norman Group showing us the situations in which user interviews are best suited, and the kind of information they can provide us.
- "Open-ended versus close-ended questions in user research" - Another article from Nielsen Norman Group explaining the differences in question types.