March 12, 2025

Efficient Research: Maximizing the ROI of Understanding Your Customers

Introduction

User research is invaluable, but in fast-paced environments, researchers often struggle with tight deadlines, limited resources, and the need to prove their impact. In our recent UX Insider webinar, Weidan Li, Senior UX Researcher at Seek, shared insights on Efficient Research—an approach that optimizes Speed, Quality, and Impact to maximize the return on investment (ROI) of understanding customers.

At the heart of this approach is the Efficient Research Framework, which balances these three critical factors:

  • Speed – Conducting research quickly without sacrificing key insights.
  • Quality – Ensuring rigor and reliability in findings.
  • Impact – Making sure research leads to meaningful business and product changes.

Within this framework, Weidan outlined nine tactics that help UX researchers work more effectively. Let’s dive in.

1. Time Allocation: Invest in What Matters Most

Not all research requires the same level of depth. Efficient researchers prioritize their time by categorizing projects based on urgency and impact:

  • High-stakes decisions (e.g., launching a new product) require deep research.
  • Routine optimizations (e.g., tweaking UI elements) can rely on quick testing methods.
  • Low-impact changes may not need research at all.

By allocating time wisely, researchers can avoid spending weeks on minor issues while ensuring critical decisions are well-informed.

2. Assistance of AI: Let Technology Handle the Heavy Lifting

AI is transforming UX research, enabling faster and more scalable insights. Weidan suggests using AI to:

  • Automate data analysis – AI can quickly analyze survey responses, transcripts, and usability test results.
  • Generate research summaries – Tools like ChatGPT can help synthesize findings into digestible insights.
  • Speed up recruitment – AI-powered platforms can help find and screen participants efficiently.

While AI can’t replace human judgment, it can free up researchers to focus on higher-value tasks like interpreting results and influencing strategy.
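
As a concrete illustration of the first two points, here is a minimal sketch of batch-summarizing interview transcripts with a large language model. It assumes the OpenAI Python client, an API key in the environment, and a folder of exported .txt transcripts; the model name and prompt are illustrative choices, not tools Weidan specifically recommends.

```python
# Minimal sketch: batch-summarizing interview transcripts with an LLM.
# Assumes the OpenAI Python client (pip install openai) with an API key
# in the OPENAI_API_KEY environment variable; the model name and prompt
# are illustrative, not recommendations from the webinar.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a UX research assistant. Summarize the following interview "
    "transcript into 3-5 key insights, each with one supporting quote."
)

def summarize_transcript(text: str) -> str:
    """Ask the model for a short, structured summary of one transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical folder of session transcripts exported as .txt files.
for path in Path("transcripts").glob("*.txt"):
    print(f"--- {path.name} ---")
    print(summarize_transcript(path.read_text()))
```

A human researcher should still review every summary against the source transcript; the point is to speed up the first pass, not to outsource the interpretation.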

3. Collaboration: Make Research a Team Sport

Research has a greater impact when it’s embedded into the product development process. Weidan emphasizes:

  • Co-creating research plans with designers, PMs, and engineers to align on priorities.
  • Involving stakeholders in synthesis sessions so insights don’t sit in a report.
  • Encouraging non-researchers to run lightweight studies, such as A/B tests or quick usability checks (a minimal significance check for an A/B test is sketched below).

When research is shared and collaborative, it leads to faster adoption of insights and stronger decision-making.
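
One of those lightweight studies, the A/B test, still needs a basic significance check before the team acts on the numbers. Here is a minimal sketch of a two-proportion z-test in Python; the conversion figures are made up for illustration, and this is one common check rather than a method prescribed in the webinar.

```python
# Minimal sketch: a two-proportion z-test for a simple A/B comparison.
# The visitor and conversion counts below are made-up example data.
from math import sqrt

from scipy.stats import norm

def ab_test(conversions_a: int, visitors_a: int,
            conversions_b: int, visitors_b: int) -> tuple[float, float]:
    """Return the z-score and two-tailed p-value for variant B vs. A."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))
    return z, p_value

# Made-up example: 120/2400 conversions on A vs. 156/2400 on B.
z, p = ab_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```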

4. Prioritization: Focus on the Right Questions

With limited resources, researchers must choose their battles wisely. Weidan recommends using a prioritization framework to assess:

  • Business impact – Will this research influence a high-stakes decision?
  • User impact – Does it address a major pain point?
  • Feasibility – Can we conduct this research quickly and effectively?

By filtering out low-priority projects, researchers can avoid research for research’s sake and focus on what truly drives change.
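
Weidan doesn't prescribe a particular scoring model, but one simple way to operationalize these three filters is a weighted score. The sketch below assumes 1-5 ratings and specific weights, both of which are illustrative choices rather than part of the framework itself.

```python
# Minimal sketch: a weighted prioritization score over the three filters.
# The 1-5 scale and the weights are illustrative assumptions, not values
# prescribed in the webinar.
from dataclasses import dataclass

@dataclass
class ResearchRequest:
    name: str
    business_impact: int  # 1-5: will this influence a high-stakes decision?
    user_impact: int      # 1-5: does it address a major pain point?
    feasibility: int      # 1-5: can we do this quickly and effectively?

    def priority(self) -> float:
        # Weight impact over feasibility (an assumption; tune to taste).
        return (0.4 * self.business_impact
                + 0.4 * self.user_impact
                + 0.2 * self.feasibility)

requests = [
    ResearchRequest("Checkout redesign discovery",
                    business_impact=5, user_impact=5, feasibility=3),
    ResearchRequest("Button label tweak",
                    business_impact=1, user_impact=2, feasibility=5),
]
for r in sorted(requests, key=lambda r: r.priority(), reverse=True):
    print(f"{r.priority():.1f}  {r.name}")
```

Anything that scores low on both impact dimensions is a candidate to decline, regardless of how easy it would be to run.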

5. Depth of Understanding: Go Beyond Surface-Level Insights

Speed is important, but efficient research isn’t about cutting corners. Weidan stresses that even quick studies should provide a deep understanding of users by:

  • Asking why, not just what – Observing behavior is useful, but uncovering motivations is key.
  • Using triangulation – Combining methods (e.g., usability tests + surveys) to validate findings.
  • Revisiting past research – Leveraging existing insights instead of starting from scratch.

Balancing speed with depth ensures research is not just fast, but meaningful.

6. Anticipation: Stay Ahead of Research Needs

Proactive researchers don’t wait for stakeholders to request studies—they anticipate needs and set up research ahead of time. This means:

  • Building a research roadmap that aligns with upcoming product decisions.
  • Running continuous discovery research so teams have a backlog of insights to pull from.
  • Creating self-serve research repositories where teams can find relevant past studies.

By anticipating research needs, UX teams can reduce last-minute requests and deliver insights exactly when they’re needed.

7. Justification of Methodology: Explain Why Your Approach Works

Stakeholders may question research methods, especially when they seem time-consuming or expensive. Weidan highlights the importance of educating teams on why specific methods are used:

  • Clearly explain why qualitative research is needed when stakeholders push for just numbers.
  • Show real-world examples of how past research has led to business success.
  • Provide a trade-off analysis (e.g., “This method is faster but provides less depth”) to help teams make informed choices.

A well-justified approach ensures research is respected and acted upon.

8. Individual Engagement: Tailor Research Communication to Your Audience

Not all stakeholders consume research the same way. Weidan recommends adapting insights to fit different audiences:

  • Executives – Focus on high-level impact and key takeaways.
  • Product teams – Provide actionable recommendations tied to specific features.
  • Designers & Engineers – Share usability findings with video clips or screenshots.

By delivering insights in the right format, researchers increase the likelihood of stakeholder buy-in and action.

9. Business Actions: Ensure Research Leads to Real Change

The ultimate goal of research is not just to understand users, but to drive business decisions. To ensure research leads to action:

  • Follow up on implementation – Track whether teams apply the insights.
  • Tie findings to key metrics – Show how research affects conversion rates, retention, or engagement.
  • Advocate for iterative research – Encourage teams to re-test and refine based on new data.

Research is most valuable when it translates into real business outcomes.

Final Thoughts: Research That Moves the Needle

Efficient research is not just about doing more, faster—it’s about balancing speed, quality, and impact to maximize its influence. Weidan’s nine tactics help UX researchers work smarter by:

✔️  Prioritizing high-impact work
✔️  Leveraging AI and collaboration
✔️  Communicating research in a way that drives action

By adopting these strategies, UX teams can ensure their research is not just insightful, but transformational.

Watch the full webinar here


Related articles

Usability Testing: what, how and why?

Knowing why and how your users use your product is invaluable for getting to the nitty-gritty of usability: where they get stuck and where they fly through. Delving deep into motivation with probing questions, or skimming over the surface looking for issues, can be equally informative.

Usability testing can be done in several ways, each with its own benefits. Put simply, usability testing is testing how usable your product is for your users. If your product isn't usable, users won't stick around or complete their tasks, let alone come back for more.

What is usability testing? 🔦

Usability testing is a research method used to evaluate how easy something is to use by testing it with representative users.

These tests typically involve observing a participant as they work through a series of tasks involving the product being tested. Having conducted several usability tests, you can analyze your observations to identify the most common issues.

We’ll go into the three main ways of categorizing usability testing:

  1. Moderated and unmoderated
  2. Remote or in person
  3. Explorative, assessment or comparative

1. Moderated or unmoderated usability testing 👉👩🏻💻

Moderated usability testing is done in-person or remotely by a researcher who introduces the test to participants, answers their queries, and asks follow-up questions. Often these tests are done in real time with participants and can involve other research stakeholders. Moderated testing usually produces more in-depth results thanks to the direct interaction between researchers and test participants. However, this can be expensive to organize and run.

Top tip: Use moderated testing to investigate the reasoning behind user behavior.

Unmoderated usability testing is done without direct supervision; participants are typically in their own homes, using their own devices to browse the website being tested, and often working at their own pace. The cost of unmoderated testing is lower, though participant answers can remain superficial and asking follow-up questions is difficult.

Top tip: Use unmoderated testing to test a very specific question or observe and measure behavior patterns.

2. Remote or in-person usability testing 🕵

Remote usability testing is done over the internet or by phone, allowing participants the time and space to work in their own environment and at their own pace. However, it doesn’t give the researcher much contextual data, because you’re unable to ask questions about intention or probe deeper when a participant makes a particular decision. Remote testing doesn’t go as deep into a participant’s reasoning, but it allows you to test large numbers of people in different geographical areas using fewer resources.

Top tip: Use remote testing when a large group of participants is needed and the questions asked can be direct and unambiguous.

In-person usability testing, as the name suggests, is done in the presence of a researcher. In-person testing does provide contextual data as researchers can observe and analyze body language and facial expressions. You’re also often able to converse with participants and find out more about why they do something. However, in-person testing can be expensive and time-consuming: you have to find a suitable space, block out a specific date, and recruit (and often pay) participants.

Top tip: In-person testing gives researchers more time and insight into motivation for decisions.

3. Explorative, assessment or comparative testing 🔍

These three usability testing methods generate different types of information:

Explorative testing is open-ended. Participants are asked to brainstorm, give opinions, and express emotional impressions about ideas and concepts. The information is typically collected in the early stages of product development and helps researchers pinpoint gaps in the market, identify potential new features, and workshop new ideas.

Assessment research is used to test a user's satisfaction with a product and how well they are able to use it. It's used to evaluate general functionality.

Comparative research methods involve asking users to choose which of two solutions they prefer, and they may be used to compare a product with its competitors.

Top tip: Choose between these three based on what stage your research is at and how much qualitative or quantitative data you want.

Which method is right for you? 🧐

Whether the testing is done in-person, remote, moderated or unmoderated will depend on your purpose, what you want out of the testing, and to some extent your budget. 

Depending on what you are testing, each of the usability testing methods we explored here can offer an answer. If you are at the development stage of a product, it can be useful to test the entire product, checking the intuitive usability of your website to ensure users can make the best decisions, quickly. Adding, changing, or upgrading a product can also be the moment to check on a specific usability question. Planning and understanding your objectives are key to selecting the right usability testing option for your project.

Let's take a look at a couple of examples of usability testing.

1. Lab-based, in-person moderated testing - mid-life website

Imagine you have a website that sells sports equipment. Over time your site has become cluttered and disorganized, much like a bricks-and-mortar store might. You’ve noticed a drop in sales in certain areas. How do you find out what is going wrong, or where users are getting lost? With an in-person, moderated usability test in a lab (or other controlled environment), you can set tasks for users and watch (and record) what they do.

The researcher can literally be standing or sitting next to the participant throughout, recording contextual information such as how they interacted with the mouse, laptop or even the seat. Watching for cues as to the comfort of the participant and asking questions about why they make decisions can provide richer insights. Maybe they wanted purple yoga pants, but couldn’t find the ‘yoga’ section which was listed under gym rather than a clothing section.

This means you can look at how your stock is organised, or even investigate undertaking a card sort. Testing like this provides robust, fully rounded feedback on users’ behaviours, expectations, and experiences, producing data that can be turned directly into actionable directives when redeveloping the website.

2. Remote, unmoderated assessment testing - app product development

You are looking at launching an app that parents can use to access information and updates from their school. It’s still at the development stage, and at this point you want to know how easy the app is to use. You can set some very specific tasks for participants to complete, send the app to them, and leave them to complete the tasks (or not), providing feedback and comments on its usability.

The next step may be to use first-click testing to see how and where the interface is clicked, and where participants may be spending time or becoming lost. While the feedback and data gathered from this testing can be light, it will speak directly to the questions asked, and will provide data to back up (or possibly disprove) the assumptions that were made.

3. Moderated, In-person, explorative testing - new product development

You’re right at the start of the development process. The idea is new and fresh and the basics are still being considered. What better way to get an understanding of what your users truly want than an explorative study?

Open-ended questions with participants in a one-on-one environment (or possibly in groups) can provide rich data and insights for the development team. Imagine you have an exciting new promotional app that you are developing for a client. There are similar apps on the market, but none as exciting as what your team has dreamt up. By putting it (and possibly its competitors) in front of participants, you can get direct feedback on what they like, love, and loathe.

They can also help brainstorm ideas, better ways to make the app work, or improvements to the interface. All of this can be done before money is sunk into development.

Wrap up 🌯

Key objectives will dictate which usability testing method will deliver the answers to your questions.

Whether it’s in-person, remote, moderated, or unmoderated, with a bit of planning you can gather data on your users’ very real experience of your product, identifying issues, successes, and failures. Addressing your user experience with real data and knowledge can only lead to a more intuitive product.

Meera Pankhania: From funding to delivery - Ensuring alignment from start to finish

It’s a chicken and egg situation when it comes to securing funding for a large transformation program in government. On one hand, you need to submit a business case and, as part of that, you need to make early decisions about how you might approach and deliver the program of work. On the other hand, you need to know enough about the problem you are going to solve to ensure you have sufficient funding to understand the problem better, hire the right people, design the right service, and build it the right way. 

Now imagine securing hundreds of millions of dollars to design and build a service, but not feeling confident about what the user needs are. What if you had the opportunity to change this common predicament and influence your leadership team to carry out alignment activities, all while successfully delivering within the committed time frames?

Meera Pankhania, Design Director and Co-founder of Propel Design, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, on traceability and her learnings from delivering a $300 million Government program.

In her talk, Meera helps us understand how to use service traceability techniques in our work and apply them to any environment - ensuring we design and build the best service possible, no matter the funding model.

Background on Meera Pankhania

As a design leader, Meera is all about working on complex, purpose-driven challenges. She helps organizations take a human-centric approach to service transformation and helps deliver impactful, pragmatic outcomes while building capability and leading teams through growth and change.

Meera co-founded Propel Design, a strategic research, design, and delivery consultancy in late 2020. She has 15 years of experience in service design, inclusive design, and product management across the private, non-profit, and public sectors in both the UK and Australia. 

Meera is particularly interested in policy and social design. After a stint in the Australian Public Service, Meera was appointed as a senior policy adviser to the NSW Minister for Customer Service, Hon. Victor Dominello MP. In this role, she played a part in NSW’s response to the COVID pandemic, flexing her design leadership skills in a new, challenging, and important context.

Contact Details:

Email address: meera@propeldesign.com.au

Find Meera on LinkedIn  

From funding to delivery: ensuring alignment from start to finish 🏁🎉👏

Meera’s talk explores a fascinating case study within the Department of Employment Services (Australia), where a substantial funding investment of around $300 million set the stage for a transformative journey. This funding supported the delivery of a revamped Employment Services Model, which had the goal of delivering better services to job seekers and employers, and a better system for the providers operating within it. The project had a focus on aligning teams prior to delivery, which resulted in a huge amount of groundwork for Meera.

Her journey involved engaging various stakeholders within the department, including executives, to understand the program as a whole and what exactly needed to be delivered. “Traceability” became the watchword for this project, and the work is laid out in three phases:

  • Phase 1: Aligning key deliverables
  • Phase 2: Ensuring delivery readiness
  • Phase 3: Building sustainable work practices

Phase 1: Aligning key deliverables 🧮

Research and discovery (pre-delivery)

Meera’s work initially meant conducting extensive research and engagement with executives, product managers, researchers, designers, and policymakers. Through this process, a common theme was identified – the urgent (and perhaps misguided) need to start delivering! Often, organizations focus on obtaining funding without adequately understanding the complexities involved in delivering the right services to the right users, leading to half-baked delivery.

After this initial research, some general themes started to emerge:

  1. Assumptions had been made that still needed validation
  2. Teams weren’t entirely sure they understood users’ needs
  3. There was no holistic understanding of how much research and design was needed

The conclusion of this phase was that “what” needed to be delivered wasn’t clearly defined. The same was true for “how” it would be delivered.

Traceability

Meera’s work heavily revolved around the concept of “traceability” and sought to ensure that every step taken within the department was aligned with the ultimate goal of improving employment services. Traceability meant having a clear origin and development path for every decision and action taken. This is particularly important when spending taxpayer dollars!

So, over the course of eight weeks (which in reality turned out to be much longer), the team combed through documents to bring everything together and make sense of the program as a whole. This involved planning, user journey mapping, and testing and refinement.

Documenting Key Artifacts

Numerous artifacts and documents played a crucial role in shaping decisions. Meera and her team gathered and organized these artifacts, including policy requirements, legislation, business cases, product and program roadmaps, service maps, and blueprints. The team also included prior research insights and vision documents which helped to shape a holistic view of the required output.

After an effort of combing through the program documents and laying everything out, it became clear that there were a lot of gaps and a LOT to do.

Prioritising tasks

As a result of these gaps, a process of task prioritization was necessary. Tasks were categorized on a series of factors and then mapped against things like user touchpoints, pain points, features, business policy, and technical capabilities.

This then enabled Meera and the team to create Product Summary Tiles, giving each product team its own summary ahead of a series of planning sessions, with as much context (drawn from the traceability exercise) as possible to help with planning. Essentially, these tiles provided teams with a comprehensive overview of their projects, i.e. what their users need, what certain policies require them to deliver, and so on.

Phase 2: Ensuring delivery readiness 🙌🏻

Meera wanted every team to feel confident that they weren’t doing too much or too little in order to design and build the right service, the right way.

Standard design and research check-ins were well adopted, which was a great start, but Meera and the team also built a Delivery Readiness Tool, used to assess a team’s readiness to move forward with a project. The tool includes questions related to the development phase, user research, alignment with the business case, consideration of policy requirements, and more. Ultimately, it ensures that teams have considered all necessary factors before progressing further.
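
Meera’s talk doesn’t share the tool itself, so the following is a hypothetical sketch of how such a readiness checklist could be expressed in code. The questions and the pass policy are illustrative assumptions, not details from the actual tool.

```python
# Hypothetical sketch of a delivery readiness checklist. The questions
# and the pass policy are illustrative assumptions, not taken from
# Meera's actual Delivery Readiness Tool.
READINESS_QUESTIONS = [
    "Is the team clear on which development phase it is in?",
    "Has user research validated the key assumptions?",
    "Does the work trace back to the business case?",
    "Have relevant policy requirements been considered?",
    "Are design artifacts aligned with the program's templates?",
]

def readiness_score(answers: dict[str, bool]) -> float:
    """Return the fraction of readiness questions answered 'yes'."""
    return sum(answers[q] for q in READINESS_QUESTIONS) / len(READINESS_QUESTIONS)

# Example: one unresolved research question holds the team back.
answers = {q: True for q in READINESS_QUESTIONS}
answers["Has user research validated the key assumptions?"] = False

score = readiness_score(answers)
print(f"Readiness: {score:.0%}")
if score < 1.0:  # illustrative policy: any 'no' blocks progression
    print("Not ready: revisit the open items before moving forward.")
```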

Phase 3: Building sustainable work practices 🍃

As the program progressed, several sustainable work practices emerged which Government executives were keen to retain going forward.

Some of these included:

  • ResearchOps Practice: The team established a research operations practice, streamlining research efforts and ensuring that ongoing research was conducted efficiently and effectively.
  • Consistent Design Artifacts: Templates and consistent design artifacts were created, reducing friction and ensuring that teams going forward started from a common baseline.
  • Design Authority and Ways of Working: A design authority was established to elevate and share best practices across the program.
  • Centralized and Decentralized Team Models: The program showcased the effectiveness of a combination of centralized and decentralized team models. A central design team provided guidance and support, while service design leads within specific service lines ensured alignment and consistency.

Why it matters 🔥

Meera's journey serves as a valuable resource for those working on complex design programs, emphasizing the significance of aligning diverse stakeholders and maintaining traceability. Alignment and traceability are critical to ensuring that programs never lose sight of the problem they’re trying to solve, both from the user and organization’s perspective. They’re also critical to delivering on time and within budget!

Traceability key takeaways 🥡

  • Early Alignment Matters: While early alignment is ideal, it's never too late to embark on a traceability journey. It can uncover gaps, increase confidence in decision-making, and ensure that the right services are delivered.
  • Identify and audit: You never know what artifacts will shape your journey. Identify everything early, and don’t be afraid to get clarity on things you’re not sure about.
  • Conducting traceability is always worthwhile: Even if you don’t find many gaps in your program, you will at least gain a high level of confidence that your delivery is focused on the right things.

Delivery readiness key takeaways 🥡

  • Skills Mix is Vital: Assess and adapt team member roles to match their skills and experiences, ensuring they are positioned optimally.
  • Not Everyone Shares the Same Passion: Recognize that not everyone will share the same level of passion for design and research. Make the relevance of these practices clear to all team members.

Sustainability key takeaways 🥡

  • One Size Doesn't Fit All: Tailor methodologies, templates, and practices to the specific needs of your organization.
  • Collaboration is Key: Foster a sense of community and collective responsibility within teams, encouraging shared ownership of project outcomes.

Clara Kliman-Silver: AI & design: imagining the future of UX

In the last few years, the influence of AI has steadily been expanding into various aspects of design. In early 2023, that expansion exploded. AI tools and features are now everywhere, and there are two ways designers commonly react to it:

  • With enthusiasm for how they can use it to make their jobs easier
  • With skepticism over how reliable it is, or even fear that it could replace their jobs

Google UX researcher Clara Kliman-Silver is at the forefront of researching and understanding the potential impact of AI on design into the future. This is a hot topic that’s on the radar of many designers as they grapple with what the new normal is, and how it will change things in the coming years.

Clara’s background 

Clara Kliman-Silver spends her time studying design teams and systems, UX tools and designer-developer collaboration. She’s a specialist in participatory design and uses generative methods to investigate workflows, understand designer-developer experiences, and imagine ways to create UIs. In this work, Clara looks at how technology can be leveraged to help people make things, and do it more efficiently than they currently are.

In today’s context, that puts generative AI and machine learning right in her line of sight. The way this technology has boomed in recent times has many people scrambling to catch up - to identify the biggest opportunities and to understand the risks that come with it. Clara is a leader in assessing the implications of AI. She analyzes both the technology itself and the way people feel about it to forecast what it will mean into the future.

Contact Details:

You can find Clara on LinkedIn or on Twitter @cklimansilver

What role should artificial intelligence play in the UX design process? 🤔

Clara’s expertise in understanding the role of AI in design comes from significant research and analysis of how the technology is currently being used and how industry experts feel about it. AI is everywhere in today’s world, from home devices to tech platforms and specific tools for various industries. In many cases, AI automation is used for productivity, where it can speed up processes with subtle, easy-to-use applications.

As mentioned above, the transformational capabilities of AI are met with equal parts enthusiasm and skepticism. The way people use AI, and how they feel about it, is important, because users need to be comfortable implementing the technology in order for it to make a difference. The question of what value AI brings to the design process is ongoing. On one hand, AI can help increase efficiency for systems and processes. On the other hand, it can exacerbate problems if the user's intentions are misunderstood.

Access for all 🦾

There’s no doubt that AI tools enable novices to perform tasks that, in years gone by, required a high level of expertise. For example, film editing was previously a manual task, where people would literally cut rolls of film and splice them together on a reel. It was something only a trained editor could do. Now, anyone with a smartphone has access to iMovie or a similar app, and they can edit film in seconds.

For film experts, digital technology allows them to speed up tedious tasks and focus on more sophisticated aspects of their work. Clara hypothesizes that AI is particularly valuable when it automates mundane tasks. AI enables more individuals to leverage digital technologies without requiring specialist training. Thus, AI has shifted the landscape of what it means to be an “expert” in a field. Expertise is about more than being able to simply do something - it includes having the knowledge and experience to do it for an informed reason. 

Research and testing 🔬

Clara performs a lot of concept testing, which involves assessing the perceived value of an approach or method. Concept testing helps in scenarios where a solution may not address a problem or where the real problem is difficult to identify. Drawing on a recent survey, Clara describes two predominant benefits designers experienced from AI:

  1. Efficiency. Not only does AI expedite the problem solving process, it can also help efficiently identify problems. 
  2. Innovation. Generative AI can innovate on its own, developing ideas that designers themselves may not have thought of.

The design partnership 🤝🏽

Overall, Clara says UX designers tend to see AI as a creative partner. However, most users don’t yet trust AI enough to give it complete agency over the work it’s used for. The level of trust designers have exists on a continuum, depending on the nature of the work and the context of what they’re aiming to accomplish. Other factors, such as where the tech comes from, who curated it, and who’s training the model, also influence trust. For now, AI is largely seen as a valued tool, and there is cautious optimism and tentative acceptance for its application.

Why it matters 💡

AI is potentially one of the biggest game-changers of our generation in how people work. Although AI has widespread applications across sectors and systems, there are still many questions about it. In the design world, systems like DALL-E allow people to create AI-generated imagery, and auto layout in various tools allows designers to iterate more quickly and efficiently.

Like professionals in many other industries, designers are wondering where AI might go in the future and what it might look like. The answers to these questions have very real implications for the future of design jobs and whether they will continue to exist. In practice, Clara describes the current mood towards AI as existing on a continuum between adherence and innovation:

  • Adherence is about how AI helps designers follow best practice
  • Innovation is at the other end of the spectrum, and involves using AI to figure out what’s possible

The current environment is extremely subjective, and there’s no agreed best practice. This makes it difficult to recommend a certain approach to adopting AI and creating permanent systems around it. Both the technology and the sentiment around it will evolve through time, and it’s something designers, like all people, will need to maintain good awareness of.
