February 20, 2024
3 min

Meera Pankhania: From funding to delivery - Ensuring alignment from start to finish

It’s a chicken and egg situation when it comes to securing funding for a large transformation program in government. On one hand, you need to submit a business case and, as part of that, you need to make early decisions about how you might approach and deliver the program of work. On the other hand, you need to know enough about the problem you are going to solve to ensure you have sufficient funding to understand the problem better, hire the right people, design the right service, and build it the right way. 

Now imagine securing hundreds of millions of dollars to design and build a service, but not feeling confident about what the user needs are. What if you had the opportunity to change this common predicament and influence your leadership team to carry out alignment activities, all while successfully delivering within the committed time frames?

Meera Pankhania, Design Director and Co-founder of Propel Design, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, on traceability and her learnings from delivering a $300 million Government program.

In her talk, Meera helps us understand how to use service traceability techniques in our work and apply them to any environment - ensuring we design and build the best service possible, no matter the funding model.

Background on Meera Pankhania

As a design leader, Meera is all about working on complex, purpose-driven challenges. She helps organizations take a human-centric approach to service transformation and helps deliver impactful, pragmatic outcomes while building capability and leading teams through growth and change.

Meera co-founded Propel Design, a strategic research, design, and delivery consultancy in late 2020. She has 15 years of experience in service design, inclusive design, and product management across the private, non-profit, and public sectors in both the UK and Australia. 

Meera is particularly interested in policy and social design. After a stint in the Australian Public Service, Meera was appointed as a senior policy adviser to the NSW Minister for Customer Service, Hon. Victor Dominello MP. In this role, she played a part in NSW’s response to the COVID pandemic, flexing her design leadership skills in a new, challenging, and important context.

Contact Details:

Email address: meera@propeldesign.com.au

Find Meera on LinkedIn  

From funding to delivery: ensuring alignment from start to finish 🏁🎉👏

Meera’s talk explores a fascinating case study within the Department of Employment Services (Australia), where a substantial funding investment of around $300 million set the stage for a transformative journey. This funding supported the delivery of a revamped Employment Services Model, which had the goal of delivering better services to job seekers and employers, and a better system for providers. The project had a focus on aligning teams prior to delivery, which resulted in a huge amount of groundwork for Meera.

Her journey involved engaging various stakeholders within the department, including executives, to understand the program as a whole and what exactly needed to be delivered. “Traceability” became the watchword for this project, which Meera lays out in three phases.

  • Phase 1: Aligning key deliverables
  • Phase 2: Ensuring delivery readiness
  • Phase 3: Building sustainable work practices

Phase 1: Aligning key deliverables 🧮

Research and discovery (pre-delivery)

Meera’s work initially meant conducting extensive research and engagement with executives, product managers, researchers, designers, and policymakers. Through this process, a common theme was identified – the urgent (and perhaps misguided) need to start delivering! Often, organizations focus on obtaining funding without adequately understanding the complexities involved in delivering the right services to the right users, leading to half-baked delivery.

After this initial research, some general themes started to emerge:

  1. Assumptions were made that still needed validation
  2. Teams weren’t entirely sure that they understood users’ needs
  3. A lack of holistic understanding of how much research and design was needed

The conclusion of this phase was that “what” needed to be delivered wasn’t clearly defined. The same was true for “how” it would be delivered.

Traceability

Meera’s journey heavily revolved around the concept of “traceability” and sought to ensure that every step taken within the department was aligned with the ultimate goal of improving employment services. Traceability meant having a clear origin and development path for every decision and action taken. This is particularly important when spending taxpayer dollars!

So, over the course of eight weeks (which turned out to be much longer), the team went through a process of combing through documents in an effort to bring everything together to make sense of the program as a whole. This involved some planning, user journey mapping, and testing and refinement. 

Documenting Key Artifacts

Numerous artifacts and documents played a crucial role in shaping decisions. Meera and her team gathered and organized these artifacts, including policy requirements, legislation, business cases, product and program roadmaps, service maps, and blueprints. The team also included prior research insights and vision documents which helped to shape a holistic view of the required output.

After an effort of combing through the program documents and laying everything out, it became clear that there were a lot of gaps and a LOT to do.

Prioritising tasks

As a result of these gaps, a process of task prioritization was necessary. Tasks were categorized based on a series of factors and then mapped out based on things like user touch points, pain points, features, business policy, and technical capabilities.

This then enabled Meera and the team to create Product Summary Tiles. These tiles meant that each product team had its own summary ahead of a series of planning sessions, giving them as much context (provided by the traceability exercise) as possible to help with planning. Essentially, these tiles provided teams with a comprehensive overview of their projects, i.e. what their users need, what certain policies require them to deliver, etc.

Phase 2: Ensuring delivery readiness 🙌🏻

Meera wanted every team to feel confident that they weren’t doing too much or too little in order to design and build the right service, the right way.

Standard design and research check-ins were well adopted, which was a great start, but Meera and the team also built a Delivery Readiness Tool to assess a team's readiness to move forward with a project. The tool included questions related to the development phase, user research, alignment with the business case, consideration of policy requirements, and more. Ultimately, it ensured that teams had considered all necessary factors before progressing further.

Phase 3: Building sustainable work practices 🍃

As the program progressed, several sustainable work practices emerged which Government executives were keen to retain going forward.

Some of these included:

  • ResearchOps Practice: The team established a research operations practice, streamlining research efforts and ensuring that ongoing research was conducted efficiently and effectively.
  • Consistent Design Artifacts: Templates and consistent design artifacts were created, reducing friction and ensuring that teams going forward started from a common baseline.
  • Design Authority and Ways of Working: A design authority was established to elevate and share best practices across the program.
  • Centralized and Decentralized Team Models: The program showcased the effectiveness of a combination of centralized and decentralized team models. A central design team provided guidance and support, while service design leads within specific service lines ensured alignment and consistency.

Why it matters 🔥

Meera's journey serves as a valuable resource for those working on complex design programs, emphasizing the significance of aligning diverse stakeholders and maintaining traceability. Alignment and traceability are critical to ensuring that programs never lose sight of the problem they’re trying to solve, both from the user and organization’s perspective. They’re also critical to delivering on time and within budget!

Traceability key takeaways 🥡

  • Early Alignment Matters: While early alignment is ideal, it's never too late to embark on a traceability journey. It can uncover gaps, increase confidence in decision-making, and ensure that the right services are delivered.
  • Identify and audit: You never know what artifacts will shape your journey. Identify everything early, and don’t be afraid to get clarity on things you’re not sure about.
  • Conducting traceability is always worthwhile: Even if you don’t find many gaps in your program, you will at least gain a high level of confidence that your delivery is focused on the right things.

Delivery readiness key takeaways 🥡

  • Skills Mix is Vital: Assess and adapt team member roles to match their skills and experiences, ensuring they are positioned optimally.
  • Not Everyone Shares the Same Passion: Recognize that not everyone will share the same level of passion for design and research. Make the relevance of these practices clear to all team members.

Sustainability key takeaways 🥡

  • One Size Doesn't Fit All: Tailor methodologies, templates, and practices to the specific needs of your organization.
  • Collaboration is Key: Foster a sense of community and collective responsibility within teams, encouraging shared ownership of project outcomes.

Author: Optimal Workshop

Related articles

Best UX Research Methods for Every Phase of Product Development

What is UX research?

User experience (UX) research, or user research as it’s commonly referred to, is an important part of the product design process. Primarily, UX research involves using different research methods to gather information about how your users interact with your product. It is an essential part of developing, building and launching a product that truly meets the requirements of your users. 

UX research is essential at all stages of a product’s life cycle:

  1. Planning
  2. Building
  3. Introduction
  4. Growth & Maturity

While there is no single best time to conduct UX research, it is best practice to continuously gather information throughout the lifetime of your product. The good news is that many UX research methods aren’t limited to just one phase and can (and should) be used repeatedly. After all, there are always new pieces of functionality to test and new insights to discover. We introduce you to best-practice UX research methods for each lifecycle phase of your product.

1. Product planning phase

While the planning phase is about creating a product that fits your organization and its needs, and that fills a gap in the market, it’s also about meeting the needs, desires, and requirements of your users. Through UX research you’ll learn which features are necessary to align with your users. And of course, user research lets you test your UX design before you build, saving you time and money.

Qualitative Research Methods

Usability Testing - Observational

One of the best ways to learn about your users and how they interact with your product is to observe them in their own environment. Watch how they accomplish tasks, the order they do things in, what frustrates them, and what makes a task easier and/or more enjoyable for them. The data can be collated to inform your product’s usability, improve intuitive design, and reveal what resonates with users.

Competitive Analysis

Reviewing products already in the market can be a great start to the planning process. Why are your competitors’ products successful, and how well do they work for users? Learn from their successes and, even better, build on areas where they may not be performing their best to find your niche in the market.

Quantitative Research Methods

Surveys and Questionnaires

Surveys are useful for collecting feedback or understanding attitudes. You can use the learnings from your survey of a subset of users to draw conclusions about a larger population of users.

There are two types of survey questions:

Closed questions are designed to capture quantitative information. Instead of asking users to write out answers, these questions often use multi-choice answers.

Open questions are designed to capture qualitative information such as motivations and context. Typically, these questions require users to write out an answer in a text field.

2. Product building phase

Once you've completed your product planning research, you’re ready to begin the build phase for your product. User research studies undertaken during the build phase enable you to validate the UX team’s deliverables before investing in the technical development.

Qualitative Research Methods

Focus groups

Focus groups generally involve 5-10 demographically similar participants. The study is set up so that members of the group can interact with one another, and it can be carried out in person or remotely.

Besides learning about the participants’ impressions and perceptions of your product, focus group findings also include what users believe to be a product’s most important features, problems they might encounter while using the product, as well as their experiences with other products, both good and bad.

Quantitative Research Methods

Card sorting gives insight into how users think, revealing where your users expect to find certain information or complete specific tasks. This is especially useful for products with complex or multiple navigation structures, and it contributes to the creation of an intuitive information architecture and user experience.

Tree testing gives insight into where users expect to find things and where they’re getting lost within your product, helping you test your information architecture.

Card sorting and tree testing are often used together. Depending on the purpose of your research and where you are at with your product, they can provide a fully rounded view of your information architecture.
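One common way to make sense of card-sort results is to count how often participants place pairs of cards in the same group; pairs that frequently co-occur are strong candidates to sit together in your navigation. A minimal sketch in Python, using made-up cards and sorts:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card sort: each participant's grouping of six cards.
sorts = [
    [{"Pricing", "Plans"}, {"Help", "FAQ"}, {"About", "Careers"}],
    [{"Pricing", "Plans", "FAQ"}, {"Help"}, {"About", "Careers"}],
    [{"Pricing", "Plans"}, {"Help", "FAQ", "About"}, {"Careers"}],
]

# Count how often each pair of cards lands in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

for pair, count in pair_counts.most_common(3):
    print(pair, count)
```

With real studies the same counts feed a similarity matrix or dendrogram, but the underlying idea is just this pairwise tally.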

3. Product introduction phase

You’ve launched your product, wahoo! And you’re ready for your first real life, real time users. Now it’s time to optimize your product experience. To do this, you’ll need to understand how your new users actually use your product.

Qualitative Research Methods

Usability testing involves testing a product with users. Typically it involves observing users as they try to follow and complete a series of tasks. As a result you can evaluate if the design is intuitive and if there are any usability problems.

User Interviews - A user interview is designed to get a deeper understanding of a particular topic. Unlike a usability test, where you’re more likely to be focused on how people use your product, a user interview is a guided conversation aimed at better understanding your users. This means you’ll be capturing details like their background, pain points, goals and motivations.

Quantitative Research Methods

A/B Testing is a way to compare two versions of a design in order to work out which is more effective. It’s typically used to test two versions of the same webpage, for example, using a different headline, image or call to action to see which one converts more effectively. This method offers a way to validate smaller design choices where you might not have the data to make an informed decision, like the color of a button or the layout of a particular image.
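As a rough illustration of how two variants are typically compared for statistical significance, here is a minimal two-proportion z-test in Python; the conversion counts are made-up numbers, not data from any real test:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of variants A and B.

    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical data: variant B's call to action converts better.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(round(z, 2), round(p, 4))  # a small p-value (below 0.05) suggests a real difference
```

Most A/B testing tools run a test like this behind the scenes; the point is that the decision comes from the statistics, not from eyeballing two conversion rates.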

First-click testing shows you where people click first when trying to complete a task on a website. In most cases, first-click testing is performed on a very simple wireframe of a website, but it can also be carried out on a live website using a first-click testing tool.

4. Growth and maturity phase

If you’ve reached the growth stage, fantastic news! You’ve built a great product that’s been embraced by your users. Next on your to-do list is growing your product by increasing your user base and then eventually reaching maturity and making a profit on your hard work.

Growing your product involves building new or advanced features to satisfy specific customer segments. As you plan and build these enhancements, go through the same research and testing process you used to create the first release. The same holds true for enhancements as well as a new product build — user research ensures you’re building the right thing in the best way for your customers.

Qualitative research methods

User interviews will focus on how your product is working or if it’s missing any features, enriching your knowledge about your product and users.

Interviews at this stage allow you to test your current features, discover possibilities for additional features, and consider discarding existing ones. If your customers aren’t using certain features, it might be time to stop supporting them to reduce costs and help you grow your profits during the maturity stage.

Quantitative research methods

Surveys and questionnaires can help gather information around which features will work best for your product, enhancing and improving the user experience. 

A/B testing during growth and maturity occurs within your sales and onboarding processes. Making sure you have a smooth onboarding process increases your conversion rate and reduces wasted spend — improving your bottom line.

Final Thoughts: Why Continuous UX Research Matters

UX research testing throughout the lifecycle of your product helps you continuously evolve and develop a product that responds to what really matters - your users.

Talking to, testing, and knowing your users will allow you to push your product in ways that make sense, with the data to back up decisions. Go forth and create the product that meets your organization’s needs by delivering the very best user experience for your users.

Tips for recruiting quality research participants

If there’s one universal truth in user research, it’s that at some point you’re going to need to find people to actually take part in your studies, be it a large number of participants for quantitative research or a select few for in-depth, in-person user interviews. Finding the right people (and the right number of people) can be a hurdle.

With the right strategy, you can source exactly the right participants for your next research project.

We share a practical step-by-step guide on how to find participants for user experience research.

The difficulties/challenges of user research recruiting 🏋️

It has to be acknowledged that there are challenges when recruiting research participants. You may recognize some of these:

  • There are so many channels and methods you can use to find participants, and different channels will work better for different projects.
  • Repeatedly using the same channels and methods will result in diminishing returns (i.e. burning out participants).
  • It’s a lengthy and complex process, and some projects don’t have the luxury of time.
  • Offering the right incentives and distributing them is time-consuming.
  • It’s hard to manage participants during long-term or recurring studies, such as customer research projects.

We’ll simplify the process, talk about who the right participants are, and unpack some of the best ways to find them. Removing these blocks can be the easiest way to move forward.

Who are the right participants for different types of research? 🤔

1. The first step to a successful participant recruitment strategy is clarifying the goals of your user research and which methods you intend to use. Ask yourself:

  • What is the purpose of our research?
  • How do we plan to understand that?

2. Define who your ideal research participant is. Who is going to have the answers to your questions?

3. Work out your research recruitment strategy. That starts by understanding the differences between recruiting for qualitative and quantitative research.

Recruiting for qualitative vs. quantitative research 🙋🏻

Quantitative research recruiting is a numbers game. For your data analysis to be meaningful and statistically significant, you need a lot of data. This means you need to do a lot of research with a lot of people. When recruiting for quantitative research, you first have to define the population (the entire group you want to study). From there, you choose a sampling method that allows you to create a sample—a randomly selected subset of the population who will participate in your study.
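As a toy sketch of that last step, drawing a simple random sample from a defined population might look like this in Python (the population and sample size are hypothetical):

```python
import random

# Hypothetical population: IDs of 10,000 newsletter subscribers we could study.
population = [f"user_{i}" for i in range(10_000)]

random.seed(7)  # fixed seed so this sketch is reproducible
sample = random.sample(population, k=250)  # simple random sampling, no replacement

print(len(sample), len(set(sample)))  # 250 participants, all unique
```

Simple random sampling is only one option; stratified or cluster sampling may fit better when you need subgroups represented, but the principle of selecting at random from a defined population stays the same.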

Qualitative recruiting involves far fewer participants, but you do need to find a selection of ‘perfect’ participants: those who fit neatly into the specific demographic, geographic, psychographic, and behavioral criteria relevant to your study. Recruiting quality participants for qualitative studies involves non-random sampling, screening, and plenty of communication.

How many participants do you need? 👱🏻👩👩🏻👧🏽👧🏾

How many participants to include in a qualitative research study is one of the most heavily discussed topics in user research circles. In most cases, you can get away with 5 people – that’s the short answer. With 5 people, you’ll uncover most of the main issues with the thing you’re testing. Depending on your research project there could be as many as 50 participants, but with each additional person, there is an additional cost (money and time).

Quantitative research is quite different. With studies like card sorts and tree tests, you need higher participant numbers to get statistically meaningful results: anywhere from 20 to 500 participants, again coming back to the purpose of your test and your research budget. These studies are usually easier and quicker to run, so the cost of each additional participant is lower.
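To see why those numbers matter, here is a rough sketch of how the 95% margin of error on a result shrinks as participant numbers grow; the 70% task-success rate is a made-up example:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p with n participants."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical tree-test result: 70% of participants found the right item.
for n in (20, 100, 500):
    print(n, round(margin_of_error(0.70, n), 3))
```

With 20 participants the uncertainty is around twenty percentage points either way; by 500 it drops to about four, which is why larger samples are worth the extra (low) per-participant cost.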

User research recruitment - step by step 👟

Let’s get into your research recruitment strategy to find the best participants for your research project. There are 5 clear steps to get you through to the research stage:

1. Identify your ideal participants

Who are they? What do they do? How old are they? Do they already use your product? Where do they live? These are all great questions to get you thinking about who exactly you need to answer your research questions. The demographic and geographic detail of your participants are important to the quality of your research results.

2. Screen participants

Screening participants will weed out those who may not be suitable for your specific project. This can be as simple as asking whether participants have used a product similar to yours, or coming back to your key demographic requirements and removing anyone who doesn’t fit these criteria.
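A screener like this boils down to a simple filter over responses. The criteria, field names, and respondents below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical screener responses collected from prospective participants.
respondents = [
    {"name": "A", "age": 34, "uses_similar_product": True,  "country": "NZ"},
    {"name": "B", "age": 17, "uses_similar_product": True,  "country": "NZ"},
    {"name": "C", "age": 42, "uses_similar_product": False, "country": "AU"},
    {"name": "D", "age": 29, "uses_similar_product": True,  "country": "AU"},
]

def passes_screener(r):
    """Keep adults in our target markets who use a similar product."""
    return r["age"] >= 18 and r["uses_similar_product"] and r["country"] in {"NZ", "AU"}

qualified = [r for r in respondents if passes_screener(r)]
print([r["name"] for r in qualified])  # ['A', 'D']
```

In practice the same logic usually lives inside a survey tool's screening questions rather than in code, but writing the criteria down this explicitly helps catch gaps before recruitment starts.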

3. Find prospective participants

This step is important and can be time-consuming. For qualitative research projects, you can look within your organization or ask over social media for willing participants. If you’re short on time, look at a participant recruitment service, which takes your requirements and has a catalog of available participants to call on. There’s a cost involved, but the time saved can negate this. For quantitative surveys, a great option can be a live intercept on your website or app that invites users to complete a short questionnaire.

4. Research incentives

In some cases you will need to provide incentives. This could be offering a prize or discount for those who complete short online surveys, or a fixed sum for those who take part in longer-format studies such as interviews.

5. Scheduling with participants

Once you have waded through the emails and responses to your inquiries, make a list of suitable participants. Schedule time to do the research, either in person or remotely, and be clear about expectations, how long it will take, and what the incentive for taking part is.

Tips to avoid participant burnout 📛

You’ve got your participants sorted and have a great pool of people to call on. But if you keep hitting the same group of people time and time again, you will experience the law of diminishing returns. Constantly returning to the same pool of participants eventually leads to fatigue, and this will impact the quality of your research because it’s based on interviewing the same people with the same views.

There are 2 ways to avoid this problem:

  1. Use a huge database of potential participant targets.
  2. Use a mixture of different recruitment strategies and channels.

Of course, it might be unavoidable to hit the same audience repeatedly when you’re testing your product development among your customer base.

Wrap up 🌯

Understanding your UX research recruitment strategy is crucial to recruiting quality participants. A clear idea of your purpose, who your ideal participants are, and how to find them takes time and experience. 

And to make life easier you can always leave your participant recruitment with us. With a huge catalog of quality participants all at your fingertips on our app, we can recruit the right people quickly.

Check out more here.

Clara Kliman-Silver: AI & design: imagining the future of UX

In the last few years, the influence of AI has steadily been expanding into various aspects of design. In early 2023, that expansion exploded. AI tools and features are now everywhere, and there are two ways designers commonly react to it:

  • With enthusiasm for how they can use it to make their jobs easier
  • With skepticism over how reliable it is, or even fear that it could replace their jobs

Google UX researcher Clara Kliman-Silver is at the forefront of researching and understanding the potential impact of AI on design into the future. This is a hot topic that’s on the radar of many designers as they grapple with what the new normal is, and how it will change things in the coming years.

Clara’s background 

Clara Kliman-Silver spends her time studying design teams and systems, UX tools and designer-developer collaboration. She’s a specialist in participatory design and uses generative methods to investigate workflows, understand designer-developer experiences, and imagine ways to create UIs. In this work, Clara looks at how technology can be leveraged to help people make things, and do it more efficiently than they currently are.

In today’s context, that puts generative AI and machine learning right in her line of sight. The way this technology has boomed in recent times has many people scrambling to catch up - to identify the biggest opportunities and to understand the risks that come with it. Clara is a leader in assessing the implications of AI. She analyzes both the technology itself and the way people feel about it to forecast what it will mean into the future.

Contact Details:

You can find Clara on LinkedIn or on Twitter @cklimansilver

What role should artificial intelligence play in the UX design process? 🤔

Clara’s expertise in understanding the role of AI in design comes from significant research and analysis of how the technology is currently being used and how industry experts feel about it. AI is everywhere in today’s world, from home devices to tech platforms and specific tools for various industries. In many cases, AI automation is used for productivity, where it can speed up processes with subtle, easy-to-use applications.

As mentioned above, the transformational capabilities of AI are met with equal parts of enthusiasm and skepticism. The way people use AI, and how they feel about it is important, because users need to be comfortable implementing the technology in order for it to make a difference. The question of what value AI brings to the design process is ongoing. On one hand, AI can help increase efficiency for systems and processes. On the other hand, it can exacerbate problems if the user's intentions are misunderstood.

Access for all 🦾

There’s no doubt that AI tools enable novices to perform tasks that, in years gone by, required a high level of expertise. For example, film editing was previously a manual task, where people would literally cut rolls of film and splice them together on a reel. It was something only a trained editor could do. Now, anyone with a smartphone has access to iMovie or a similar app, and they can edit film in seconds.

For film experts, digital technology allows them to speed up tedious tasks and focus on more sophisticated aspects of their work. Clara hypothesizes that AI is particularly valuable when it automates mundane tasks. AI enables more individuals to leverage digital technologies without requiring specialist training. Thus, AI has shifted the landscape of what it means to be an “expert” in a field. Expertise is about more than being able to simply do something - it includes having the knowledge and experience to do it for an informed reason. 

Research and testing 🔬

Clara performs a lot of concept testing, which involves assessing the perceived value of an approach or method. Concept testing helps in scenarios where a solution may not address a problem or where the real problem is difficult to identify. From a recent survey, Clara describes two predominant benefits designers experienced from AI:

  1. Efficiency. Not only does AI expedite the problem-solving process, it can also help efficiently identify problems.
  2. Innovation. Generative AI can innovate on its own, developing ideas that designers themselves may not have thought of.

The design partnership 🤝🏽

Overall, Clara says UX designers tend to see AI as a creative partner. However, most users don’t yet trust AI enough to give it complete agency over the work it’s used for. The level of trust designers have exists on a continuum, depending on the nature of the work and the context of what they’re aiming to accomplish. Other factors, such as where the tech comes from, who curated it, and who’s training the model, also influence trust. For now, AI is largely seen as a valued tool, and there is cautious optimism and tentative acceptance for its application.

Why it matters 💡

AI presents as potentially one of the biggest game-changers to how people work in our generation. Although AI has widespread applications across sectors and systems, there are still many questions about it. In the design world, systems like DALL-E allow people to create AI-generated imagery, and auto layout in various tools allows designers to iterate more quickly and efficiently.

Like many other industries, designers are wondering where AI might go in the future and what it might look like. The answer to these questions has very real implications for the future of design jobs and whether they will exist. In practice, Clara describes the current mood towards AI as existing on a continuum between adherence and innovation:

  • Adherence is about how AI helps designers follow best practice
  • Innovation is at the other end of the spectrum, and involves using AI to figure out what’s possible

The current environment is extremely subjective, and there’s no agreed best practice. This makes it difficult to recommend a certain approach to adopting AI and creating permanent systems around it. Both the technology and the sentiment around it will evolve through time, and it’s something designers, like all people, will need to maintain good awareness of.
