February 20, 2024
3 min

Meera Pankhania: From funding to delivery - Ensuring alignment from start to finish

It’s a chicken and egg situation when it comes to securing funding for a large transformation program in government. On one hand, you need to submit a business case and, as part of that, you need to make early decisions about how you might approach and deliver the program of work. On the other hand, you need to know enough about the problem you are going to solve to ensure you have sufficient funding to understand the problem better, hire the right people, design the right service, and build it the right way. 

Now imagine securing hundreds of millions of dollars to design and build a service, but not feeling confident about what the user needs are. What if you had the opportunity to change this common predicament and influence your leadership team to carry out alignment activities, all while successfully delivering within the committed time frames?

Meera Pankhania, Design Director and Co-founder of Propel Design, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, on traceability and her learnings from delivering a $300 million Government program.

In her talk, Meera helps us understand how to use service traceability techniques in our work and apply them to any environment - ensuring we design and build the best service possible, no matter the funding model.

Background on Meera Pankhania

As a design leader, Meera is all about working on complex, purpose-driven challenges. She helps organizations take a human-centric approach to service transformation and helps deliver impactful, pragmatic outcomes while building capability and leading teams through growth and change.

Meera co-founded Propel Design, a strategic research, design, and delivery consultancy in late 2020. She has 15 years of experience in service design, inclusive design, and product management across the private, non-profit, and public sectors in both the UK and Australia. 

Meera is particularly interested in policy and social design. After a stint in the Australian Public Service, Meera was appointed as a senior policy adviser to the NSW Minister for Customer Service, Hon. Victor Dominello MP. In this role, she played a part in NSW’s response to the COVID pandemic, flexing her design leadership skills in a new, challenging, and important context.

Contact Details:

Email address: meera@propeldesign.com.au

Find Meera on LinkedIn  

From funding to delivery: ensuring alignment from start to finish 🏁🎉👏

Meera’s talk explores a fascinating case study within the Department of Employment Services (Australia), where a substantial investment of around $300 million set the stage for a transformative journey. This funding supported the delivery of a revamped Employment Services Model, with the goal of delivering better services to job seekers and employers, and a better system for providers. The project had a focus on aligning teams prior to delivery, which meant a huge amount of groundwork for Meera.

Her journey involved engaging various stakeholders within the department, including executives, to understand the program as a whole and what exactly needed to be delivered. “Traceability” became the watchword for the project, which unfolded in three phases:

  • Phase 1: Aligning key deliverables
  • Phase 2: Ensuring delivery readiness
  • Phase 3: Building sustainable work practices

Phase 1: Aligning key deliverables 🧮

Research and discovery (pre-delivery)

Meera’s work initially meant conducting extensive research and engagement with executives, product managers, researchers, designers, and policymakers. Through this process, a common theme was identified – the urgent (and perhaps misguided) need to start delivering! Often, organizations focus on obtaining funding without adequately understanding the complexities involved in delivering the right services to the right users, leading to half-baked delivery.

After this initial research, some general themes started to emerge:

  1. Assumptions were made that still needed validation
  2. Teams weren’t entirely sure that they understood the user’s needs
  3. Teams lacked a holistic understanding of how much research and design was needed

The conclusion of this phase was that “what” needed to be delivered wasn’t clearly defined. The same was true for “how” it would be delivered.

Traceability

Meera’s journey heavily revolved around the concept of “traceability” and sought to ensure that every step taken within the department was aligned with the ultimate goal of improving employment services. Traceability meant having a clear origin and development path for every decision and action taken. This is particularly important when spending taxpayer dollars!

So, over the course of eight weeks (which ultimately stretched much longer), the team combed through documents in an effort to bring everything together and make sense of the program as a whole. This involved planning, user journey mapping, and testing and refinement.

Documenting Key Artifacts

Numerous artifacts and documents played a crucial role in shaping decisions. Meera and her team gathered and organized these artifacts, including policy requirements, legislation, business cases, product and program roadmaps, service maps, and blueprints. The team also included prior research insights and vision documents which helped to shape a holistic view of the required output.

After an effort of combing through the program documents and laying everything out, it became clear that there were a lot of gaps and a LOT to do.

Prioritising tasks

As a result of these gaps, a process of task prioritization was necessary. Tasks were categorized according to a series of factors and then mapped against things like user touchpoints, pain points, features, business policy, and technical capabilities.

This then enabled Meera and the team to create Product Summary Tiles. These tiles gave each product team its own summary ahead of a series of planning sessions, with as much context (provided by the traceability exercise) as possible to help with planning. Essentially, the tiles provided teams with a comprehensive overview of their projects, e.g. what their users need, what certain policies require them to deliver, and so on.

Phase 2: Ensuring delivery readiness 🙌🏻

Meera wanted every team to feel confident that they weren’t doing too much or too little to design and build the right service, the right way.

Standard design and research check-ins were well adopted, which was a great start, but Meera and the team also built a Delivery Readiness Tool, used to assess a team’s readiness to move forward with a project. The tool included questions related to the development phase, user research, alignment with the business case, consideration of policy requirements, and more. Ultimately, it ensured that teams had considered all necessary factors before progressing further.

Phase 3: Building sustainable work practices 🍃

As the program progressed, several sustainable work practices emerged which Government executives were keen to retain going forward.

Some of these included:

  • ResearchOps Practice: The team established a research operations practice, streamlining research efforts and ensuring that ongoing research was conducted efficiently and effectively.
  • Consistent Design Artifacts: Templates and consistent design artifacts were created, reducing friction and ensuring that teams going forward started from a common baseline.
  • Design Authority and Ways of Working: A design authority was established to elevate and share best practices across the program.
  • Centralized and Decentralized Team Models: The program showcased the effectiveness of a combination of centralized and decentralized team models. A central design team provided guidance and support, while service design leads within specific service lines ensured alignment and consistency.

Why it matters 🔥

Meera's journey serves as a valuable resource for those working on complex design programs, emphasizing the significance of aligning diverse stakeholders and maintaining traceability. Alignment and traceability are critical to ensuring that programs never lose sight of the problem they’re trying to solve, both from the user and organization’s perspective. They’re also critical to delivering on time and within budget!

Traceability key takeaways 🥡

  • Early Alignment Matters: While early alignment is ideal, it's never too late to embark on a traceability journey. It can uncover gaps, increase confidence in decision-making, and ensure that the right services are delivered.
  • Identify and audit: You never know what artifacts will shape your journey. Identify everything early, and don’t be afraid to get clarity on things you’re not sure about.
  • Conducting traceability is always worthwhile: Even if you don’t find many gaps in your program, you will at least gain a high level of confidence that your delivery is focused on the right things.

Delivery readiness key takeaways 🥡

  • Skills Mix is Vital: Assess and adapt team member roles to match their skills and experiences, ensuring they are positioned optimally.
  • Not Everyone Shares the Same Passion: Recognize that not everyone will share the same level of passion for design and research. Make the relevance of these practices clear to all team members.

Sustainability key takeaways 🥡

  • One Size Doesn't Fit All: Tailor methodologies, templates, and practices to the specific needs of your organization.
  • Collaboration is Key: Foster a sense of community and collective responsibility within teams, encouraging shared ownership of project outcomes.


The Evolution of UX Research: Digital Twins and the Future of User Insight

Introduction

User Experience (UX) research has always been about people. How they think, how they behave, what they need, and—just as importantly—what they don’t yet realise they need. Traditional UX methodologies have long relied on direct human input: interviews, usability testing, surveys, and behavioral observation. The assumption was clear—if you want to understand people, you have to engage with real humans.

But in 2025, that assumption is being challenged.

The emergence of digital twins and synthetic users—AI-powered simulations of human behavior—is changing how researchers approach user insights. These technologies claim to solve persistent UX research problems: slow participant recruitment, small sample sizes, high costs, and research timelines that struggle to keep pace with product development. The promise is enticing: instantly accessible, infinitely scalable users who can test, interact, and generate feedback without the logistical headaches of working with real participants.

Yet, as with any new technology, there are trade-offs. While digital twins may unlock efficiencies, they also raise important questions: Can they truly replicate human complexity? Where do they fit within existing research practices? What risks do they introduce?

This article explores the evolving role of digital twins in UX research—where they excel, where they fall short, and what their rise means for the future of human-centered design.

The Traditional UX Research Model: Why Change?

For decades, UX research has been grounded in methodologies that involve direct human participation. The core methods—usability testing, user interviews, ethnographic research, and behavioral analytics—have been refined to account for the unpredictability of human nature.

This approach works well, but it has challenges:

  1. Participant recruitment is time-consuming. Finding the right users—especially niche audiences—can be a logistical hurdle, often requiring specialised panels, incentives, and scheduling gymnastics.
  2. Research is expensive. Incentives, moderation, analysis, and recruitment all add to the cost. A single usability study can run into tens of thousands of dollars.
  3. Small sample sizes create risk. Budget and timeline constraints often mean testing with small groups, leaving room for blind spots and bias.
  4. Long feedback loops slow decision-making. By the time research is completed, product teams may have already moved on, limiting its impact.

In short: traditional UX research provides depth and authenticity, but it’s not always fast or scalable.

Digital twins and synthetic users aim to change that.

What Are Digital Twins and Synthetic Users?

While the terms digital twins and synthetic users are sometimes used interchangeably, they are distinct concepts.

Digital Twins: Simulating Real-World Behavior

A digital twin is a data-driven virtual representation of a real-world entity. Originally developed for industrial applications, digital twins replicate machines, environments, and human behavior in a digital space. They can be updated in real time using live data, allowing organisations to analyse scenarios, predict outcomes, and optimise performance.

In UX research, human digital twins attempt to replicate real users' behavioral patterns, decision-making processes, and interactions. They draw on existing datasets to mirror real-world users dynamically, adapting based on real-time inputs.

Synthetic Users: AI-Generated Research Participants

While a digital twin is a mirror of a real entity, a synthetic user is a fabricated research participant—a simulation that mimics human decision-making, behaviors, and responses. These AI-generated personas can be used in research scenarios to interact with products, answer questions, and simulate user journeys.

Unlike traditional user personas (which are static profiles based on aggregated research), synthetic users are interactive and capable of generating dynamic feedback. They aren’t modeled after a specific real-world person, but rather a combination of user behaviors drawn from large datasets.

Think of it this way:

  • A digital twin is a highly detailed, data-driven clone of a specific person, customer segment, or process.
  • A synthetic user is a fictional but realistic simulation of a potential user, generated based on behavioral patterns and demographic characteristics.

Both approaches are still evolving, but their potential applications in UX research are already taking shape.
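To make the distinction concrete, here is a toy Python sketch of a synthetic user: a fictional persona sampled from aggregated behavioral distributions rather than a profile of any one real person. All names, parameters, and numbers here are hypothetical illustrations, not any real product’s API or model.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration: a "synthetic user" is sampled from aggregated
# behavioral data, not cloned from a specific real person (that would be
# closer to a digital twin).
@dataclass
class SyntheticUser:
    age_band: str
    tech_confidence: float  # 0.0 (low) to 1.0 (high), a made-up trait

    def completes_task(self, task_difficulty: float) -> bool:
        # Success probability falls as difficulty outpaces confidence.
        p = max(0.0, min(1.0, self.tech_confidence - task_difficulty + 0.5))
        return random.random() < p

def generate_cohort(n: int, seed: int = 42) -> list[SyntheticUser]:
    # Sample a cohort from (entirely invented) population distributions.
    rng = random.Random(seed)
    bands = ["18-24", "25-44", "45-64", "65+"]
    return [
        SyntheticUser(age_band=rng.choice(bands),
                      tech_confidence=rng.uniform(0.2, 0.9))
        for _ in range(n)
    ]

# Simulate a usability "test" at a scale impractical with real participants.
cohort = generate_cohort(1000)
completion_rate = sum(u.completes_task(0.6) for u in cohort) / len(cohort)
print(f"Simulated completion rate: {completion_rate:.0%}")
```

The sketch shows both the appeal (a thousand "participants" in milliseconds) and the core limitation discussed below: the output can only reflect the distributions you encoded, never genuinely surprising behavior.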

Where Digital Twins and Synthetic Users Fit into UX Research

The appeal of AI-generated users is undeniable. They can:

  • Scale instantly – Test designs with thousands of simulated users, rather than just a handful of real participants.
  • Eliminate recruitment bottlenecks – No need to chase down participants or schedule interviews.
  • Reduce costs – No incentives, no travel, no last-minute no-shows.
  • Enable rapid iteration – Get user insights in real time and adjust designs on the fly.
  • Generate insights on sensitive topics – Synthetic users can explore scenarios that real participants might find too personal or intrusive.

These capabilities make digital twins particularly useful for:

  • Early-stage concept validation – Rapidly test ideas before committing to development.
  • Edge case identification – Run simulations to explore rare but critical user scenarios.
  • Pre-testing before live usability sessions – Identify glaring issues before investing in human research.

However, digital twins and synthetic users are not a replacement for human research. Their effectiveness is limited in areas where emotional, cultural, and contextual factors play a major role.

The Risks and Limitations of AI-Driven UX Research

For all their promise, digital twins and synthetic users introduce new challenges.

  1. They lack genuine emotional responses.
    AI can analyse sentiment, but it doesn’t feel frustration, delight, or confusion the way a human does. UX is often about unexpected moments—the frustrations, workarounds, and “aha” realisations that define real-world use.
  2. Bias is a real problem.
    AI models are trained on existing datasets, meaning they inherit and amplify biases in those datasets. If synthetic users are based on an incomplete or non-diverse dataset, the research insights they generate will be skewed.
  3. They struggle with novelty.
    Humans are unpredictable. They find unexpected uses for products, misunderstand instructions, and behave irrationally. AI models, no matter how advanced, can only predict behavior based on past patterns—not the unexpected ways real users might engage with a product.
  4. They require careful validation.
    How do we know that insights from digital twins align with real-world user behavior? Without rigorous validation against human data, there’s a risk of over-reliance on synthetic feedback that doesn’t reflect reality.

A Hybrid Future: AI + Human UX Research

Rather than viewing digital twins as a replacement for human research, the best UX teams will integrate them as a complementary tool.

Where AI Can Lead:

  • Large-scale pattern identification
  • Early-stage usability evaluations
  • Speeding up research cycles
  • Automating repetitive testing

Where Humans Remain Essential:

  • Understanding emotion, frustration, and delight
  • Detecting unexpected behaviors
  • Validating insights with real-world context
  • Ethical considerations and cultural nuance

The future of UX research is not about choosing between AI and human research—it’s about blending the strengths of both.

Final Thoughts: Proceeding With Caution and Curiosity

Digital twins and synthetic users are exciting, but they are not a magic bullet. They cannot fully replace human users, and relying on them exclusively could lead to false confidence in flawed insights.

Instead, UX researchers should view these technologies as powerful, but imperfect tools—best used in combination with traditional research methods.

As with any new technology, thoughtful implementation is key. The real opportunity lies in designing research methodologies that harness the speed and scale of AI without losing the depth, nuance, and humanity that make UX research truly valuable.

The challenge ahead isn’t about choosing between human or synthetic research. It’s about finding the right balance—one that keeps user experience truly human-centered, even in an AI-driven world.

This article was researched with the help of Perplexity.ai. 


Understanding UI design and its principles

Wireframes. Mockups. HTML. Fonts. Elements. Users. If you’re familiar with user interface design, these terms will be your bread and butter. An integral part of any website or application, user interface design is also arguably one of the most important parts. This is because your design is what your users see and interact with. If your site or app functions poorly and looks terrible, that’s what your users are going to remember.

But aren’t UX design and UI design the same thing? Or is there just an extremely blurred line between the two? What’s involved in UI design and, more importantly, what makes good design?

What is UI design exactly?

If you’re wondering how to test UI on your website, it’s a good idea to first learn some of the differences between UX and UI design. Although UI design and UX design look similar when written down, they’re actually two totally separate things. However, they should most definitely complement each other.

UX design, according to Nielsen Norman Group, “encompasses all aspects of the end-user's interaction with the company, its services, and its products.” Meanwhile, UI design focuses more on the user’s interaction with a system: its overall design, look, and feel. The two still sound similar, right? For those of you still trying to wrap your head around the difference, Nielsen Norman Group has a great analogy on its site that helps explain it:

"As an example, consider a website with movie reviews. Even if the UI for finding a film is perfect, the UX will be poor for a user who wants information about a small independent release if the underlying database only contains movies from the major studios.”

This just goes to show the complementary relationship between the two and why it’s so important. The user interface was popularized in the early 1970s, thanks in part to the Xerox Alto, an early personal computer developed at Xerox PARC and dubbed “the origin of the PC”. This machine used icons, multiple windows, a mouse, and email, which meant that design principles were needed to create consistency going forward. It was here that human-centred UI was born. UI design also covers graphical user interface (GUI) design. A GUI is the software or interface that acts as the medium between a user and the computer.

It uses a number of graphical elements, such as screen cursors, menus, and icons, so that users can easily navigate a system. This, too, stemmed from Xerox’s work in the late 1970s and early 1980s. Since then, UI has developed quickly, and so have its design principles. When the Xerox Alto was first born, its designers came up with eight design principles. These were:

  • Metaphorically digitize the desk environment
  • Operating on display instead of entering on keyboard
  • What you see is what you get
  • Universal but fewer commands
  • Same operation for the same job at different places
  • Operating computers as easily as possible
  • No need to transfer to different jobs
  • System customized as desired by users

Over time, these principles have evolved and now you’ll likely find many more added to this list. Here are just a few of the most important ones identified in “Characteristics of graphical and web user interfaces” by Wilbert Galitz.

UI design principles:

Principle #1: Clarity

Usability.gov says that the “best interfaces are almost invisible to the user”. Everything in the system, from visual elements and functions to text, needs to be clear and simple. This includes layout as well as the words used — stay away from jargon and complex terms or analogies that users won’t understand. Aesthetic appeal also fits into this principle. Ensure colors and graphics are used in a simple manner, and elements are grouped in a way that makes sense.

Principle #2: Consistency

For consistency, the system should have the same or similar functions, uses, and look throughout. For example, the same color scheme should be used throughout an app, or the terminology on a website should be consistent from page to page. Users should also have an idea of what to expect when they use your system. As an example, picture a retail shopping app. You’d expect that any other retail shopping app out there will have similar basic functions: a place to log in or create an account, account settings, a way to navigate and browse stock, a way to purchase stock at the press of a button. However, this doesn’t mean copying another app or website exactly; there should just be enough consistency that users know what to expect when they encounter your system. Apple even states that an “app should respect its users and avoid forcing them to learn new ways to do things for no other reason than to be different”.

Principle #3: Flexibility and customizability

Is there more than one way people can access your system and its functions? Can people perform tasks in a number of different ways, too? Providing your users with a flexible system means people are more in control of what they’re doing. Galitz mentions this can also be achieved by allowing system customization. Don’t forget about use on other kinds of devices, too. In a time when Google uses mobile-friendliness as a ranking signal, and research from Ericsson shows smartphones accounted for 75% of all mobile phone sales in Q4 2015, you know that being flexible is important.

Examples of good UI design

For a list of some of the best user interface examples, check out last year’s Webby Awards category for Best Interface Design. The 2016 category winner was the Reuters TV Web App, while the People’s Choice winner was AssessYourRisk.org. As an aside, this is only the second year that the Webby Awards has included this category — which just goes to show how important good UI design is! While you don’t want your site or application to look exactly the same as these winners, you still want yours to function well and be aesthetically pleasing.

To help you get there, there are a number of UI design tools and UI software available. Here’s a list of some of the many out there:

  • UXPin - An online UI design tool that allows you to create wireframes, mockups, and prototypes all on one platform.
  • Balsamiq - A simple mockup tool for wireframing, which allows users to test out ideas at an early stage of interface design.
  • InVision - A prototyping and collaboration tool. More in-depth than Balsamiq, it allows you to go from mockup to high fidelity in minutes.
  • Atomic - An interface design tool that allows you to design in your browser and collaborate with others on your projects.

Have you got any favorite UI design examples, or tips for beautiful design? We’d love to see them — comment below and let us know!


The importance of roles in making meaningful project experiences

In this post, Daniel Szuc describes why it’s important to see our roles as more than just our job titles. By exploring the other roles outlined below, we can all play a part in gluing a team together, making project work easier for everyone, and creating a more positive environment for making meaningful project experiences.

Collaboration is hard and needs practice 🙂↔️

“Collaboration” is a term that gets thrown around in workplaces to encourage people to work together better. Sometimes, though, the people using the term may not understand the range of skills required to make collaboration work well, including (but not limited to) listening, expression, empathy, and curiosity.

Each of these skills requires practice.

So asking people to simply collaborate, without understanding the skills required nor the necessary spaces to practice these skills, may well frustrate people more than it helps.

Misalignment 😤

As work hums along in a team, it’s easy for misalignment to creep in. Misalignments are caused by a lack of communication, limited time, poor project management, and micro/macro issues that are addressed too late, causing friction between people. If specific roles are not put in place, these frictions can create difficult work environments, making coming to work unpleasant.

Teams may lack common artifacts to help them communicate with a shared language, which in turn helps connect a project and business narrative together. Importantly, this helps aggregate what a team learns from customer interviews to improve a product or service. In effect, there is no light leading the way, so people can get lost in details that have nothing to do with a common and well-understood purpose.

Roles beyond a job title 👔

When we speak about roles, we are not referring to traditional job titles such as project manager, developer, and designer. Rather, we mean roles that everyone can play at various points in a project, helping others do their job well and the team deliver on making meaningful experiences. These roles, beyond job titles or the tasks inherent in those titles, help people think in integrated and holistic ways.

At times, our work requires that we delve deeply into design details; in other situations, we are required to step back and see how all the elements of our work connect in delivering solutions that are part of a broader narrative. As members of teams, we can work more effectively – whether by advancing ideas or by recognizing when it’s time to consider alternative approaches.

Four roles for making meaningful experiences 🎢

We have identified four roles to encourage making meaningful experiences for the team and customers, as well as to encourage integrated ways of working:

  1. Facilitators can define approaches that guide the process of informing, sense-making, and evaluating. They can craft agendas for working sessions and identify what problems need attention. Facilitators can also manage interactions between functions, aggregate a team’s learnings, and map these learnings to shared artifacts. They identify themes that require further study and set goals for the team’s next sessions.
  2. Mentors need to be aware of approaches and skills that require ongoing development and practice, and organize safe spaces in which people can practice them, over and over, during working sessions and across projects. Mentors should work closely with facilitators and custodians to identify the knowledge that the team has captured and map it to a learning program for team members, with a focus on informing, sense-making, and evaluating.
  3. Connectors create artifacts that help bridge gaps and make interactions between people feel more fluid, connecting people’s skills and roles.
  4. Custodians maintain the knowledge base that forms over time and leverage it in creating approaches and courses that help project teammates improve at what they do.

Practicing shared skills within roles ⚙️

Independent of whether a person works in management, engineering, product management, design, user research, or some other function, there is a common set of skills that people need to remain aware of: skills that help make our project teams’ collective efforts better. Because there is an intention to integrate ways of working, collective learning makes teamwork effective and results in more meaningful experiences. Working sessions, in which people from different teams or functions come together to solve a problem, provide a common space to focus on that problem, define approaches to help solve it, and work through issues together.

A team can identify the skills they practice, reflect on any gaps that may require them to expand their practice, and aggregate their learnings in common artifacts. These then help form and guide a project narrative that the team resonates with or can critique. In understanding the ways in which we work together – in essence, developing empathy for each other – we may see benefits beyond the work we produce. One such benefit could be moving away from a blind focus on tools and processes towards a primary focus on how we approach our work together and how we think about problems within the context of a project.

The ways in which we interact with each other suggest that we should look at the following roles, again independent of function or job title:

  1. Informing a problem - What evidence or learnings have we gained to date? What outstanding questions do we need to answer? How would the answers inform the solution to a problem we’re solving now or over time?
  2. Making sense of the data we have - How can we make sense of our learnings as they pertain to specific questions or larger themes that we need to understand and for which we need to design solutions over time?
  3. Evaluating designs - How can we evaluate designs and iteratively improve a product or service and its positioning over time?

Questions for future consideration 💭

  • What roles resonate with you more?
  • What roles do you think are missing?
  • What skills do you need to practice in order to help your team make more meaningful experiences?
  • What skills do you think are missing?
  • What gaps, if any, do you recognize between roles on project teams?
  • What frictions exist on a team and why do you think they occur?
  • How can customer interviews – as one approach to understanding customer stories – encourage constant cycles of informing, sense-making, and learning in the spirit of the learning organisation, so as to help glue team practices together and create integrated ways of working?

Acknowledgements

Thanks to Josephine Wong for contributing to this piece. For more, see Integrated Approaches to Constant Personal Learning, Improvement, and Maturity.
