April 24, 2019
6 min

6 things to consider when setting up a research practice

With UX research so closely tied to product success, setting up a dedicated research practice is fast becoming important for many organizations. It’s not an easy process, especially for organizations that have had little to do with research, but the end goal is worth the effort.

But where exactly are you supposed to start? This article covers 6 key things to keep in mind when setting up a research practice, and should help ensure you’ve considered all of the relevant factors.

1) Work out what your organization needs

The first and simplest step is to take stock of the current user research situation within the organization. How much research is currently being done? Which teams or individuals are talking to customers on an ongoing basis? Consider whether there are any major pain points in the way research is currently carried out, or bottlenecks in getting research insights to the people who need them. If research isn't being practiced at all, identify teams or individuals that lack access to the resources they need, and consider ways to make insights available to them.

2) Consolidate your insights

UX research should be communicating with nearly every part of an organization, from design teams to customer support, engineering departments and C-level management. The insights that stem from user research are valuable everywhere. Of course, the opposite is also true: insights from support and sales are useful for understanding customers and how the current product is meeting people's needs.

When setting up a research practice, identify which teams you should align with, and then reach out. Sit down with these teams and explore how you can help each other. For your part, you’ll probably need to explain the what and why of user research within the context of your organization, and possibly even explain at a basic level some of the techniques you use and the data you can obtain.

Then, get in touch with other teams with the goal of learning from them. A good research practice needs a strong connection to other parts of the business with the express purpose of learning. For example, by working with your organization’s customer support team, you’ll have a direct line to some of the issues that customers deal with on a regular basis. A good working relationship here means they’ll likely feed these insights back to you, in order to help you frame your research projects.

Your sales team, similarly, can share the issues prospective customers are dealing with. You can follow up on this information with research, and feed the results into the development of your organization’s products.

It can also be fruitful to develop an insights repository, where researchers can store any useful insights and log research activities. This means that sales, customer support and other interested parties can access the results of your research whenever they need to.
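
To make this concrete, here’s a minimal sketch of what an insights repository could look like in code – a hypothetical in-memory store in Python, not any particular tool’s data model. In practice you’d likely use a dedicated research repository or a database.

```python
# A hypothetical, minimal insights repository: log findings with a source
# and tags so other teams can retrieve them later. Illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Insight:
    summary: str                 # one-line finding
    source: str                  # e.g. "usability test", "support", "sales"
    logged_on: date = field(default_factory=date.today)
    tags: list = field(default_factory=list)

repository = []

def log_insight(summary, source, tags):
    """Store an insight alongside the activity that produced it."""
    insight = Insight(summary, source, tags=tags)
    repository.append(insight)
    return insight

def insights_for(tag):
    """Let support, sales, and other interested teams pull relevant findings."""
    return [i for i in repository if tag in i.tags]

log_insight("First-time buyers stall on the checkout form",
            "usability test", ["checkout", "onboarding"])
for i in insights_for("checkout"):
    print(i.logged_on, "-", i.summary)
```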

When your research practice is tightly integrated with other key areas of the business, the organization is likely to see innumerable benefits from the insights-to-product loop.

3) Figure out which tools you will use

By now you’ve hopefully got an idea of how your research practice will fit into the wider organization – now it’s time to look at the ways in which you’ll do your research. We’re talking, of course, about research methods and testing tools.

We won’t get into every different type of method here (there are plenty of other articles and guides for that), but we will touch on the importance of qualitative and quantitative methods. If you haven’t come across these terms before, here’s a quick breakdown:

  • Qualitative research – Focused on exploration. It’s about discovering things we cannot measure with numbers, and often involves observing users or speaking with them directly in interviews.
  • Quantitative research – Focused on measurement. It’s all about gathering data and then turning this data into usable statistics.

All user research methods are designed to deliver either qualitative or quantitative data, and as part of your research practice, you should ensure that you always try to gather both types. By using this approach, you’re able to generate a clearer overall picture of whatever it is you’re researching.

Next comes the software. A solid stack of user research testing tools will help you to put research methods into practice, whether for the purposes of card sorting, carrying out more effective user interviews or running a tree test.

There are myriad tools available now, and it can be difficult to separate the useful software from the chaff. Here’s a list of research and productivity tools that we recommend.

Tools for research

Here’s a collection of research tools that can help you gather qualitative and quantitative data, using a number of methods.

  • Treejack – Tree testing can show you where people get lost on your website, and help you take the guesswork out of information architecture decisions. Treejack makes it easy to set up and run tree tests, and pairs this with in-depth analysis features.
  • dScout – Imagine being able to get video snippets of your users as they answer questions about your product. That’s dScout. It’s a video research platform that collects in-context “moments” from a network of global participants, who answer your questions either by video or through photos.
  • Ethnio – Like dScout, this is another tool designed to capture information directly from your users. It works by showing an intercept pop-up to people who land on your website; once they agree, they’re invited into a research activity, such as a screener, survey or scheduled session.
  • OptimalSort – Card sorting allows you to get perspective on whatever it is you’re sorting and understand how people organize information. OptimalSort makes it easier and faster to sort through information, and you can access powerful analysis features.
  • Reframer – Taking notes during user interviews and usability tests can be quite time-consuming, especially when it comes time to analyze the data. Reframer gives individuals and teams a single tool to store all of their notes, along with a set of powerful analysis features to make sense of their data.
  • Chalkmark – First-click testing can show you what people click on first in a user interface when they’re asked to complete a task. This is useful, as when people get their first click correct, they’re much more likely to complete their task. Chalkmark makes the process of setting up and running a first-click test easy. What’s more, you’re given comprehensive analysis tools, including a click heatmap.

Tools for productivity

These tools aren’t necessarily designed for user research, but can provide vital links in the process.

  • Whimsical – A fantastic tool for user journeys, flow charts and any other sort of diagram. It also solves one of the biggest problems with online whiteboards – finicky object placement.
  • Descript – Easily transcribe your interview and usability test audio recordings into text.
  • Google Slides – When it inevitably comes time to present your research findings to stakeholders, use Google Slides to create readable, clear presentations.

4) Figure out how you’ll track findings over time

With some idea of the research methods and testing tools you’ll be using to collect data, now it’s time to think about how you’ll manage all of this information. A carefully ordered spreadsheet and folder system can work – but only to an extent. Dedicated software is a much better choice, especially given that you can scale these systems much more easily.

A dedicated home for your research data serves a few distinct purposes. There’s the obvious benefit of being able to access all of your findings whenever you need them, which means it’s much easier to create personas if the need arises. A dedicated home also means your findings will remain accessible and useful well into the future.

When it comes to software, Reframer stands as one of the better options for creating a detailed customer insights repository as you’re able to capture your sessions directly in the tool and then apply tags afterwards. You can then easily review all of your observations and findings using the filtering options. Oh, and there’s obviously the analysis side of the tool as well.
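
The tag-then-filter workflow is simple enough to sketch. This isn’t Reframer’s actual API – just a hypothetical illustration in Python of capturing observations with tags and filtering them afterwards:

```python
# Hypothetical tagged observations from moderated sessions (illustrative only).
observations = [
    {"session": "P1", "note": "Hesitated on the nav menu", "tags": {"navigation", "hesitation"}},
    {"session": "P2", "note": "Found pricing immediately", "tags": {"navigation", "win"}},
    {"session": "P3", "note": "Gave up looking for the contact page", "tags": {"navigation", "failure"}},
]

def filter_by_tags(obs, *wanted):
    """Return only the observations that carry every requested tag."""
    return [o for o in obs if set(wanted) <= o["tags"]]

for o in filter_by_tags(observations, "navigation", "failure"):
    print(o["session"], "-", o["note"])
```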

If you’re looking for a way to store high-level findings – perhaps if you’re intending to share this data with other parts of your organization – then a tool like Confluence or Notion is a good option. These tools are basically wikis, and include capable search and navigation options too.

5) Where will you get participants from?

A pool of participants you can draw from for your user research is another important part of setting up a research practice. Whenever you need to run a study, you’ll have real people you can call on to test, ask questions and get feedback from.

This is where you’ll need to partner with other teams, likely sales and customer support. They’ll have direct access to your customers, so make sure to build a strong relationship with these teams. If you haven’t made introductions yet, it can be helpful to put together a one-page sheet explaining what UX research is and the benefits of working with your team.

You may also want to consider getting in some external help. Participant recruitment services are a great way to offload the heavy lifting of sourcing quality participants – often one of the hardest parts of the research process.

6) Work out how you'll communicate your research

Perhaps one of the most important parts of being a user researcher is taking the findings you uncover and communicating them back to the wider organization. By feeding insights back to product, sales and customer support teams, you’ll form an effective link between your organization and its customers. The benefits here are obvious: product teams can build products that actually address customer pain points, and sales and support teams will better understand the needs and expectations of customers.

Of course, it isn’t easy to communicate findings. Here are a few tips:

  • Document your research activities: With a clear record of your research, you’ll find it easier to pull out relevant findings and communicate these to the right teams.
  • Decide who needs what: You’ll probably find that certain roles (like managers) will be best served by a high-level overview of your research activities (think a one-page summary), while engineers, developers and designers will want more detailed research findings.


Related articles


Using paper prototypes in UX

In UX research we are told again and again that to ensure truly user-centered design, it’s important to test ideas with real users as early as possible. There are many benefits that come from introducing the voice of the people you are designing for in the early stages of the design process. The more feedback you have to work with, the more you can inform your design to align with real needs and expectations. In turn, this leads to better experiences that are more likely to succeed in the real world.

It is not surprising, then, that paper prototypes have become a popular tool among researchers. They allow ideas to be tested as they emerge, and can inform initial designs before putting in the hard yards of building the real thing. They would seem to be almost a no-brainer for researchers, but just like anything out there, along with all the praise they have also received a fair share of criticism, so let’s explore paper prototypes a little further.

What’s a paper prototype anyway? 🧐📖

Paper prototyping is a simple usability testing technique designed to test interfaces quickly and cheaply. A paper prototype is nothing more than a visual representation of what an interface could look like on a piece of paper (or even a whiteboard or chalkboard). Unlike high-fidelity prototypes that allow for digital interactions to take place, paper prototypes are considered to be low-fidelity, in that they don’t allow direct user interaction. They can also range in sophistication, from a simple sketch using pen and paper to simulate an interface, through to using design or publishing software to create a more polished experience with additional visual elements.

[Image: Different ways of designing paper prototypes, using OptimalSort as an example]

Showing a research participant a paper prototype is far from the real deal, but it can provide useful insights into how users may expect to interact with specific features and what makes sense to them from a basic, user-centered perspective. There are some mixed attitudes towards paper prototypes among the UX community, so before we make any distinct judgements, let's weigh up their pros and cons.

Advantages 🏆

  • They’re cheap and fast – Pen and paper, a basic Word document, Photoshop. With a paper prototype, you can take an idea and transform it into a low-fidelity (but workable) testing solution very quickly, without having to write code or use sophisticated tools. This is especially beneficial to researchers who work with tight budgets, and don’t have the time or resources to design an elaborate user testing plan.
  • Anyone can do it – Paper prototypes allow you to test designs without having to involve multiple roles in building them. Developers can take a back seat as you test initial ideas, before any code work begins.
  • They encourage creativity – From the product teams participating in their design, but also from the users. They require users to employ their imagination, and give them the opportunity to express their thoughts and ideas on what improvements can be made. Because they look unfinished, they naturally invite constructive criticism and feedback.
  • They help minimize your chances of failure – Paper prototypes and user-centered design go hand in hand. Introducing real people into your design process as early as possible can help verify whether you are on the right track, and generate feedback that may give you a good idea of whether your idea is likely to succeed.

Disadvantages 😬

  • They’re not as polished as interactive prototypes – If executed poorly, paper prototypes can appear unprofessional and haphazard. They lack the richness of an interactive experience, and if our users are not well informed when coming in for a testing session, they may be surprised to be testing digital experiences on pieces of paper.
  • The interaction is limited – Digital experiences can contain animations and interactions that can’t be replicated on paper. It can be difficult for a user to fully understand an interface when these elements are absent, and of course, the closer the interaction mimics the final product, the more reliable our findings will be.
  • They require facilitation – With an interactive prototype you can assign your user tasks to complete and observe how they interact with the interface. Paper prototypes, however, require continuous guidance from a moderator in communicating next steps and ensuring participants understand the task at hand.
  • Their results have to be interpreted carefully – Paper prototypes can’t emulate the final experience entirely. It is important to interpret their findings while keeping their limitations in mind. Although they can help minimize your chances of failure, they can’t guarantee that your final product will be a success. There are factors that determine success that cannot be captured on a piece of paper, and positive feedback at the prototyping stage does not necessarily equate to a well-received product further down the track.

Improving the interface of card sorting, one prototype at a time 💡

We recently embarked on a research project looking at the user interface of our card-sorting tool, OptimalSort. Our research has two main objectives — first of all to benchmark the current experience on laptops and tablets and identify ways in which we can improve the current interface. The second objective is to look at how we can improve the experience of card sorting on a mobile phone.

Rather than replicating the desktop experience on a smaller screen, we want to create an intuitive experience for mobiles, ensuring we maintain the quality of data collected across devices. Our current mobile experience is a scaled-down version of the desktop and still has room for improvement, but despite that, 9 percent of our users utilize the app. We decided to start from the ground up and test an entirely new design using paper prototypes.

In the spirit of testing early and often, we decided to jump right into testing sessions with real users. In our first testing sprint, we asked participants to take part in two tasks. The first was to perform an open or closed card sort on a laptop or tablet. The second task involved using paper prototypes to see how people would respond to the same experience on a mobile phone.


Context is everything 🎯

What did we find? In the context of our research project, paper prototypes worked remarkably well. We were somewhat apprehensive at first, trying to figure out the exact flow of the experience and whether the people coming into our office would get it. As it turns out, people are clever, and even those with limited experience using a smartphone were able to navigate and identify areas for improvement just as easily as anyone else. Some participants even said they preferred the experience of testing paper prototypes over a laptop. In an effort to make our prototype-based tasks easy to understand and easy to explain to our participants, we reduced the full card sort to a few key interactions, minimizing the number of branches in the UI flow.

This could explain a preference for the mobile task, where we only asked participants to sort through a handful of cards, as opposed to a whole set.

The main thing we found was that no matter how well you plan your test, paper prototypes require you to be flexible in adapting the flow of your session to however your user responds. We accepted that deviating from our original plan was something we had to embrace, and in the end these additional conversations with our participants helped us generate insights above and beyond the basics we aimed to address. We now have a whole range of feedback that we can utilize in making more sophisticated, interactive prototypes.

Whether our success with using paper prototypes was determined by the specific setup of our testing sessions, or simply by their pure usefulness as a research technique is hard to tell. By first performing a card sorting task on a laptop or tablet, our participants approached the paper prototype with an understanding of what exactly a card sort required. Therefore there is no guarantee that we would have achieved the same level of success in testing paper prototypes on their own. What this does demonstrate, however, is that paper prototyping is heavily dependent on the context of your assessment.

Final thoughts 💬

Paper prototypes are not guaranteed to work for everybody. If you’re designing an entirely new experience and trying to describe something complex in an abstracted form on paper, people may struggle to comprehend your idea. Even a careful explanation doesn’t guarantee that it will be fully understood by the user. Should this stop you from testing out the usefulness of paper prototypes in the context of your project? Absolutely not.

In a perfect world we’d test high-fidelity interactive prototypes that resemble the real deal as closely as possible, every step of the way. However, if we look at testing from a practical perspective, before we can fully test sophisticated designs, paper prototypes provide a great solution for generating initial feedback.

In his article criticizing the use of paper prototypes, Jake Knapp makes the point that when we show customers a paper prototype we’re inviting feedback, not reactions. What we found in our research, however, was quite the opposite.

In our sessions, participants voiced their expectations and understanding of what actions were possible at each stage, without us having to probe specifically for feedback. Sure, we also received general comments on icon or colour preferences, but for the most part our users gave us insights into what they felt throughout the experience, in addition to what they thought.



Collating your user testing notes

It’s been a long day. Scratch that - it’s been a long week! Admit it. You loved every second of it.

Twelve hour days, the mad scramble to get the prototype ready in time, the stakeholders poking their heads in occasionally, dealing with no-show participants and the excitement around the opportunity to speak to real life human beings about product or service XYZ. Your mind is exhausted but you are buzzing with ideas and processing what you just saw. You find yourself sitting in your war room with several pages of handwritten notes and with your fellow observers you start popping open individually wrapped lollies leftover from the day’s sessions. Someone starts a conversation around what their favourite flavour is and then the real fun begins. Sound familiar? Welcome to the post user testing debrief meeting.

How do you turn those scribbled notes and everything rushing through your mind into a meaningful picture of the user experience you just witnessed? And when you have that picture, what do you do next? Pull up a bean bag, grab another handful of those lollies we feed our participants and get comfy, because I’m going to share my idiot-proof, step-by-step guide for turning your user testing notes into something useful.

Let’s talk

Get the ball rolling by holding a post-session debrief meeting while it’s all still fresh in your collective minds. This can be done as one meeting at the end of the day’s testing, or you could have multiple quick debriefs in between testing sessions. Choose whichever option works best for you, but keep in mind this needs to be done at least once, and before everyone goes home and forgets everything. Get all observers and facilitators together in any meeting space that has a wall-like surface you can stick post-its to – you can even use a window! And make sure you use real post-its – the fake ones fall off!

Mark your findings (Tagging)

Before you put Sharpie to post-it, it’s essential to agree as a group on how you will tag your observations. Tagging the observations now will make the analysis work much easier and help you to spot patterns and themes. Colour-coding the post-its is by far the simplest and most effective option, and how you assign the colours is entirely up to you. You could have a different colour for each participant or testing session, you could use different colours to denote participant attributes that are relevant to your study (e.g. senior staff and junior staff), or you could use different colours to denote specific testing scenarios. There are many ways you could carve this up, and there’s no right or wrong way. Just choose the option that suits you and your team best, because you’re the ones who have to look at it and understand it. If you only have one colour of post-it (e.g. yellow), you could colour-code the pens you write with, or include some kind of symbol to help you track them.

Processing the paper (Collating)

That pile of paper is not going to process itself! Your next job as a group is to work through the task of transposing your observations to post-it notes. For now, just stick them to the wall in any old way that suits you. If you’re the organising type, you could group them by screen or testing scenario. The positioning will all change further down the process, so at this stage it’s important to just keep it simple. For issues that occur repeatedly across sessions, write each occurrence on its own post-it – doubles will be useful to see further down the track.

In addition to holding debrief meetings, you also need to round up everything that was used to capture the testing sessions. And I mean EVERYTHING.

Handwritten notes, typed notes, video footage and any audio recordings need to be reviewed just in case something was missed. Any handwritten notes should be typed up to assist you with the completion of the report. Don’t feel that you have to wait until the testing is completed before you start typing up your notes, because you will find they pile up very quickly, and if your handwriting is anything like mine… well, let’s just say my short-term memory is often required to pick up the slack, and even that has its limits. Type them up in between sessions where possible, and save each session as its own document. I’ll often use the testing questions or scenario-based tasks to structure my typed notes, and I find that makes it really easy to refer back to.

Now that you’ve processed all the observations, it’s time to start sorting them to surface behavioural patterns and make sense of it all.

Spotting patterns and themes through affinity diagramming

Affinity diagramming is a fantastic tool for making sense of user testing observations. In fact, it’s just about my favourite way to make sense of any large mass of information. It’s an engaging and visual process that grows and evolves like a living creature taking on a life of its own. It also builds on the work you’ve just done, which is a real plus!

By now, testing is over and all of your observations should be stuck to a wall somewhere. Get everyone together again as a group, step back and take it all in. Just let it sit with you for a moment before you dive in. Just let it breathe. Have you done that? OK, now, as individuals working at the same time, start by grouping things that you think belong together. It’s important to just focus on the content of the labels and try to ignore the colour-coded tagging at this stage – so if session one was blue post-its, don’t group all the blue ones together just because they’re all blue! If you get stuck, try grouping by topic, or create two groups (e.g. issues and wins) and then chunk the information up from there.

You will find that the groups will change several times over the course of the process, and that’s ok, because that’s what it needs to do. While you do this, everyone else will be doing the same thing – grouping things that make sense to them. Trust me, it’s nowhere near as chaotic as it sounds! You may start working as individuals, but it won’t be long before curiosity kicks in and the room is buzzing with naturally occurring conversation. Make sure you take a step back regularly and observe what everyone else is doing, and don’t be afraid to ask questions and move other people’s post-its around – no one owns it! No matter how silly something may seem, just put it there, because it can be moved again.

Have a look at where your tagged observations have ended up. Are there clusters of colour? Or is it more spread out? What that means will depend largely on how you decided to tag your findings. For example, if you assigned each testing session its own colour and you have groups with lots of different colours in them, you’ll know the same issue was experienced by multiple people.

Next, start looking at each group and see if you can break it down into smaller groups, and at the same time consider the overall picture for the bigger groups – can the wall be split into, say, three high-level groups? Remember, you can still change your groups at any time.

Thinning the herd (Merging)

Once you and your team are happy with the groups, it’s time to start condensing the size of this beast. Look for doubled-up findings and stack those post-its on top of each other to cut the groups down – just make sure you can still see how many there were. The point of merging is to condense without losing anything, so don’t remove something just because it only happened once. That one issue could be incredibly serious. Continue to evaluate and discuss as a group until you are happy. By now, clear and distinct groups of your observations should have emerged, and at a glance you should be able to identify the key findings from your study.
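
If your observations are also typed up, the same merge-but-keep-the-count step is easy to mirror digitally. Here’s a minimal sketch in Python, assuming each observation has already been reduced to a short label (the findings below are made up):

```python
# Collapse duplicate observations into one entry per finding, keeping a
# count so repeated issues stay visible (hypothetical data).
from collections import Counter

observations = [
    "Missed the help button",
    "Missed the help button",
    "Expected search at top right",
    "Missed the help button",
    "Confused by the tab labels",
]

for finding, count in Counter(observations).most_common():
    print(f"{count}x  {finding}")
```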

A catastrophe or a cosmetic flaw? (Scoring)

Scoring relates to how serious the issues are, and how bad the consequences of not fixing them would be. There are arguments for and against the use of scoring, and it’s important to recognise that it is just one way to communicate your findings.

I personally rarely use scoring systems. It’s not really something I think about when I’m analysing the observations, and I rarely rank one problem or finding over another. Why? Because all data is good data, and it all adds to the overall picture. I’ve always been a huge advocate for presenting the whole story, and I will never diminish the significance of a finding by boosting another. That said, I do understand the perspective of those who place metrics around their findings. Other designers have told me they feel that it allows them to quantify the seriousness of each issue and help their client, designer or boss make decisions about what to do next.

We’ve all got our own ways of doing things, so I’ll leave it up to you to choose whether or not you score the issues. If you do decide to score your findings, there are a number of scoring systems you can use; if I had to choose one, I quite like Jakob Nielsen’s methodology for the simple way it takes multiple factors into consideration. Ultimately you should choose the one that suits your working style best.

Let’s say you did decide to score the issues. Start by writing down each key finding on its own post-it and move to a clean wall or window, leaving your affinity diagram where it is. Divide the new wall in half: one side for wins (findings that indicate things that tested well) and the other for issues. You don’t need to score the wins, but you do need to acknowledge what went well, because knowing what you’re doing well is just as important as knowing where you need to improve. As a group (wow, you must be getting sick of each other! Make sure you go out for air from time to time!), score the issues based on your chosen methodology.

Once you have completed this entire process, you will have everything you need to write a kick-ass report.
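
For teams that do score, here’s a tiny sketch of what that last pass might look like using Nielsen’s published 0–4 severity scale, with made-up findings: splitting wins from issues, then sorting the issues by severity.

```python
# Jakob Nielsen's 0-4 severity ratings for usability problems.
NIELSEN_SCALE = {
    0: "not a usability problem",
    1: "cosmetic problem only",
    2: "minor usability problem",
    3: "major usability problem",
    4: "usability catastrophe",
}

# Hypothetical key findings pulled off the affinity diagram.
findings = [
    {"finding": "Participants praised the onboarding tour", "win": True},
    {"finding": "Checkout button invisible on mobile", "win": False, "severity": 4},
    {"finding": "Icon colour felt dated", "win": False, "severity": 1},
]

wins = [f["finding"] for f in findings if f["win"]]
issues = sorted((f for f in findings if not f["win"]),
                key=lambda f: f["severity"], reverse=True)

print("Wins:", wins)
for f in issues:  # most severe first
    print(f"[{f['severity']}] {NIELSEN_SCALE[f['severity']]}: {f['finding']}")
```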

What could possibly go wrong? (and how to deal with it)

No process is perfect and there are a few potential dramas to be aware of:

People jumping into solution mode too early

In the middle of the debrief meeting, someone has an epiphany. Shouts of “We should move the help button!” or “We should make the yellow button smaller!” ring out, and the meeting goes off the rails.

I’m not going to point fingers and blame any particular role, because we’ve all done it, but it’s important to recognise that’s not why we’re sitting here. The debrief meeting is about digesting and sharing what you and the other observers just saw. Observing and facilitating user testing is a privilege. It’s a precious thing that deserves respect, and if you jump into solution mode too soon, you may miss something. Keep the conversation on track by appointing a team member to facilitate the debrief meeting.

Storage problems

Handwritten notes taken by multiple observers over several days of testing add up to an enormous pile of paper. Not only is it a ridiculous waste of paper, but the notes have to be securely stored for three months following the release of the report. It’s not pretty. Typing them up can solve that issue, but it comes with its own set of storage-related hurdles. Just like the handwritten notes, typed notes need to be stored securely. They don’t belong on SharePoint, the shared drive or any other shared storage environment that can be accessed by people outside your observer group. User testing notes are confidential, and they are not light reading for anyone and everyone, no matter how much they complain. Store any typed notes in a limited-access storage solution that only the observers can access, and if anyone who shouldn’t be reading them asks, tell them that they are confidential and the integrity of the research must be preserved and respected.

Time issues

Before the storage dramas begin, you have to actually pick through the mountain of paper – not to mention the video footage and audio – and you have to chase up that sneaky observer who disappeared when the clock struck 5. All of this takes up a lot of time. Another time-related issue comes in the form of too much time passing between testing sessions and debrief meetings. The best way to deal with both of these issues is to be super organised and hold multiple smaller debriefs in between sessions where possible. As a group, work out your time commitments before testing begins and have a clear plan in place for when you will meet. This will prevent everything piling up and overwhelming you at the end.

Disagreements over scoring

At the end of that long day/week we’re all tired and discussions around scoring the issues can get a little heated. One person’s showstopper may be another person’s mild issue. Many of the ranking systems use words as well as numbers to measure the level of severity and it’s easy to get caught up in the meaning of the words and ultimately get sidetracked from the task at hand. Be proactive and as a group set ground rules upfront for all discussions. Determine how long you’ll spend discussing an issue and what you will do in the event that agreement cannot be reached. People want to feel heard and they want to feel like their contributions are valued. Given that we are talking about an iterative process, sometimes it’s best just to write everything down to keep people happy and merge and cull the list in the next iteration. By then they’ve likely had time to reevaluate their own thinking.

And finally...

We all have our own ways of making sense of our user testing observations and there really is no right or wrong way to go about it. The one thing I would like to reiterate is the importance of collaboration and teamwork. You cannot do this alone, so please don’t try. If you’re a UX team of one, you probably already have a trusted person that you bounce ideas off. They would be a fantastic person to do this with. How do you approach this process? What sort of challenges have you faced? Let me know in the comments below.


Radical Collaboration: how teamwork really can make the dream work

Natalie and Lulu have forged a unique team culture that focuses on positive outputs (and outcomes) for their app’s growing user base. In doing so, they turned the traditional design approach on its head and created a dynamic and supportive team. 

Natalie, Director of Design at Hatch, and Lulu, UX Design Specialist, recently spoke at UX New Zealand, the leading UX and IA conference in New Zealand hosted by Optimal Workshop, on their concept of “radical collaboration”.

In their talk, Nat and Lulu share their experience of growing a small app into a big player in the finance sector, and their unique approach to teamwork and culture which helped achieve it.

Background on Natalie Ferguson and Lulu Pachuau

Over the last two decades, Lulu and Nat have delivered exceptional customer experiences for too many organizations to count. After Nat co-founded Hatch, she begged Lulu to join her on their audacious mission: To supercharge wealth building in NZ. Together, they created a design and product culture that inspired 180,000 Kiwi investors to join in just 4 years.

Contact Details:

Email: natalie@sixfold.co.nz

LinkedIn: https://www.linkedin.com/in/natalieferguson/ and https://www.linkedin.com/in/lulupach/

Radical Collaboration - How teamwork makes the dream work 💪💪💪

Nat and Lulu discuss how they nurtured a team culture of “radical collaboration” when growing the hugely popular app Hatch, based in New Zealand. Hatch allows everyday New Zealanders to quickly and easily trade in the U.S. share market. 

The beginning of the COVID pandemic spelled huge growth for Hatch and caused significant design challenges for the product. This growth meant that the app had to grow from a baby startup to one that could operate at scale - virtually overnight. 

In navigating this challenge, Nat and Lulu coined the term radical collaboration, which aims to “dismantle organizational walls and supercharge what teams achieve”. Radical collaboration has six key pillars, which they discuss alongside their experience at Hatch.

Pillar #1: When you live and breathe your North Star

Listening to hundreds of their customers’ stories, combined with their own personal experiences with money, compelled Lulu and Nat to change how their users view money. And so, “Grow the wealth of New Zealanders” became a powerful mission statement, or North Star, for Hatch. The mission was to give people the confidence and the ability to live their own lives with financial freedom and control. Nat and Lulu express the importance of truly believing in the mission of your product, and how this can become a guiding light for any team. 

Pillar #2: When you trust each other so much, you’re happy to give up control

As Hatch grew rapidly, trusting each other became more and more important. Nat and Lulu state that sometimes you need to take a step back and stop fueling growth for growth’s sake. It was at this point that Nat asked Lulu to join the team, and Nat’s first request was for Lulu to be super critical about the product design to date - no feedback was out of bounds. Letting go, feeling uncomfortable, and trusting your team can be difficult, but sometimes it’s what you need in order to drag yourself out of status quo design. This resulted in a brief hiatus from frantic delivery to take stock and reprioritize what was important - something that can be difficult without heavy doses of trust!

Pillar #3: When everyone wears all the hats

During their journey, the team at Hatch heard lots of stories from their users. Many of these stories were heard during “Hatcheversery Calls”, where team members would call users on their sign-up anniversary to chat about their experience with the app. Some of these calls were inspiring, insightful, and heartwarming.

Everyone at Hatch made these calls – designers, writers, customer support, engineers, and even the CEO. Speaking to strangers in this way was a challenge for some, especially since it was common to field technical questions about the business. Nevertheless, asking staff to wear many hats like this turned the entire team into researchers and analysts. By pushing everyone outside their comfort zones, the team forced each other to see the whole picture of the business, not just their own little piece.

Pillar #4: When you do what’s right, not what’s glam

In an increasingly competitive industry, designers and developers are often tempted to consistently deliver new and exciting features. In response to rapid growth, rather than adding more features to the app, Lulu and Nat made a conscious effort to really listen to their customers to understand what problems they needed solving. 

As it turned out, filing overseas tax returns was a significant and common problem for their customers - it was difficult and expensive. So, the team at Hatch devised a tax solution. This solution was developed by the entire team, with almost no tax specialists involved until the very end! This process was far from glamorous and it often fell outside of standard job descriptions. However, the team eventually succeeded in simplifying a notoriously difficult process and saved their customers a massive headache.

Pillar #5: When you own the outcome, not your output

Over time Hatch’s user base changed from being primarily confident, seasoned investors, to being first-time investors. This new user group was typically scared of investing and often felt that it was only a thing wealthy people did.

At this point, Hatch felt it was necessary to take a step back from delivering updates to take stock of their new position. This meant deeply understanding their customers’ journey from signing up, to making their first trade. Once this was intimately understood, the team delivered a comprehensive onboarding process which increased the sign-up conversion rate by 10%!

Pillar #6: When you’re relentlessly committed to making it work

Nat and Lulu describe a moment when Allbirds wanted to work with Hatch to allow ordinary New Zealanders to be involved in their IPO launch on the New York Stock Exchange. Again, this task faced numerous tax and trade law challenges, and offering the service seemed like yet another insurmountable task. The team at Hatch nearly gave up several times during this project, but everyone was determined to get the feature across the line – and they did. As a result, New Zealanders were some of the few regular investors from outside the U.S. who were able to take part in Allbirds’ IPO.

Why it matters 💥

Over four years, Hatch grew to 180,000 users who collectively invested over $1bn. Nat and Lulu’s success underscores the critical role of teamwork and collaboration in achieving exceptional user experiences. Product teams should remember that in the rapidly evolving tech industry, it's not just about delivering the latest features; it's about fostering a positive and supportive team culture that buys into the bigger picture.

The Hatch team grew to be more than team members and technical experts. They grew in confidence and appreciated every moving part of the business. Product teams can draw inspiration from Hatch's journey, where designers, writers, engineers, and even the CEO actively engaged with users, challenged traditional design decisions, and prioritized solving actual user problems. This approach led to better, more user-centric outcomes and a deep understanding of the end-to-end user experience.

Most importantly, through the good times and tough, the team grew to trust each other. The mission weaved its way through each member of the team, which ultimately manifested in positive outcomes for the user and the business.

Nat and Lulu’s concept of radical collaboration led to several positive outcomes for Hatch:

  • It changed the way they did business. Information was no longer held in the minds of a few individuals – instead, it was shared. People were able to step into other people's roles seamlessly. 
  • Hatch achieved better results faster by focusing on the end-to-end experience of the app, rather than by adding successive features. 
  • The team became more nimble – potential design/development issues were anticipated earlier because everyone knew what the downstream impacts of a decision would be.

Over the next week, Lulu and Nat encourage designers and researchers to get outside of their comfort zone and:

  • Visit the customer support team
  • Pick up the phone and call a customer
  • Challenge status quo design decisions. Ask, does this thing solve an end-user problem?
