The list of things we can do for growth is unlimited.

Worse, we each operate in a world where many of our tests for growth will fall flat. In fact, I’ve never met ANYONE who doesn’t fail the majority of their growth experiments.

Is your process for growth to try stuff and see what sticks? Unfortunately, that almost never works. Running tests is the right idea, but HOW you test matters.

In this post, I’ll show you how to 2X your growth experiments.

After going through a way to think about growth tests, I’ll share a SIMPLE Experiment Spreadsheet I’ve used with clients.

For growth teams smaller than ~5 people, too much process is likely an overoptimization. That said, this spreadsheet includes some suggested columns to help you prioritize growth tests even if you’re experimenting on your own.

How to think about growth experiments: start small

This seems counterintuitive. It’s natural to want to focus on the high impact, high effort activities we expect to offer us larger results.

But with growth, your big ideas are JUST as likely to fail as your smaller ideas.

From what I’ve seen working with over a hundred new products, it’s hard to start by going big. If you’re focused on moving that one metric this month, I’d be skeptical of any test longer than a couple days.

To find the most leverage, focus as much as you can on the high-impact, LOW-EFFORT activities.

[Image: growth test framework]

As your growth goals get harder to hit every month, you can start to come in more prepared for the bigger experiments.

Here’s what the process looks like.

Case Study #1: Solving activation at Teachable

At Teachable we had the HARD problem of onboarding new users.

It’s not easy to convince someone to move all their content over to a completely new platform and start selling courses online with our product.

[Image: Teachable activation]

The challenge? Well first, we didn’t know for sure WHO exactly our best users were.

And with an aggressive monthly growth goal as our constraint, we couldn’t risk 3-4 weeks on an onboarding redesign that might not work if we were going to hit that target.

Rather than going for the big investment in our onboarding, we focused on the lowest-hanging fruit. We used a process to prioritize smaller experiments we could complete much faster, like:

– Email automation
– In-app chat
– Surveys
– Webinars
– An onboarding screen

Taking each of these lower-effort approaches got us immediate feedback, which we then plowed into the larger redesign when we eventually did it.

Let’s go deeper into how this way of thinking works with email automation.

Rather than spending 2-3 weeks writing and implementing a sophisticated email funnel, we shipped our first version in 2 full days: a 7-email drip sequence, plus a few extra emails triggered by events such as creating a course, getting a first student, and earning first revenue.
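If you’re curious what that kind of setup looks like in practice, here’s a minimal sketch of a drip sequence plus event-triggered emails. Every name in it (the send_email helper, the templates, the event names) is hypothetical; in reality you’d configure this inside your email tool rather than hand-roll it.

```python
# Minimal sketch of a time-based drip plus event-triggered emails.
# All names (send_email, templates, events) are hypothetical.
from dataclasses import dataclass

@dataclass
class User:
    email: str
    days_since_signup: int

# 7-email drip: (days after signup, template name)
DRIP_SEQUENCE = [
    (0, "welcome"), (1, "getting_started"), (3, "create_your_course"),
    (5, "pricing_tips"), (7, "case_study"), (10, "webinar_invite"),
    (14, "check_in"),
]

# One-off emails triggered by product events
EVENT_EMAILS = {
    "course_created": "congrats_course_created",
    "first_student": "first_student_tips",
    "first_revenue": "first_revenue_celebration",
}

def send_email(to: str, template: str) -> None:
    print(f"sending '{template}' to {to}")  # stand-in for your email provider's API

def run_daily_drip(user: User) -> None:
    # Called once a day per user; sends whichever drip email is due today.
    for day, template in DRIP_SEQUENCE:
        if day == user.days_since_signup:
            send_email(user.email, template)

def handle_event(user: User, event: str) -> None:
    # Called whenever the app emits a product event for a user.
    if event in EVENT_EMAILS:
        send_email(user.email, EVENT_EMAILS[event])

run_daily_drip(User("creator@example.com", 3))
handle_event(User("creator@example.com", 3), "first_student")
```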

While this worked and helped us hit our goal that month, we also learned something. It turns out Teachable had a variety of use cases, from independent entrepreneurs to celebrities and businesses. So we set up a survey to better segment new signups:

[Image: onboarding survey]

Now we could target our emails toward very specific types of users, like those WITH content AND an audience. Our product could deliver immediate value to these folks, so we could jump on the phone with them or ask for a reply.

THEN, for the users who weren’t the best fit yet (those with NO content and NO audience), we could send them to blog content and courses to help them solve their main challenge.
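The routing logic itself is simple. Here’s an illustrative sketch; the segment labels and actions are mine, not Teachable’s exact setup:

```python
# Hypothetical sketch of routing new signups by their survey answers.
def route_signup(has_content: bool, has_audience: bool) -> str:
    if has_content and has_audience:
        return "high-touch: ask for a reply, offer a call"
    if has_content or has_audience:
        return "targeted drip for their use case"
    return "education track: blog posts and courses"

print(route_signup(True, True))    # best-fit user, immediate value
print(route_signup(False, False))  # not the best fit yet
```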

Another example.

Case Study #2: Solving Activation at Fomo

Here’s another case study from Ryan Kulp who runs Fomo.

For context, Fomo helps ecommerce companies increase conversions by adding social proof. It’s a company Ryan acquired and helped grow from $20k to $30k MRR in 90 days.

[Image: ecom conversions]

One of the challenges Ryan faced initially at Fomo was getting newly signed-up users to implement the Fomo code on their site.

Their target audience is often marketers and site owners who may not be very technical. After implementing their current onboarding, they realized they needed to make the process easier.

But rather than dropping everything to rebuild the “right” onboarding experience, Ryan tested a very simple solution.

He added a P.S. to the bottom of their first onboarding email that linked directly to a page where a new user could connect a Zapier account and finish the implementation:

[Image: welcome email]

The outcome? A spike in the number of Zapier connections following signup.

[Image: Fomo Zapier connections]

Based on a database query, Fomo’s implementation rate jumped from 12.6% to 25% after this simple change.
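For reference, a check like this is a one-query job. Here’s an illustrative sketch against a toy SQLite database; the users table and its columns are made up, since Fomo’s actual schema isn’t public:

```python
# Illustrative implementation-rate query: the share of signups who
# went on to install the snippet. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, signed_up_at TEXT,
                        installed_at TEXT);
    INSERT INTO users VALUES
        (1, '2017-01-02', '2017-01-03'),
        (2, '2017-01-05', NULL),
        (3, '2017-01-09', '2017-01-09'),
        (4, '2017-01-12', NULL);
""")

rate = conn.execute(
    "SELECT 100.0 * SUM(installed_at IS NOT NULL) / COUNT(*) FROM users"
).fetchone()[0]
print(f"implementation rate: {rate:.1f}%")  # 50.0% on this toy data
```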

PLUS, more responses from happy users like this:

[Image: thank-you note from a Fomo user]

Could users have missed the P.S. in this email? Certainly, and it could have been a failed experiment. However, by focusing on an immediate solution, they now have room to build on it and optimize further.

Now that Fomo knows this email works in their updated onboarding, they can really start to optimize it: perhaps a more obvious link in the email, or follow-up emails to those who haven’t connected. THEN, when they have more resources, they can plow what they’ve learned into a new onboarding.

Now that we’re on the same page about the TYPE of experiments to run, you’ll be MUCH further along.

Smaller experiments are all well and good on their own, but paired with the process below, I’ve seen a significant increase in the pace of quality tests launched.

By having SOME process around prioritization, you can launch more meaningful experiments and improve your rate of learning…

Here’s the Simple Process to Run More Growth Tests

Something you intuitively know: there’s a downside to too much process.

Not only is it a pain in the a** that makes it harder to get buy-in from a team, it can also slow you down.

This is exactly the opposite of what you want. A complex growth process is an overoptimization.

But you also DO NOT want to avoid process.

The goal is to be able to zoom out to focus on what matters, rather than just throw stuff out to see what sticks.

With a simple process, you can get more team buy-in and feedback on tests, and improve your rate of learning.

Here’s the simple process that has worked for me, which I’ve used over the past four years working with dozens of early-stage teams:

Step 1: Set ONE main goal

Okay, so we already discussed this in setting ONE focus, but I’ll reiterate: without a single meaningful goal, this stuff is way too hard. Read or save the post above, or don’t keep reading.

Step 2: Write out ALL your growth ideas

This step is all about getting the ideas out of your head.

It helps to batch this work so you aren’t analyzing as you go, just writing out a long list of ideas for how to improve your one metric.

This GrowHack Experiment Spreadsheet was designed as a SIMPLE place to put these ideas down and make them easy to analyze afterwards (which we’ll discuss in the next step).

[Image: growth experiment spreadsheet]

What can work well is to share this document with others on your team. This can help you avoid groupthink and give a voice to those with good ideas but less forceful personalities.

Step 3: Prioritize by Impact and Ease (IE)

Within the above spreadsheet, you’ll notice I’ve only included a way to prioritize ideas by Impact and Ease.

What I’ll sometimes do in the column is write out the specific improvement I might expect or the specific resources I might use for an experiment. That said, keep it simple.

There are a variety of ways you can prioritize growth ideas. Two frameworks that come to mind are ICE (impact, confidence, ease) and PIE (potential, importance, ease). These might be right for you.

However, for any growth experiment I’ve seen run, confidence is nearly impossible to assess until you have something live.

The risk in your growth process is more correlated to the size of your experiment. When we talk about the confidence level of an experiment, what we’re really trying to assess is risk. However, given we’re operating in an environment of high uncertainty, ALL of our tests are risky.

To keep your process simple, prioritize by impact and ease only (IE). By keeping your test size small, you’ll learn faster and validate whether it makes sense to keep investing in an approach.
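If you ever want to sanity-check your rankings outside the spreadsheet, IE scoring is trivial to reproduce in code. A minimal sketch, assuming 1-10 scales for impact and ease; the scales and the example ideas are illustrative, not from the spreadsheet:

```python
# Minimal sketch of Impact x Ease (IE) prioritization.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int  # 1-10: expected lift on your ONE metric
    ease: int    # 1-10: 10 = a couple of days, 1 = a multi-week project

ideas = [
    Idea("Onboarding redesign", impact=9, ease=2),
    Idea("P.S. link in welcome email", impact=6, ease=9),
    Idea("Signup survey", impact=5, ease=7),
]

# Rank by the product of impact and ease: high-impact, low-effort first.
for idea in sorted(ideas, key=lambda i: i.impact * i.ease, reverse=True):
    print(f"{idea.impact * idea.ease:>3}  {idea.name}")
```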

Step 4: Quickly implement

If you’ve done the first three steps right, implementation becomes dramatically faster and easier. You can now more quickly prioritize and implement your growth tests.

It’s proper form to have a metric, or at least a qualitative bet, in mind before you run the test, along with a way to measure it so you can keep score. Use your own internal process to actually run the tests.

Step 5: Analyze & learn

Now it’s time to look back on the experiment and assess the outcome. Were you surprised by the results? Why or why not?

The worst growth ideas come from not learning from experiments. Pausing to really do this step well is important: the more you learn, the stronger the hypotheses you’ll focus on next.

If you’re collaborating with a team, consider a weekly meeting focused just on discussing what’s been learned.

That’s it!

I hope you actually apply this way of thinking (and use the spreadsheet) to run more growth tests.

Let me know if you have any questions below the post!
