When should you end a new initiative that’s not performing? How many failed attempts are necessary to prove it is unlikely to ever work? Marketers constantly face this challenge as they test new landing pages, keywords and other elements of their marketing campaigns – many of which fail right out of the gate.
If you have historical data against which you can compare your new initiative, I have created a simple formula and table for answering these questions. Let’s use two common examples from Google AdWords campaigns.
First, let’s say you are testing a new landing page to see if you can improve your conversion rates. Your previous landing page converted 10% of clicks into web leads. How many failed clicks (or visits) to your new landing page are necessary before you shut it down? It depends on how confident you want to be that you are not jumping the gun. After 29 visits with no conversions, you can be 95% confident that your new landing page will underperform your old one. If 80% confidence is all you need, then 16 visits will suffice.
Similarly, let’s say you are testing a new keyword to see if you can achieve conversion rates on par with the 25% that your other keywords are able to achieve. That will take just six failed clicks to achieve 80% confidence and five more to achieve 95% confidence.
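The thresholds in these two examples follow from a simple fact: if each visit independently converts with probability p, the chance that n visits in a row all fail is (1 − p)^n. A short Python sketch (the function name is mine, not from the original) reproduces the numbers above:

```python
import math

def failures_needed(p, target):
    """Smallest number of consecutive failed visits after which our
    confidence, 1 - (1 - p)**n, reaches at least `target`."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

print(failures_needed(0.10, 0.80))  # landing page at 10%: 16 visits
print(failures_needed(0.10, 0.95))  # 29 visits
print(failures_needed(0.25, 0.80))  # keyword at 25%: 6 clicks
print(failures_needed(0.25, 0.95))  # 11 clicks, i.e. five more
```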
Here are the data points for scenarios where an 80% confidence level is acceptable:
- 80 failed visits if we expect a 2% conversion rate
- 32 failed visits if we expect a 5% conversion rate
- 23 failed visits if we expect a 7% conversion rate
- 16 failed visits if we expect a 10% conversion rate
- 10 failed visits if we expect a 15% conversion rate
- 8 failed visits if we expect a 20% conversion rate
- 6 failed visits if we expect a 25% conversion rate
The formula is fairly straightforward and, absent additional data points such as sample sizes and standard deviations, represents a decent approximation you can apply to your campaigns. Your confidence level after one failed attempt equals your expected success rate. For each subsequent failed attempt, your confidence level increases by your expected conversion rate multiplied by one minus the confidence level from the prior attempt. Equivalently, after n failures your confidence is 1 − (1 − p)^n, where p is the expected conversion rate. If that sounds confusing, you can rely on the accompanying table as an easy reference. The left column shows the number of failed attempts, such as visits to your website that did not convert. The top row shows your expected success (conversion) rate. The values in the body of the table are the confidence levels you have reached.
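The step-by-step update described above can be written out directly, and it agrees with the closed-form expression 1 − (1 − p)^n; this is a minimal sketch with a function name of my own choosing:

```python
def confidence_after_failures(p, n):
    """Confidence after n consecutive failures, built up one attempt
    at a time: start at 0, then each failure adds p * (1 - prior)."""
    conf = 0.0
    for _ in range(n):
        conf += p * (1 - conf)
    return conf  # algebraically equal to 1 - (1 - p)**n

# At a 10% expected conversion rate, 16 failed visits pass 80% confidence.
print(round(confidence_after_failures(0.10, 16), 3))
```

The loop and the closed form give the same answer because each failure multiplies the remaining uncertainty, (1 − conf), by another factor of (1 − p).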
These data points are only valid while every attempt so far has failed. As soon as you have even a single success, you will need slightly more sophisticated statistical models.
In full disclosure, while I am decent at math and have taken one (yes, just one) stats course, I am not a statistician. If you are a statistician and can provide a more exact formula or a proof of my formula, I would greatly appreciate your comments below!