A cognitive bias is a common tendency to process information by filtering it through one’s own likes, dislikes, and experiences. In this article I will explain how cognitive biases affect our ability to judge the results of a split test, and show you an easy way to avoid making incorrect decisions based on misleading data.
The beauty of digital advertising lies in its quantifiable nature. The ability to attribute real results to an advertising investment allows digital advertisers to make informed decisions when assigning budget and defining strategy. Measuring the results of digital marketing campaigns helps us drive traffic towards the areas that provide the best return on investment, highest conversion rate, or lowest cost per acquisition, depending on the objective.
Having all this data is a blessing, but you must be careful not to make strategic decisions based on statistically insignificant data. You may appear to have identified a clear winner in your ad creative or landing page split test, but how can you be sure that your analysis of the results isn’t skewed by your cognitive biases? How can you be sure that the results aren’t due to chance alone?
When it comes to analysing the results of your split test experiments, a whole host of cognitive biases come into play. Most notably for split testing, confirmation bias will make you more likely to accept results which support your hypothesis. Take the split test results below – which ad variant do you think is the winner?
[Chart: click through rate by ad variant]
Variant B has a click through rate of 5.6%, quite a bit higher than variant A's 4.93%. The sample size is also relatively large, with over 1,000 impressions for each ad. At first glance variant B seems to be the clear winner, but in reality no clear winner can be identified from these results. Ending the test now and declaring variant B the winner would be a poor decision.
We’ve established that human nature affects our ability to judge the significance of test results. Now we need to know how to safeguard against this, and the answer lies in asking the right questions. Let’s start from the beginning of the split testing process.
“What am I going to split test?”
Ad creative split testing is a method of statistical hypothesis testing. When creating an ad split test, you should always start with a hypothesis. This might look something like: “I predict that adding a time-sensitive call to action will increase click through rate”. This is known as your alternative hypothesis.
Acting on our alternative hypothesis, we might decide to test two ad variants:
Ad variant A: Contains call to action “Buy Online!”
Ad variant B: Contains call to action “Buy Online Now!”
“Which ad variant gets clicked the most?”
To answer this, we rotate the ad variants, serving them evenly to our target audience, and measure their click through rates. Let’s assume that one ad has a higher click through rate than the other.
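As a quick illustration, click through rate is simply clicks divided by impressions. A minimal sketch in Python (the click and impression counts below are hypothetical, chosen only to roughly match the CTRs in our example):

```python
def click_through_rate(clicks, impressions):
    """CTR = clicks / impressions, expressed as a percentage."""
    return 100 * clicks / impressions

# Hypothetical counts for illustration only
ctr_a = click_through_rate(50, 1014)   # variant A: ~4.93%
ctr_b = click_through_rate(58, 1036)   # variant B: ~5.60%
print(f"Variant A: {ctr_a:.2f}%  Variant B: {ctr_b:.2f}%")
```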
“Could the difference in click through rate be due to chance alone?”
To answer this question, we need to test our alternative hypothesis against the null hypothesis, which states that “click through rate is the same for both ad variants, and any difference in reported click through rate is due to chance alone.” We do this by calculating the confidence rating of our results.
Calculating the confidence rating of our split test results requires some rather advanced mathematics. Fortunately, tools exist to perform these calculations automatically. Essentially, these tools work by determining the result’s P-value. The P-value represents the probability of observing a result at least as extreme as the observed result, assuming the null hypothesis is true. In ad creative split testing, the ‘observed result’ is the difference between variant A’s and variant B’s click through rates.
In other words, if the P-value is low, the difference in click through rate between ad variant A and ad variant B is unlikely to be due to chance alone. Instead, the difference is likely due to our alternative hypothesis: that the time-sensitive call to action in variant B resulted in an increased click through rate.
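Under the hood, calculators like these typically use something along the lines of a two-proportion z-test. A minimal sketch of that calculation (the click and impression counts are hypothetical, chosen to roughly match the example CTRs, so the output won't exactly reproduce the confidence rating discussed below):

```python
import math

def split_test_confidence(clicks_a, imps_a, clicks_b, imps_b):
    """Two-tailed two-proportion z-test.
    Returns (p_value, confidence), where confidence = 1 - p_value."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis (both variants share one true CTR)
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = abs(p_a - p_b) / se
    # Two-tailed P-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return p_value, 1 - p_value

p, conf = split_test_confidence(50, 1014, 58, 1036)
print(f"P-value: {p:.3f}, confidence: {conf:.1%}")
```

With counts like these, the confidence comes out well below the 95% threshold, which is exactly why the example above has no clear winner.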
Let’s take another look at our earlier example of a split test result:
Using a statistical significance calculator we can determine the confidence rating for the results in our example above. In this case it is: 57.63%
We can therefore only be 57.63% confident that variant B’s higher click through rate is not due to chance. To confidently declare variant B the winner of the test, we need a confidence rating above 95%.
A confidence rating below 95% does not prove the null hypothesis; it just means we don’t yet have enough evidence to reject it. In this case, we need to continue testing until we reach a confidence rating of 95% or higher.
Note: if your ads are too similar, you may never reach a confidence rating above 95%. If your click through rates remain very similar even with a large sample size (many impressions), you may need to start a new test with less similar ad variants.
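One way to judge whether a test is worth continuing is to estimate how many impressions you would need to detect the observed difference. A rough sketch using the standard two-proportion sample-size approximation (95% confidence, 80% power; the CTRs passed in are hypothetical):

```python
import math
from statistics import NormalDist

def required_impressions(p1, p2, alpha=0.05, power=0.80):
    """Approximate impressions needed per variant to reliably detect
    the difference between true CTRs p1 and p2 (two-tailed test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = 2 * pooled * (1 - pooled) * (z_alpha + z_beta) ** 2 / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a 4.93% vs 5.6% CTR difference takes tens of thousands
# of impressions per variant; a larger gap needs far fewer.
print(required_impressions(0.0493, 0.056))
print(required_impressions(0.04, 0.08))
```

The smaller the gap between the two CTRs, the more impressions you need, which is why very similar ads can run indefinitely without ever reaching 95% confidence.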
Henry is a PPC and data science specialist at Vertical Leap and has worked in digital marketing since 2007. He specialises in data and analytics, enterprise PPC and design. In his spare time he likes golf, Photoshop battles, 3D printing, stencil graffiti, drone photography and brewing his own alcohol.
Categories: Machine Learning, Martech
If your digital campaigns are underperforming, our commitment-free health check will reveal powerful insights to help you improve performance.