Photo by media.worldbulletin.net
By definition, the placebo effect can only occur in humans. When running drug trials (which are effectively A/B tests), scientists have to be very careful to minimise the possibility of results being influenced by participants who think they are being treated with a drug, when in fact they are receiving a placebo tablet containing no active ingredient.
I am taking part in a drug trial > I’m taking a tablet > I feel better
Ben Goldacre, a doctor and writer, has spoken and written extensively about this. He has a particular distrust of homeopathic medicine – he believes any improvement in a patient’s condition is due to the placebo effect, not because the homeopathic medicine does anything particularly beneficial.
We don’t have this psychological challenge with A/B testing in digital marketing.
A landing page doesn’t know it’s taking part in a test, so it won’t affect its performance!
In digital marketing it’s much easier to use control, test and learn principles. What we do have to be careful of is finding a correlation between data sets and incorrectly assuming causation.
Photo by http://xkcd.com/552/
It seems that those with access to “big data” can find it the most tempting to make the mistake of assuming causation when in fact there is probably only correlation.
Tim Harford, the undercover economist, explained back in April [link to: http://timharford.com/2014/04/big-data-are-we-making-a-big-mistake/] how Google claimed several years ago that its access to global search data allowed it to plot the spread of flu symptoms. But it appears that no-one at Google (or the scientific journal Nature) stopped to consider whether this search behaviour was genuinely driven by flu symptoms, or whether other factors could be driving the searches (e.g. information searches by healthy people, the Google algorithm influencing search behaviour or possibly even a growing number of hypochondriacs!).
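To make the distinction concrete, here is a small sketch with invented figures: any two series that both happen to trend upwards over time will correlate almost perfectly, whether or not one actually drives the other.

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented monthly figures, purely for illustration – NOT real Google data.
flu_searches = [120, 150, 180, 210, 260, 300]
flu_cases = [40, 48, 55, 70, 82, 95]

# Both series rise over the period, so the correlation is close to 1 –
# but the numbers alone cannot tell us whether flu is driving the searches.
print(round(pearson(flu_searches, flu_cases), 3))
```

A coefficient close to 1 here proves only that the two series move together; it says nothing about which factor (if either) is causing the movement.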
In some cases we would like to run robust A/B tests, but we have to make compromises.
For the new Petplan quote & buy funnel we’ve just designed and built, a few technical restrictions meant we couldn’t A/B test the new funnel against the old one. Instead we invested time and effort in robust user testing during the development stage, which gave us confidence that the new funnel would perform well. In the end, the new funnel has increased overall conversion by 10%, and on smartphones the improvement was close to 50%. These increases are calculated by comparing a date range with the new funnel against a similar date range for the old one – whilst removing some other variables from the data mix. As a marketer I am confident in these results. A statistician might not be so happy – is there 100% proof of causation? Could it be possible that visitors to the Petplan website just happened to convert better from the day we set the new funnel live?!
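As a rough illustration of the kind of before/after comparison described above (all visit and sale counts here are hypothetical, not Petplan’s actual figures):

```python
def conversion_rate(sales, visits):
    """Fraction of visits that resulted in a sale."""
    return sales / visits

def relative_lift(old_rate, new_rate):
    """Relative change in conversion rate, e.g. 0.10 means a 10% uplift."""
    return (new_rate - old_rate) / old_rate

# Hypothetical counts for two comparable date ranges.
old = conversion_rate(500, 10_000)   # old funnel period
new = conversion_rate(550, 10_000)   # new funnel period

# Note: a before/after comparison like this cannot rule out other factors
# (seasonality, traffic mix, campaigns) changing between the two periods.
print(f"Uplift: {relative_lift(old, new):.0%}")
```

The calculation is trivial; the hard part – as the statistician would point out – is being sure nothing else changed between the two date ranges.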
In other cases we can keep the statisticians happy too.
For another client we have just run a home page test. We split-tested the traffic between the old home page (the control) and two new pages (test 1 and test 2). In this case we used a testing tool called Visual Website Optimizer, which allowed us to robustly show these different pages to random samples of traffic and test their effectiveness at converting visits to a single goal (in this case completed sales). After the pre-defined testing period concluded, it was clear that one of the test pages was a winner – it led to a 22% increase in conversion from visit to sale and a 15% increase in revenue.
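For readers curious what keeps the statisticians happy, here is a sketch of the kind of significance check a split-testing tool performs under the hood – a standard two-proportion z-test comparing control and variation conversion rates (all visitor and sale counts below are hypothetical):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, built with math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control converts at 5.0%, the test page at 6.1%
# (roughly a 22% relative uplift, echoing the result described above).
z, p = two_proportion_z_test(250, 5_000, 305, 5_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the conventional 0.05 threshold is what lets a tool declare a winner: the observed difference would be unlikely to arise from random sampling alone if the two pages really converted equally well.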
This final case follows the full process of hypothesis, control, test and learn – the best practice we aim for in every case. I hope we continue to run more tests in this way that are as successful as this one!