By Alistair Dent

Yesterday Google released a blog post here detailing statistical work they undertook to determine whether conversion rates vary with ad position. Take a look; it's quite a decent (and short) read. However, I have some concerns about their methods.


A snippet of text from the article is below:

Another difficulty is that the average position number reported by Google is an average over all auctions in which you participate. If you increase your bid, it is quite possible to see your average position move lower on the page! The reason is that when you increase your bid, your ad will appear in new auctions, and it will tend to come in at the bottom of those new auctions. This effect can be large enough to push your overall average position down. See this FAQ for more on this issue.

We have used a statistical model to account for these effects and found that, on average, there is very little variation in conversion rates by position for the same ad. For example, for pages where 11 ads are shown the conversion rate varies by less than 5% across positions. In other words, an ad that had a 1.0% conversion rate in the best position, would have about a 0.95% conversion rate in the worst position, on average. Ads above the search results have a conversion rate within ±2% of right-hand side positions.

Now, the reason I looked into this in more detail is that this isn't the effect I have seen. I know my analysis is much more anecdotal than theirs: they have access to data from all advertisers in all industries in all geographies, while I only have access to data from clients that I or my colleagues have run campaigns for. But I have seen immediately that there are in fact strongly varying effects by marketplace and by industry. Indeed, I would go so far as to say that roughly 80% of my clients have seen conversion rates change significantly due purely to changes in ad position.

However - and this is what makes it more complex - the change is not always in the same direction. If 40% of my clients found conversion rates to be higher at the top of the page, 40% found them higher further down the page, and the remainder saw no change, then looking at that data in aggregate would suggest there was no major change at all.
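To make that concrete, here is a toy sketch (with purely hypothetical numbers, not real client data) of how two segments whose conversion rates move in opposite directions with ad position can look completely flat once pooled:

```python
# Hypothetical illustration: two advertiser segments with opposite
# position effects can cancel out in aggregate.

positions = [1, 2, 3, 4]  # 1 = top of page

# Segment A: conversion rate falls as the ad moves down the page.
seg_a = {p: 0.012 - 0.001 * (p - 1) for p in positions}
# Segment B: conversion rate rises as the ad moves down the page.
seg_b = {p: 0.008 + 0.001 * (p - 1) for p in positions}

# Pool the two segments, assuming equal click volume at each position.
pooled = {p: (seg_a[p] + seg_b[p]) / 2 for p in positions}

for p in positions:
    print(f"pos {p}: A={seg_a[p]:.4f}  B={seg_b[p]:.4f}  pooled={pooled[p]:.4f}")
# The pooled rate is 0.0100 at every position, even though each
# segment's rate varies by roughly 25% across positions.
```

Each segment shows a strong position effect, but the aggregate shows none - which is exactly why analysing one pooled population can mislead.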

What I’m getting at here is that as far as my experience goes there isn’t one population to analyse, there are several. Grouping the data together would lead to an inaccurate picture of what is really happening.

Now, I don't have anywhere near enough detail about their analysis, and it is being led by Hal Varian, a well-respected and well-published economist, so I would like to assume they have taken this into account. But there is no mention of anything like that in the article, and no separate results for technology versus clothing versus accountancy versus publishing… Is it overly simplistic to assume that these will all behave the same way?

Two examples, and our hypothesized reasons for them:

  1. A company selling electronics products - laptops, printers, etc. These are items for personal users with a relatively high cost, yet they are homogeneous: buying from one supplier makes little difference to the product compared with buying from another.
  2. A company selling graphics to businesses. They have a good URL for their industry and the products are highly personalised.

In the first instance, conversion rates rise dramatically as ads move down the page. In the second, the reverse occurs. Why? Since electronics products are homogeneous, people's choice of supplier is dictated largely by price. And because the product represents a large proportion of disposable income, people research thoroughly. These two factors combine to mean that serious buyers click further down the page and then buy from the cheapest source (assuming all the sites look reasonably trustworthy).

But in the second case, the so-called "vanity positions" at the top of the page imply to the browser that the supplier is a successful firm. Saying "look at us, we're at the top of Google" has a larger effect when the user has to trust that their personalised product will be in safe hands with you.

Now, obviously both of these effects operate in both cases, and there are many other effects besides. These are simply a couple I have picked out to explain why there will regularly be differences in conversion rates due to ad position. In some cases one effect will be stronger and we'll see conversion rates rise with ad position; in other cases another effect will dominate and conversion rates will fall with ad position. The point is that different markets behave differently, with a different mix of effects at work.

To come to a conclusion and say “conversion rates don’t vary with ad position” is patently false, because we see otherwise every day. So why would Google’s own analysis say otherwise? I suggest that it’s because they decided that more data would lead to better analysis, and failed to separate the different populations involved, leading to aggregated data and misleading conclusions.
