
Ad Campaign Optimization: Benchmarking Performance

This is part 2 of a 5-part series on Ad Campaign Optimization.

Understanding your baseline performance is the first step toward improving it. If you don’t know what is normal, how can you identify abnormal performance?

There are a few types of online ad campaigns, but we’ll use a cost-per-click (CPC) campaign as an example. In these campaigns, you only pay when a user clicks on your ad (not when they see it). As with most advertising, the actual cost per click can vary day to day based on how advertising demand changes. Below is the data for an example CPC campaign over a week:

Day of Campaign   Clicks      Cost    CPC   Sessions   Bounce Rate
Day 1                719   1015.99   1.41        669        59.19%
Day 2                824   1160.45   1.41        724        68.65%
Day 3                795   1158.82   1.46        733        64.39%
Day 4                886   1272.06   1.44        844        59.48%
Day 5                723   1082.00   1.50        695        56.26%
Day 6                823   1180.12   1.43        785        57.71%
Day 7                787   1142.64   1.45        762        55.51%
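As a quick sanity check, the CPC column can be derived from the other two: it is simply the daily cost divided by the daily clicks. A minimal Python sketch using the numbers from the table:

```python
# Recompute CPC = cost / clicks for each day of the campaign.
# Data transcribed from the table above.
clicks = [719, 824, 795, 886, 723, 823, 787]
cost = [1015.99, 1160.45, 1158.82, 1272.06, 1082.00, 1180.12, 1142.64]

cpc = [round(c / k, 2) for c, k in zip(cost, clicks)]
print(cpc)  # matches the CPC column of the table
```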

Wow, that’s a lot of data. Where do we get started?

Our first step is to determine what is typical for this campaign, so that we can easily evaluate it and compare it to the many other campaigns you run every day. While it is tempting to use price (CPC) alone, it can be a misleading measure of a campaign because the lowest-cost campaigns may not be producing users who engage or convert. What you need is a metric that combines cost with performance, so that you can measure a campaign by its return on investment.

In this simple example, I do not have actual user-behavior data, such as the order value tied to each click, so I need a different metric. The bounce rate is the percentage of users who come to the site and leave after viewing only one page; its inverse, (1 - Bounce Rate), is therefore the percentage of users who visit at least two pages and are, hence, more engaged. If I weight that engagement by the price, CPC, I can create a useful campaign value metric:

Value = CPC * (1 - Bounce Rate)

With that value metric in hand, we can now benchmark the example campaign. The value for our campaign for each of the seven days is as follows:

Day of Campaign   Value
Day 1              0.58
Day 2              0.44
Day 3              0.52
Day 4              0.58
Day 5              0.65
Day 6              0.61
Day 7              0.65
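These daily values can be reproduced from the raw clicks, cost, and bounce-rate data. A minimal Python sketch, using the unrounded CPC (cost divided by clicks), which is why the results match the rounded values in the table:

```python
# Recompute the daily value metric: Value = CPC * (1 - Bounce Rate).
# Data from the first table; CPC is left unrounded (cost / clicks).
clicks = [719, 824, 795, 886, 723, 823, 787]
cost = [1015.99, 1160.45, 1158.82, 1272.06, 1082.00, 1180.12, 1142.64]
bounce_rate = [0.5919, 0.6865, 0.6439, 0.5948, 0.5626, 0.5771, 0.5551]

value = [round((c / k) * (1 - b), 2)
         for c, k, b in zip(cost, clicks, bounce_rate)]
print(value)  # [0.58, 0.44, 0.52, 0.58, 0.65, 0.61, 0.65]
```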

Given our 7 days of data, we need to choose a statistic that captures the typical behavior of the campaign over this period and can serve as our benchmark. As we have mentioned previously, using the mean is dangerous if the data does not follow a normal distribution, and with only seven points we have no way of knowing whether it does. Instead, we can use the median and have more confidence that we are not misleading ourselves!

The median value is 0.58, which is now our benchmark for this campaign. We can evaluate future days for this campaign by whether they fall above or below this benchmark. We can also compare all of our campaigns using their benchmarks to see which performed better or worse over the period we chose. You should also consider benchmarking across multiple time periods, as performance may vary by week, month, or year.
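Using Python's standard library, computing the benchmark from the daily values in the table above is a one-liner:

```python
import statistics

# Daily value metric for the seven days of the campaign.
daily_value = [0.58, 0.44, 0.52, 0.58, 0.65, 0.61, 0.65]

# The median is robust to skewed or non-normal data, unlike the mean.
benchmark = statistics.median(daily_value)
print(benchmark)  # 0.58
```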

It is important not to think about your campaigns in a vacuum, so you should also calculate a global benchmark across all the campaigns you run. With a global benchmark plus an individual benchmark for each campaign, you can easily see which campaigns are over- or under-performing expectations and make adjustments accordingly.
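One way to sketch that comparison, assuming each campaign already has its own median benchmark (the campaign names and numbers here are made up for illustration):

```python
import statistics

# Hypothetical per-campaign benchmarks; names and values are illustrative.
campaign_benchmarks = {
    "campaign_a": 0.58,
    "campaign_b": 0.71,
    "campaign_c": 0.49,
}

# The global benchmark is the median of the individual campaign benchmarks.
global_benchmark = statistics.median(campaign_benchmarks.values())

# Split campaigns by whether they sit above or below the global benchmark.
above = sorted(c for c, b in campaign_benchmarks.items() if b > global_benchmark)
below = sorted(c for c, b in campaign_benchmarks.items() if b < global_benchmark)
print(global_benchmark, above, below)
```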

Tomorrow we will take these benchmarks and use them to set up a way to monitor changes in campaigns over time so that we know if a campaign that performs well today performs poorly tomorrow!

Quote of the Day: “Advertising – A judicious mixture of flattery and threats.” ― Stephen Leacock
