Ad Campaign Optimization: Benchmarking Performance
Understanding your baseline performance is the first step towards improving it. If you don’t know what is normal, how can you identify abnormal performance?
There are a few types of online ad campaigns, but we’ll use a cost-per-click (CPC) campaign as an example. In these campaigns, you only pay when a user clicks on your ad (not when they see it). As with most advertising, the actual cost per click can vary day to day based on how advertising demand changes. Below is the data for an example CPC campaign over a week:
| Day of Campaign | Clicks | Cost | CPC | Sessions | Bounce Rate |
| --- | --- | --- | --- | --- | --- |
Wow, that’s a lot of data. Where do we start?
Our first step is to determine what is typical for this campaign, so that we can easily evaluate its performance and compare it to the vast number of other campaigns you run every day. While it is tempting to use price (CPC) alone, it can be a misleading measure of a campaign because the lowest-cost campaigns may not be producing users who engage or convert. What you need is a metric that combines cost with performance, so that you can measure a campaign by its return on investment.
In this simple example, I do not have actual user behavior data, such as the order value tied to each click, so I need a different metric. The bounce rate is the percentage of users who arrive at the site and leave after viewing only one page. Its inverse, (1 – Bounce Rate), is therefore the percentage of users who visit at least two pages and are, hence, more engaged. If I weight that engagement by the price, CPC, I can create a useful campaign value metric:
Value = CPC * (1 – Bounce Rate)
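As a quick sketch, here is how that value metric might be computed in code. The function name and the example numbers are illustrative, not taken from the campaign data above:

```python
def campaign_value(cpc: float, bounce_rate: float) -> float:
    """Weight the cost per click by the share of engaged users.

    (1 - bounce_rate) is the fraction of visitors who viewed at
    least two pages, i.e. the engaged visitors.
    """
    return cpc * (1 - bounce_rate)

# Hypothetical day: $1.20 CPC with a 45% bounce rate.
print(campaign_value(1.20, 0.45))  # roughly 0.66
```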
With that value metric in hand, we can now benchmark the example campaign. The value for our campaign for each of the seven days is as follows:
| Day of Campaign | Value |
| --- | --- |
Given our 7 days of data, we need to choose a statistic that captures the nature of the campaign over this time period and can serve as our benchmark. As we have mentioned previously, the mean is easily distorted when data does not follow a normal distribution, and with only seven data points we have no way of knowing whether ours does. Instead, we can use the median and have more confidence that we are not misleading ourselves!
The median value is 0.58, which is now our benchmark for this campaign. We can evaluate future days for this campaign based on whether they are above or below this benchmark. We can also compare all of our campaigns together using their benchmarks, to see which perform better or worse over the period of time we chose. You should also consider benchmarking across multiple time periods as performance may vary by week, month or year.
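A minimal sketch of the benchmark calculation, using illustrative daily values chosen so their median matches the 0.58 above (these are not the actual campaign numbers):

```python
from statistics import mean, median

# Hypothetical daily value-metric figures for one week of the campaign.
daily_values = [0.52, 0.61, 0.58, 0.55, 0.70, 0.49, 0.63]

# The median is robust to a single unusually good or bad day.
benchmark = median(daily_values)
print(benchmark)  # 0.58

# For comparison, the mean would be pulled around by outlier days.
print(round(mean(daily_values), 3))
```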
It is important not to think about your campaigns in a vacuum, so you should also calculate a global benchmark: the benchmark of the benchmarks across all the campaigns you run. With a global benchmark and an individual benchmark for each campaign, you can easily see which campaigns are over- and under-performing expectations and make adjustments accordingly.
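One way to sketch that global comparison, assuming each campaign's daily value metrics live in a dict (the campaign names and numbers here are hypothetical):

```python
from statistics import median

# Hypothetical per-campaign daily value metrics over the same week.
campaigns = {
    "search_brand":   [0.52, 0.61, 0.58, 0.55, 0.70, 0.49, 0.63],
    "display_retarg": [0.40, 0.38, 0.45, 0.41, 0.39, 0.44, 0.42],
    "social_promo":   [0.66, 0.72, 0.68, 0.75, 0.64, 0.70, 0.69],
}

# Each campaign's individual benchmark is the median of its daily values;
# the global benchmark is the median of those medians.
benchmarks = {name: median(values) for name, values in campaigns.items()}
global_benchmark = median(benchmarks.values())

for name, bench in benchmarks.items():
    status = "over" if bench > global_benchmark else "at or under"
    print(f"{name}: {bench:.2f} ({status} the global benchmark {global_benchmark:.2f})")
```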
Tomorrow we will take these benchmarks and use them to set up a way to monitor changes in campaigns over time so that we know if a campaign that performs well today performs poorly tomorrow!
Is Ad Campaign Optimization a problem you face in your business? Outlier is a product designed to help! Outlier monitors your business data and notifies you when unexpected changes occur, allowing you to know immediately when ad campaigns are underperforming or overperforming. If you’re interested in seeing a demo, schedule a time to talk to us.
Quote of the Day: “Advertising – A judicious mixture of flattery and threats.” ― Stephen Leacock