April 27, 2017

Written by Tim Colucci

Putting Google Experiments to the Test

I always found AdWords Campaign Experiments (ACE) to be a slightly cumbersome way to test different variables with online advertising campaigns. Reporting, particularly at the keyword level, took some effort, and the setup of the test itself could be time-consuming. So, when Google announced it was going to replace ACE with “campaign drafts and experiments,” I was rather giddy. Now that I’ve had some time to work with the campaign drafts and experiments (aka Google experiments) option, I urge you to try it.

With experiments, Google allows advertisers to create a draft (a replica) of a real campaign they are running. The advertiser can then adjust that draft in a number of ways, such as changing keyword bids, ad group setup, ad copy, ad scheduling, and geo-targeting, and run it as an experiment against the original.

And how does the traffic split work? Google now asks advertisers how much traffic (budget) they want to spend on the new experiment campaign and how much they want to spend on the control (current) campaign. The split doesn’t have to be even: if an advertiser wants to run a test with 90 percent of traffic being piped through the control and 10 percent through the test, they can do so. That flexibility lets advertisers run tests even if they are wary of committing half their traffic to a true 50/50 split.
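Google doesn’t publish the mechanics of how it divides traffic, but conceptually a split like this amounts to deterministic bucketing. Here is a minimal Python sketch of how a 90/10 control/experiment split could assign users to arms; the user_id identifier and the hashing scheme are illustrative assumptions, not Google’s actual implementation:

    import hashlib

    def assign_arm(user_id: str, experiment_share: int = 10) -> str:
        """Deterministically bucket a user into 'experiment' or 'control'.

        experiment_share is the percentage of traffic (0-100) routed to the
        experiment arm; user_id is a hypothetical stand-in for whatever
        stable identifier (e.g., a cookie) the ad platform keys on.
        """
        # Hash the id so assignment is stable across visits and roughly
        # uniform across buckets 0-99.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return "experiment" if bucket < experiment_share else "control"

    # Sanity check: roughly 10 percent of users land in the experiment arm.
    arms = [assign_arm(f"user-{i}") for i in range(10_000)]
    print(arms.count("experiment") / len(arms))  # ~0.10

The point of hashing rather than assigning users at random on every visit is that the same user always sees the same arm, which keeps the two sets of results from contaminating each other.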

Unfortunately (there’s always an unfortunately, amirite?), there are limits to what an advertiser can test, though they are not nearly as restrictive as ACE’s. For instance:

  • Some reporting isn’t available, such as ad scheduling, auction insights, and display placement reports.
  • The Dimensions tab is not available. Dimensions covers search terms, by-day results, paid versus organic, and other deep-dive reports.
  • Some automated bid strategies (e.g., “target search page location,” “target outranking share,” and “target return on ad spend”) and ad customizers (e.g., “target campaign,” “target ad group”) are not available, either.

But how many advertisers are looking to test these settings? Not many (other than me, that is). Rather, most advertisers will use experiments to answer the basic questions, such as:

  • What messaging performs best in my ad copy?
  • Do increased keyword bids improve conversion rates?
  • What landing page leads to higher conversion rates?

Advertisers with more advanced tests in mind can dive deep into each campaign and test a number of variables, such as:

  • Excluding a search partner (e.g., another engine powered by Google, such as Ask.com) from the test campaign while keeping it in the control campaign.
  • Targeting a city/state differently in the test campaign than in the control campaign.
  • Bidding differently on gender, age, device, or income.
  • Testing a different ad schedule.

The best new feature of experiments is easier reporting. Instead of pulling segments, subtracting test totals from the overall totals, or running a crazy formula to confirm all of the test keywords were pulled correctly, Google breaks out campaign experiment results simply as “Experiment” and “Original” in the experiments tab. The totals are easy to see and couldn’t be easier to pull. Even better, these numbers are also reported in Google Analytics, which wasn’t possible with ACE.
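Once the Experiment and Original totals are in hand, it’s worth checking that a difference in conversion rate is statistically meaningful before acting on it. A standard two-proportion z-test does the job; here is a minimal Python sketch with made-up totals (the conversion and click counts are hypothetical, not from any real campaign):

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for a difference in conversion rates.

        conv_*/n_* are conversions and clicks for each arm.
        Returns the z statistic and a two-sided p-value.
        """
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided
        return z, p_value

    # Hypothetical totals: experiment arm vs. original arm.
    z, p = two_proportion_z_test(conv_a=120, n_a=4_000, conv_b=95, n_b=4_100)
    print(f"z = {z:.2f}, p = {p:.3f}")

A p-value below your chosen threshold (0.05 is conventional) suggests the difference is unlikely to be noise. Note that with a lopsided split like 90/10, the smaller arm needs to run longer to accumulate enough clicks to reach that point.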

After an advertiser does the tedious work of building out an account’s keywords, ad copy, and extensions, experiments allows the advertiser to test, and testing is the fun part of the job. Experiments allows us to get actionable data that can lead to better decision-making, not just for display or paid search, but in some cases across multiple tactics. Those results may give senior marketers another view of their campaigns’ effectiveness and prompt them to rethink their approaches.