We’re celebrating the release of Text Ad Zoom with an in-depth look at how you can optimize your text ads. In case you missed it last week, check out The Ultimate List of PPC Ad Testing Resources. It’s a huge collection of articles, videos and presentations on testing and writing text ads.
This week, we’re running a five-part Q&A series with the authors of the articles featured in the list. Each day this week, we’ll pose a different question and share the answers.
Opinions vary and sometimes the authors disagree. Proof again that no matter how much experience you have, the data will win the day. Not every author answered each question. Finally, the answers are unedited straight from the authors, so draw your own conclusions and remember to test any ideas you read.
Read below for today’s question and answers (in no particular order).
Text Ad Optimization Question #1: What are some of the biggest mistakes people make in text ad testing (aside from only measuring CTR changes)?
Brad Geddes: I don’t think enough people focus on Profit Per Impression. Choosing the ad with the lowest CPA or the highest conversion rate does not mean you will bring in the most revenue for your account. Another mistake is not having enough data before making decisions. There are too many online calculators where you can input some very low numbers (like 1 click on 15 impressions for one ad and 10 clicks on 15 impressions for another) and the tool will tell you that you have a winner. The number one mistake, though, is not doing it at all. Ad copy testing is so easy that everyone should always be running a few tests at any one time.
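To make the Profit Per Impression idea concrete, here is a minimal sketch. All of the ad figures (impressions, clicks, CPC, profit per conversion) are invented for illustration, not taken from any real account:

```python
# Hypothetical figures: assume a $1.00 average CPC and $30 profit per conversion.
CPC = 1.00
PROFIT_PER_CONVERSION = 30.00

def profit_per_impression(impressions, clicks, conversions):
    """Net profit divided by impressions:
    (conversions * profit - clicks * cost) / impressions."""
    profit = conversions * PROFIT_PER_CONVERSION - clicks * CPC
    return profit / impressions

ad_a = (1000, 50, 5)    # 10% conversion rate on its clicks
ad_b = (1000, 100, 8)   # only 8% conversion rate, but twice the clicks

# Ad A wins on conversion rate, yet Ad B earns more per impression:
print(profit_per_impression(*ad_a))  # 0.1
print(profit_per_impression(*ad_b))  # 0.14
```

The point of the sketch: picking the higher-conversion-rate ad (A) would leave money on the table, because B turns the same thousand impressions into more total profit.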
Andrew Goodman: I often hear: “test only one variable at a time.” Statistically, this really makes no sense, and more than that, it’s impractical. From a statistical standpoint, if you go in and try to isolate which of two calls to action is “better,” for starters, you’re ignoring variable interactions (once anything else you want to test has to be changed, you’re now assuming the winner from the previous test would interact most favorably with the changed conditions) and you’re ignoring the opportunity costs of the other tests you could be running. People interpret this “test little things one at a time” maxim so literally that they take forever to optimize properly.

What this approach fails to see is how blinkered it makes you. “Is ‘buy now’ or ‘buy today’ a better call to action?” Maybe they’re about the same, or maybe what you’ve just done is rule out a different style of ad that took more room talking about pricing, a third-party endorsement, or some other trigger. There is absolutely nothing wrong with bolder testing of three or four very different styles of ad to see if any of them creates a significantly better response. For some reason, that sounds unscientific to some people, but you don’t create marketing results by spending your time in the wrong chapters of the wrong statistics textbooks.
Jessica Niver:
1. Assuming they know what types of messaging appeal to their audience and not testing very different approaches against each other.
2. Completely ignoring CTR changes. Though ultimately, for a revenue- or lead-based client, you want the highest-conversion-rate ads, high-CTR ads with lower conversion rates are informative. A high CTR with a lower conversion rate means people liked something about your ad but didn’t see the follow-through on your landing page, so it’s an opportunity to modify your landing page to match expectations and turn your high-CTR, low-conversion ads into high-CTR, high-conversion ads.
3. Completely disconnecting ad text testing from landing page testing (see above). One is the promise; the other is supposed to deliver on it. So even though it makes testing more complicated, you can’t treat them as separate entities.
4. Running too many ads against one another for your traffic numbers. This just slows down testing and drags out poorly-performing tests. Let’s just figure out what works and move on to the next test, not watch something suck for two months until we’re 100% sure.
Chad Summerhill: Not considering the cost of testing – you are just as likely (if not more likely) to lose a test than to win one, so you want to eliminate losers quickly. Focusing on conversion rate only – if possible, you should focus on conversion-per-impression or profit-per-impression. The goal should be to maximize total conversions/profit.
Amy Hoffman: People tend to get a little pause-happy, meaning they try to pick a winner before the test is statistically significant. There are a few free tools online for determining statistical validity, which should be used to aid the decision-making process.
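The free significance calculators Hoffman mentions typically run some form of two-proportion test under the hood. A minimal standard-library sketch (the traffic numbers below are invented) shows why the same CTR gap can be noise at low volume and a clear winner at high volume:

```python
import math

def ctr_test_p_value(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test: p-value for the null hypothesis
    that both ads have the same underlying CTR."""
    p1, p2 = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = abs(p1 - p2) / se
    # Two-tailed p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Identical CTRs (3% vs 5%) at two very different traffic volumes:
print(ctr_test_p_value(3, 100, 5, 100) < 0.05)          # False: too little data to call a winner
print(ctr_test_p_value(300, 10000, 500, 10000) < 0.05)  # True: same CTRs, now clearly significant
```

At 100 impressions per ad the p-value is roughly 0.47, so pausing the “loser” would be exactly the pause-happy mistake described above; at 10,000 impressions the same 3% vs 5% split is unambiguous.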
Erin Sellnow: The two biggest mistakes I often see are testing too many things at once (so it is difficult to isolate what really worked) and not letting ads accumulate enough data before pausing them. While it is tough to wait it out, patience is important so you know you are making the correct decision.
Pete Hall: I’d say people too often think that their new ads will crush the current iterations, so being overconfident in your ads and not properly A/B testing can be an issue if you’re not careful. I’ve had numerous instances where I thought I’d written the perfect ad, built on successful elements of past ads, implemented it, and it tanked. So making sure to properly test your new ads against existing ads, even if you think they’re perfect, is critical.
One other mistake is not setting ad delivery to rotate in an A/B test. AdWords tends to favor one ad over another, which will skew your results, so ensuring delivery is set to rotate is key.
Ryan Healy: Here are three common mistakes I see:
1. Writing an ad that gets a lot of clicks, but is not consistent with the messaging on the landing page. (This disconnect can hurt conversions and profitability.)
2. Writing a winning ad, then letting it run for months (or years) without ever writing a new ad to challenge it.
3. Writing two or three ads for an Ad Group, then letting them run for months (or years) without ever deleting the losing ads.
Jeff Sexton: Well, perhaps the biggest mistake is NOT optimizing ad text – or doing some testing and then adopting a “set it and forget it” mindset.
But, assuming that people are actively testing their ad text, the next biggest mistake is not thinking past the keywords to get at the searcher intention BEHIND those keywords. Behind every set of keywords are people who are searching in response to a need, problem, or question. Optimizing ad text means writing ads that better speak to those people on the other end of the screen.
So you should be looking at actual searcher queries associated with those keywords, past test results, competitive ads and landing pages, etc. in order to actively seek out an understanding of searcher mindset. Once you have that hypothesis you’ll be able to write ads on a more coherent basis and also able to interpret test results on a more scientific basis. In other words, the proving or disproving of a hypothesis will give you a direction on “what to try next” after each test, whether winning or losing. This will also allow you to more intelligently apply other ad writing best practices.
Tom Demers:
1. Looking at the wrong sample size and/or deciding based on “bad data” – even though there are a lot of tools to help you identify whether you’ve reached statistical significance, people often ignore them and either end tests too soon or run them too long. Another variation on this theme is drawing conclusions from “bad data” – basically, you want to carefully catalog changes within your account so that you’re not lumping in data where a text ad is married to a different landing page or set of keywords. Those things can have a huge impact on ad performance, and may lead you to pick the wrong winner.
2. Not testing enough – This is far and away the biggest mistake we see, particularly in larger campaigns. Across our network we see around a 30% lift in sales from continual optimizations made by our writers. This means for higher volume ad groups where you’re neglecting to test and iterate on ad copy, you’re leaving a lot on the table.
Bradd Libby: ‘Only measuring CTR’ is a big one by itself. There’s at least one company, BoostCTR.com, named after doing this process wrong.
Here are some other mistakes:
1. Treating ad testing like it might be a quick cure for current performance problems. That is, waiting until some problematic performance is seen and then trying to use ad testing to improve results by the end of the month. Ad testing should be done continuously as a normal part of account management.
2. Not qualifying traffic prior to testing. It doesn’t do much good to test two ad creatives against each other one month, pick the winner, and then the next month add a bunch of negative keywords to the ad group.
3. Misinterpreting the meaning of statistical significance. A confidence level only tells you how unlikely it is that the observed difference arose by chance; it says nothing about the size of that difference or whether it will hold up going forward.
4. Not repeating tests. Reproducibility is one of the hallmarks of good science. If ad ‘B’ wins in an A/B test, you should be able to repeat the test in 3 months and see ‘B’ beat ‘A’ again.
Crosby Grant: Aside from not doing it at all! Judgment errors in choosing the winning ad are a mistake that can be pretty costly and that happens often. Using rigorous statistics is one part of the solution, but it often requires more traffic, and thus time, than is reasonable or available. A good example is holiday ads: you only get about a week each year to test each holiday ad version, which might not provide enough traffic. To a lesser extent, a similar dynamic happens with ads running year-round if you impose an artificial time horizon on your test cycles – for example, if you want to complete a test every week or every month.

Then of course there is the question of which metric(s) to optimize for. Books could be written on that one. My preference is for maximizing margin ((advertising revenue – advertising cost)/advertising revenue) because it takes all of the other metrics into account, and because at the end of the day, more money in your pocket is, well, more money in your pocket. Of course, many advertisers don’t use rigorous statistics at all, and simply rely on judgment based on whichever metrics they choose. I call that “business statistics.”

Statistics is not the whole solution, though. It is quite possible to have two identical ads with statistically significant variances in performance. This is mostly due to the X-factor of AdWords’ system assigning Quality Score based on limited data – which is another topic altogether. So another part of the solution is considering the content of the ads. This is where human judgment comes in, and where experience really helps. Choosing a test, choosing a winner, and then interpreting that to help you craft more ads that are also winners is part of the art. It works together with the science provided by the statistics. Getting this part wrong is a potentially costly mistake that happens often, and that’s why it makes my list of one of the biggest mistakes people make in text ad testing.
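Grant’s preferred metric can be written down directly. The revenue and cost totals below are hypothetical, just to show how the formula ranks two ads:

```python
def margin(ad_revenue, ad_cost):
    """Grant's metric: (advertising revenue - advertising cost) / advertising revenue,
    i.e. the fraction of each revenue dollar you keep after ad spend."""
    return (ad_revenue - ad_cost) / ad_revenue

# Hypothetical monthly totals for two ads:
print(margin(500.0, 200.0))  # 0.6   -> Ad A keeps 60 cents of every revenue dollar
print(margin(800.0, 500.0))  # 0.375 -> Ad B keeps 37.5 cents
```

Note that Ad B generates more absolute profit ($300 on both ads here, coincidentally equal) only in revenue terms; on the margin metric Ad A is clearly the more efficient spender, which is the trade-off Grant is pointing at.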
Rob Boyd: I feel the largest mistake is not creating ads with a purpose. When you get down to it, you can have all of your metrics and variables planned out perfectly, but in the end it all comes down to the ad text. Is what you’re writing more effective at reaching your target audience than your existing ad? Is your ad focused on intent? As marketers, we don’t always write winners, but I think the largest mistake is to throw darts blindfolded. If you aren’t truly getting into the mind of your audience, you are stacking the deck against yourself. Plus, when you do write a winner, it’s all the more satisfying.

In my opinion, the second largest mistake in ad testing is not keeping your account’s pace in mind. What I mean by that is, you have to test in relation to the spend or click level of the account. If each ad group is only generating a handful of clicks a day and you are testing 5 ads, it could take many months to gather statistically relevant data. Testing in relation to your data-gathering ability is important because it will allow you to make actionable decisions more frequently, which should result in more consistent incremental improvements over time.
Greg Meyers: Many advertisers tend to test too many elements all at once, so there is no clear understanding of what was the deciding factor in identifying a winner vs. a loser. Another key mistake happens in choosing what elements make up the test. Typically, the first-level test should be either a specific CTA (call to action) or a different audience; testing a single word would be a waste of time and would not “move the needle.” Other common mistakes include an insufficient length of testing time, which can lead to misinterpretation of results.
Bonnie Schwartz:
A. Testing too many variables at once, which makes it difficult to pin down what actually led to the winning ad.
B. Testing too many ad copy variations at once, which makes it difficult to gather enough data to reach statistical significance.
C. Going along with B, not basing decisions on statistical significance.
D. Not testing at all!
John Lee: Advertisers make a wide variety of mistakes when testing text ads. The biggest, and most obvious, is simply NOT testing at all. But more specifically, advertisers frequently test too many ads at once. This can slow down testing, complicate the interpretation of results, etc. Test a smaller number of ads – 2-3 is best – with concrete testing variables in each.
Jon Rognerud: Firstly, testing with too little data. In other words, they make a decision to pause or delete an ad before understanding or knowing that it actually works. Secondly, just copying what others are doing – assuming that it will work for them.
Joe Kerschbaum: Testing too many variations at once. Testing variations that are too similar; I’ve seen too many tests where the ads are basically the same except for perhaps a punctuation mark. Test big ideas and see what works.
Learn More About The Authors
- Brad Geddes – Certified Knowledge
- Andrew Goodman – PageZero
- Jessica Niver – Hanapin Marketing
- Chad Summerhill – PPC Prospector
- Amy Hoffman – Hanapin Marketing
- Erin Sellnow – Hanapin Marketing
- Pete Hall – Room 214, a social media agency
- Ryan Healy – BoostCTR / RyanHealy.com
- Jeff Sexton – BoostCTR / JeffSextonWrites.com
- Tom Demers – BoostCTR / MeasuredSEM
- Bradd Libby – The Search Agents
- Crosby Grant – Stone Temple Consulting
- Rob Boyd – Hanapin Marketing
- Greg Meyers – SEMGeek / iGesso
- Bonnie Schwartz – SEER Interactive
- John Lee – Clix Marketing
- Jon Rognerud – JonRognerud.com
- Joe Kerschbaum – Clix Marketing