Advertising: is it art or science? Surely most of us can agree that a healthy dose of both is necessary for powerful advertising.
As direct-response advertisers, testing is a huge part of our daily ad operations; we’re continually running creatives against each other, optimizing campaign bids and budgets, and analyzing down-funnel results between audiences.
A lot of the time, well-structured campaigns can answer all the questions we have. Sometimes, though, we have to put our lab coats on, develop a solid hypothesis, isolate our variables and get our A/B testing on.
In this post, we’re sharing some of our experiences with Facebook’s newish testing feature, creative split testing, along with some advice on when and when NOT to leverage this feature.
First, a bit about the Facebook split testing feature.
Facebook Split Testing 101
Creative split testing (Facebook’s term for ad A/B testing) has evolved from an API-only feature (which we leveraged through our Facebook tech partner Smartly.io), into the self-serve tool it is today.
Since its launch in November 2017, Facebook advertisers have been able to A/B test any of the following variables:
- Creative
- Delivery Optimization
- Audience
- Placement
Facebook A/B testing works by evenly splitting your audience into non-overlapping groups (up to five), ensuring that each user sees only one variation for the duration of the test. It also guarantees that exactly the same amount of budget is allocated to each variation, something that is impossible to achieve without this feature.
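Facebook doesn't publish the exact mechanics of its split, but the core idea of a deterministic, non-overlapping split is easy to illustrate. Below is a minimal sketch in Python, assuming a hash-based bucketing scheme (the function and IDs here are hypothetical, not Facebook's actual implementation):

```python
import hashlib

def assign_bucket(user_id: str, test_id: str, n_buckets: int) -> int:
    """Deterministically assign a user to one of n mutually exclusive
    test buckets by hashing the user ID together with the test ID.

    Because the assignment depends only on (test_id, user_id), a given
    user always lands in the same bucket for this test, so they never
    see more than one variation.
    """
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# Hypothetical example: split users across three variations of one test.
for uid in ["user-1001", "user-1002", "user-1003"]:
    print(uid, "-> variation", assign_bucket(uid, "creative-test-42", 3))
```

One caveat of this simple scheme: the buckets are equal in expectation, not exactly equal per impression. Guaranteeing an exact budget split across variations is the part only Facebook's delivery system can do for you.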
Of course, all the standard A/B testing best practices apply:
- Isolate and test only one variable at a time for best results.
- Give your test enough time to determine a statistically significant winner. Facebook recommends four days, but we use this handy-dandy statistical significance calculator by Kissmetrics (or run the math ourselves; see the sketch after this list).
- Similarly, A/B testing typically works best with larger audiences. The bigger, the better; at a minimum, aim for audiences of 1 million or more in each of your subsets.
- Set your KPI of interest up front, whether that’s click-through rate or cost-per-install.
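If you'd rather check significance yourself than lean on a calculator, the underlying math for a CTR comparison is a standard two-proportion z-test. Here's a minimal, self-contained sketch with made-up numbers (not a Facebook or Kissmetrics tool):

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided z-test for a difference in click-through rate
    between two ad variations. Returns (z, p_value)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: variation A got 420 clicks on 50,000 impressions;
# variation B got 480 clicks on 50,000 impressions.
z, p = two_proportion_z_test(420, 50_000, 480, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at 95% if p < 0.05
```

The same logic explains the audience-size advice above: with small audiences, impression counts stay low and the standard error stays large, so even real differences between variations won't reach significance.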
For an in-depth look at how to set up Facebook ads for split testing, Adespresso has you covered.
When You SHOULD Use Creative Split Testing
The value of creative split testing should be apparent: it helps advertisers identify the best-performing ads, audiences, or bidding strategies with scientific rigor. It does, however, require deliberate planning and dedicated budget, which often lowers overall efficiency.
We’re constantly weighing the benefits of A/B testing against those costs, and we’ve outlined some common scenarios in which you SHOULD utilize A/B tests, and others in which you SHOULDN’T.
Here are a few scenarios in which you should.
#1: Testing imagery to guide future creative direction
The golden rule of running split tests is quite simple: use them to test measurable hypotheses. As an advertising agency that strives to maintain campaign-structure hygiene, we take this rule a step further: we run A/B tests against hypotheses that are meaningful to our overall strategy.
When set up correctly, A/B testing can provide black-and-white learnings to inform your creative strategy going forward.
Take this example…
We split-tested an existing high-performing ad against one with the same messaging, concept, and copy but a different design element.
Details: Our isolated variable in this test was the design concept; all ad copy, in-image copy, CTAs, and overall concepts were the same. By monitoring cost differences among our top-performing audience segments, we wanted to determine which direction to take our creative concepts going forward.
Results: After running the test for a full week to reach statistical significance, we were able to determine a winner based on lower CPAs. We then used that learning to guide the rest of our creative asset production, helping us lower overall costs by an average of 60%.
#2: Testing messaging to inform company-wide strategy
Similar to our previous example, split-testing can help inform messaging across channels, or even company-wide.
We ran a split test for an early-stage transportation company to determine which first-touch messaging resonated best with their core demographic.
Details: While we could have run typical ad sets, we knew that in helping our client determine future positioning, having a high level of confidence was important. Running a controlled A/B test also gave us insight into lifetime value, which we’ll discuss later in this post.
Results: In this case, our messaging variations didn’t have a major effect on CTR. Although the result was statistically significant, our winning messaging had a less than 3% better CTR. It’s important to remember, however, that even those findings are valuable, prompting us to ask other questions: do we need to focus on other areas, like the imagery? Calls to action? Value-prop communication? Should we target different audiences with different messaging?
#3: Testing creative concept variations for specific personas
Split testing can also be useful when figuring out what creative resonates with new or hard-to-crack audiences. Employing the “ax first, sandpaper later” mentality, you can utilize a split test to determine which tactics resonate better, and make incremental optimizations afterward.
Take for example…
For one client, we wanted to run a split test to see if persona-specific creative would perform better than generic creative among a new demographic group.
Details: This particular test pitted existing successful ads against new ads to test our hypothesis that persona-specific creative would do better.
Results: The costs for the losing concept were 270% higher. These findings were especially significant because this was an entirely untouched user segment for us, and they gave us conclusive guidelines for approaching this audience in the future.
#4: Determining long-tail value of messaging or user groups
Although more time- and resource-intensive, you can also use split testing to create pseudo-cohorts to monitor over time after your initial test has concluded. Here are some ideas…
- Test what messaging produces the most bottom-of-the-funnel conversions (purchases, etc.).
- Test which Facebook placements have the lowest acquisition costs.
- Test which demographic (i.e., gender, age, interest, education) has the highest lifetime value.
We implemented this tactic for a subscription consumer goods startup to determine what upfront ad messaging produced the best return on ad spend after 28 days.
Details: For this specific test, we wanted to monitor ROAS of groups with and without an offer.
Results: After running our test long enough to get statistically significant cost results, we continued to monitor our two groups. We determined that after 28 days, although conversion rates were virtually identical, one group did indeed have a 27% higher ROAS.
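Mechanically, this kind of follow-up is just cohort bookkeeping: keep the split-test group label on each acquired user, then compare windowed revenue against spend. Here's a minimal sketch, assuming a hypothetical conversion export (the column names, figures, and groups are illustrative, not our client's data):

```python
import pandas as pd

# Hypothetical export: one row per conversion, tagged with the
# split-test group that acquired the user.
conversions = pd.DataFrame({
    "group":   ["offer", "offer", "no_offer", "no_offer", "offer"],
    "revenue": [54.00, 31.50, 47.25, 29.00, 40.00],
    "days_since_acquisition": [3, 27, 10, 30, 14],
})
# Ad spend per split-test group during the test (also illustrative).
spend = pd.Series({"offer": 90.00, "no_offer": 85.00})

# Keep only revenue booked inside the 28-day window, then compute
# ROAS (attributed revenue / ad spend) for each group.
window = conversions[conversions["days_since_acquisition"] <= 28]
roas = window.groupby("group")["revenue"].sum() / spend
print(roas)
```

Because the split test guaranteed the groups were non-overlapping and evenly funded, any ROAS gap you see here can be attributed to the messaging rather than to audience or budget differences.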
When NOT to Use Split-Testing on Facebook
Now it’s time for a reality check. We don’t always have time to set up highly scientific, highly controlled A/B tests. In many circumstances, our structured campaigns and ad sets work just fine, and other circumstances call for tailor-made solutions.
Here are some situations in which you should rethink running an A/B test…
#1: Testing several different creative variables
This is basic A/B testing hygiene; only test one variable at a time.
Even if you wanted to test 20 variations of that one variable, you’re going to have a helluva time reaching statistical significance. You’re better off just running typical ad sets, or if you really want to mix things up, give dynamic creative optimization (DCO) a go.
Although more of a black box, DCO can be more efficient at getting several learnings all at once. While it likely won’t inform your broader strategy, it’s a great use of resources if you want to test several ad copy variations, images, CTAs, etc.
#2: Testing minor creative tweaks that aren’t valuable to your business
As we’ve expressed already, A/B testing is best utilized when you MUST have learnings: when you have a specific, measurable hypothesis to prove or disprove. We don’t advise spinning your wheels A/B testing superfluous ad variations, like individual words, image headlines, or background colors. Instead, test ads that are quite different conceptually.
For example, testing more stereotypical San Francisco imagery against more “local” imagery is well suited to an A/B test, since the two concepts differ meaningfully. Testing color versus black-and-white photography might be valuable, but less so to your business overall.
At the end of the day, Facebook split testing is the only way to run a truly scientifically precise A/B test. That said, split tests do require planning and resources, and as a rule of thumb, you should trust that Facebook’s algorithm will deliver your best-converting ads.
So, unless you NEED indisputable answers, standard ad sets should get the job done. Regardless, we urge you to weigh the pros and cons of setting up, running, and analyzing a Facebook creative split test up front, and we hope this post has helped you reach a conclusion.
About the author
Payton O’Neal is the resident inbound marketer at Bamboo, a mobile-first paid social advertising agency in San Francisco. With a background in content and product marketing, she loves crafting stories out of unruly data, connecting unlikely dots, and traversing the paid social realm. Since Bamboo’s founding in 2014, companies like Dropbox, HotelTonight, Coursera, and Turo have partnered with the agency to scale and manage their acquisition programs.