
Here’s What Top Marketers Are Doing to Ensure Their Marketing Works

Written by Michelle Wiles | Feb 1, 2023

Marketers today can test ads and creative campaigns to see what works before committing significant media spend to particular concepts.

The problem is, many types of testing fall short. Recent innovations in testing give savvy marketers an edge in effectiveness, provided they use the right kind of test.

So, what kind of testing is best? Let’s go over the types.

1. Pre/Post Testing

A common testing approach is pre/post testing, where you measure brand sentiment or sales before and after a campaign.

Pro: Right metrics

Pre/post testing allows marketers to measure changes in their ultimate objectives (sales, revenue, brand sentiment) accurately, instead of relying on inaccurate proxies like clicks and ad views.

Con: Attribution

Metrics cannot definitively be attributed to your ads. For example, if you pre-test in August and post-test in November, any difference in sales is just as likely to be caused by seasonal shifts in demand as by anything your marketing is doing. Or perhaps by a competitor’s campaign, or a new in-store promotion … any of a wide range of so-called exogenous causes could be the real explanation.

Con 2: Too little, too late

By the time you get results, you’ve already run the campaign, or a large part of it. You can report on the results for next year, but there is not a lot you can do about the money you have already spent.

Conclusion

With pre/post testing, you know something changed; you just don’t know what drove it … and you can’t do much about it.

2. A/B Testing

A/B testing is a general term for experiments comparing two or more options. In marketing, the metrics are generally clicks or conversions, such as how one landing page converts versus another.
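For the analytically minded, here is a minimal sketch of what an A/B readout boils down to, using made-up traffic numbers and a standard two-proportion z-test to check whether the gap between two landing pages is bigger than chance alone would explain:

```python
import math

# Hypothetical A/B result: visitors and conversions for two landing pages.
visitors_a, conv_a = 10_000, 310
visitors_b, conv_b = 10_000, 365

rate_a = conv_a / visitors_a
rate_b = conv_b / visitors_b

# Pooled two-proportion z-test: is the difference bigger than random noise?
pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se

print(f"Page A: {rate_a:.2%}  Page B: {rate_b:.2%}  z = {z:.2f}")
# |z| above roughly 1.96 corresponds to p < 0.05 for a two-sided test.
```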

Pro: Accurate for bottom-of-funnel objectives

A/B tests are very useful when clicks or immediate conversions are the goal, such as the “buy now!” ads in our social feeds that link straight to an e-commerce site.

Con: Not suited to upper-funnel brand and demand-creation campaigns

Valuable brands like Nike and Apple allocate a large share of their spend to upper-funnel campaigns that cannot be assessed via clicks. Their goals are core business drivers like product consideration, brand sentiment, and brand equities such as quality, trust, or excitement. None of these can be measured with clickstream data.

Con 2: Not suited to in-store purchase objectives

Clickstream-derived metrics can be great for optimizing for the final click in e-commerce. But the vast majority of commerce does not take place online (online’s share of US retail was less than 15% in Q3 2022). Clicks and engagement data are not useful for in-store sales.

Conclusion

A/B testing is an important tool for bottom-of-funnel, direct-response marketing, but it is misleading when applied to upper-funnel campaigns or to businesses where non-digital or non-immediate sales are significant.

3. RCT Testing

When getting it right matters, marketers are now turning to randomized controlled trial (RCT) survey experiments. If you’ve worked in science or public health, you are most likely intimately familiar with using RCTs to determine causality.

RCTs used to take months to execute and cost hundreds of thousands of dollars, so they were used only sparingly, for the most crucial decisions. But technology has recently made RCTs far faster and more cost-effective, so much so that they’re now accessible for day-to-day decisions on campaigns.

Pro: Accurate metrics across the full purchase funnel

RCTs can be used to directly measure consumer attitudes and intent, like brand consideration, favorability, and purchase intent, that other tests like A/B cannot assess.

Pro: Proof of causality

The “controlled” in randomized controlled trial means every RCT includes a control audience that is assessed on the same marketing metrics but does not see your ad. You isolate the impact of your campaign by comparing a randomized group of people who saw it with a randomized group who did not. If the ad lifted brand sentiment or purchase intent, you know for sure that your messaging or ads caused it.
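To make that comparison concrete, here is a minimal sketch using synthetic survey data (not Swayable’s actual methodology): respondents are randomized into a treatment group that sees the ad and a control group that does not, and the lift is simply the difference in the metric between the two groups.

```python
import random
import statistics

random.seed(7)

def purchase_intent(saw_ad: bool) -> int:
    """Simulated 1-5 purchase-intent answer; the ad adds a small true effect."""
    score = random.choices([1, 2, 3, 4, 5], weights=[10, 20, 35, 25, 10])[0]
    if saw_ad and random.random() < 0.15:
        score = min(score + 1, 5)
    return score

# Randomly split 2,000 survey respondents into treatment and control.
respondents = list(range(2_000))
random.shuffle(respondents)  # randomization is what isolates the ad's effect
treatment, control = respondents[:1_000], respondents[1_000:]

treat_scores = [purchase_intent(saw_ad=True) for _ in treatment]
control_scores = [purchase_intent(saw_ad=False) for _ in control]

lift = statistics.mean(treat_scores) - statistics.mean(control_scores)
print(f"Estimated lift in purchase intent: {lift:+.2f} points")
```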

Pro: Get the data before you spend on media

Unlike pre/post tests, RCT tests allow savvy marketers to test campaign ideas for impact before going live with a big media spend. For example, hot sauce brand Truff tested multiple ad ideas with RCTs to determine which one would drive the most in-store demand. They put their media spend behind the top-performing ad, saving money and increasing impact at the same time. Likewise, Paramount Pictures used RCTs to drive box-office sales during its biggest-ever year in 2022.

Pro: Results for every target segment

Data science can be used to stratify respondents across all the segments they belong to (age, gender, income, location, purchasing behavior, etc.), ensuring not only that any sampling imbalance is corrected, but also that results can be reported for each target segment separately. Thomas’ English Muffins used RCTs to test the impact of their creative on different age groups, which allowed them to attract millennials to the brand.
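As an illustration only (the data below is made up), a stratified readout simply computes the treatment-vs-control lift within each segment, so each audience gets its own answer:

```python
from collections import defaultdict
from statistics import mean

# Each record: (segment, group, survey score on the metric of interest).
responses = [
    ("18-34", "treatment", 4), ("18-34", "control", 3),
    ("18-34", "treatment", 5), ("18-34", "control", 3),
    ("35-54", "treatment", 3), ("35-54", "control", 3),
    ("35-54", "treatment", 4), ("35-54", "control", 3),
    ("55+",   "treatment", 3), ("55+",   "control", 3),
    ("55+",   "treatment", 3), ("55+",   "control", 2),
]

# Group scores by segment and by treatment/control.
scores = defaultdict(lambda: defaultdict(list))
for segment, group, score in responses:
    scores[segment][group].append(score)

# Report the lift separately for each segment.
for segment, groups in scores.items():
    lift = mean(groups["treatment"]) - mean(groups["control"])
    print(f"{segment}: lift = {lift:+.2f}")
```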

Conclusion

For ad campaigns designed to drive demand, intent or opinion, RCTs are the best evidence of what will work, and they empower marketers to make better decisions.

Takeaways

  • Marketers use A/B tests and pre/post tests to optimize their marketing assets. Unfortunately, both of these test types have significant drawbacks

  • RCT tests, popular in the scientific community, are more accurate. But they have traditionally been time-consuming and expensive

  • Advances in technology mean they are now fast and affordable, with results in as little as 24 hours on Swayable

Ready to see how RCTs can improve your marketing?
