Creative testing is quickly becoming the norm to help marketers launch more effective campaigns. However, not all ad testing platforms are equal.
As in scientific or clinical trials, the way to isolate impact is to include a control group. In ad testing, this means a group that sees a non-ad piece of content (the placebo) and answers the same questions as the group that sees your real ad.
Without a control group, you might be able to tell whether audiences are interested in your brand after seeing your ad. But you can't tell whether your ad created that interest or it already existed, which defeats the purpose of ad testing.
Randomized controlled trials (RCTs) are a specific type of test that always includes a control group: respondents are randomly assigned to see either your ad or the placebo, so the only systematic difference between the two groups is the ad itself.
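To make the mechanics concrete, here is a minimal sketch of an RCT in code. The function names, the 50/50 split, and the favorability scores are all illustrative assumptions, not any vendor's actual methodology:

```python
import random

def assign_groups(respondents, seed=42):
    """Randomly assign each respondent to test (sees the ad)
    or control (sees the placebo)."""
    rng = random.Random(seed)
    test, control = [], []
    for r in respondents:
        (test if rng.random() < 0.5 else control).append(r)
    return test, control

def lift(test_scores, control_scores):
    """Difference in mean favorability between the two groups:
    the interest your ad created, above the baseline that
    already existed."""
    return (sum(test_scores) / len(test_scores)
            - sum(control_scores) / len(control_scores))

# Hypothetical survey results (share of each group answering favorably):
# 62% favorable after seeing the ad vs. 50% after the placebo.
print(lift([1] * 62 + [0] * 38, [1] * 50 + [0] * 50))  # 0.12, i.e. 12 points of lift
```

The control group's score is the pre-existing interest; only the difference between groups can be credited to the ad.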
Tests are only useful if they help you make better decisions.
In the past, marketers had to rely on post-campaign studies to tell how their campaigns shifted perception. With today’s pre-testing options, marketers can choose which campaigns and ideas to invest in before they launch, allowing them to make better decisions (and save media dollars).
If you want to find out how your ad will perform with an audience, the best way is simply to ask them. Some companies claim to replace respondents with artificial intelligence that predicts how effective ads are. AI can do some amazing things (have you seen ChatGPT?). But even the best AI models can only pattern-match against what has done well in the past; they have no way to know whether a genuinely new creative idea will work, and new ideas are where great advertising comes from.
Statistically significant results require large sample sizes, typically in the thousands, and sophisticated analysis to prove impact and understand margins of error. Ask your vendor what their sample sizes are and how they calculate margins of error.
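As a rough illustration of why sample size matters, here is the standard margin-of-error formula for a surveyed proportion (a simplified sketch; real vendors may use more sophisticated methods and account for weighting):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion.

    p: observed share (e.g. 0.5 = 50% answered favorably)
    n: sample size
    z: z-score for the confidence level (1.96 for 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): a sample of 1,000 gives roughly +/- 3.1 points,
# while a sample of 100 gives roughly +/- 9.8 points.
print(round(margin_of_error(0.5, 1000), 3))  # 0.031
print(round(margin_of_error(0.5, 100), 3))   # 0.098
```

With only 100 respondents, a 5-point lift is indistinguishable from noise; with thousands, it becomes meaningful.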
Tests are only as good as what they can predict. Ask your vendor to share validation studies linked to real-world outcomes, for example, data showing that their ad recommendations grew top-line sales.