


There is a conversation that happens in almost every marketing team at some point in the year. Usually around budget planning. Sometimes triggered by a bad quarter. The finance lead pulls up the channel report, notices that the brand campaign, the CTV spend, or the video activity has no clear conversion event attached to it, and asks you to justify it.
And you know the spend is working. You can feel it in the brand search volume, in the way conversion rates have held up, in the feedback from the sales team. But the report in front of you does not say that. The report says last-click attribution gave the credit to paid search, which was always going to be the last thing someone clicked before buying.
Incrementality testing is how you fix that. Here we walk through what it is, how to set one up, and how to take the results back into that budget conversation and actually win it.
Rather than asking “Which channel got the last click?”, incrementality testing asks: “What would have happened if we had not run this campaign at all?”
The difference between what happened with the campaign and what would have happened without it is the incremental effect. That is the number that tells you what your marketing actually caused, as opposed to what it happened to be present for.
This distinction matters more than it might seem. Most brands have a meaningful level of organic demand. Customers who were going to buy anyway will buy, and last-click will credit whichever channel they touched last. Incrementality testing separates those organic conversions from the ones your campaign genuinely drove.
For upper-funnel channels, the time lag between exposure and conversion is the key challenge. A CTV viewer might convert three weeks after seeing your ad. Last-click will not connect those events because three weeks of other digital activity sits between them. Incrementality testing will, because it is measuring outcomes at the audience level, not the session level.
The methodology is more straightforward than it tends to sound when people describe it. You split your target audience into two groups before the campaign runs. One group receives the campaign. The other group, the control group, does not. At the end of the campaign period, you compare conversion rates. The difference is your incremental lift.
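If it helps to see the arithmetic, here is a minimal sketch in Python. Every number in it is hypothetical; the point is only the shape of the calculation.

```python
# Minimal sketch of the core incrementality arithmetic.
# All numbers are hypothetical; plug in your own campaign data.

exposed_users = 900_000        # received the campaign
exposed_conversions = 13_500

control_users = 100_000        # held out of the campaign
control_conversions = 1_200

exposed_rate = exposed_conversions / exposed_users   # 1.50%
control_rate = control_conversions / control_users   # 1.20%

# The control rate estimates what would have happened anyway.
lift = exposed_rate - control_rate                   # 0.30 points
incremental_conversions = lift * exposed_users       # 2,700

print(f"Absolute lift: {lift:.2%}")
print(f"Relative lift: {lift / control_rate:.0%}")   # 25%
print(f"Incremental conversions: {incremental_conversions:,.0f}")
```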
Here is how to do each step without the parts that commonly go wrong.
Step 1: Define and split the audience before anything runs
Start with the full target audience for the campaign. This might be a lookalike audience built from your CRM, a category segment from a retail media network, or a broad demographic group for a brand campaign. The split needs to happen before any targeting or media delivery occurs.
The control group should be a minimum of 10% of the total audience for the results to be statistically meaningful. For smaller campaigns, 20% is safer. The larger the control group, the more confident you can be in the results, but the more reach you give up during the test period. That is a real trade-off, and it is worth making deliberately rather than just defaulting to the smallest defensible control group.
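To make that trade-off concrete, here is a rough sketch of how the precision of the lift estimate changes with the control share. The audience size and conversion rates are hypothetical; substitute your own.

```python
# Sketch: how control-group size affects precision of the lift estimate.
# Audience size and conversion rates below are hypothetical.
import math

TOTAL_AUDIENCE = 1_000_000
CONTROL_RATE = 0.012    # assumed baseline conversion rate
EXPOSED_RATE = 0.015    # assumed exposed conversion rate (0.30-point lift)

for control_share in (0.05, 0.10, 0.20):
    n_control = TOTAL_AUDIENCE * control_share
    n_exposed = TOTAL_AUDIENCE - n_control

    # Standard error of the difference between two proportions.
    se = math.sqrt(
        EXPOSED_RATE * (1 - EXPOSED_RATE) / n_exposed
        + CONTROL_RATE * (1 - CONTROL_RATE) / n_control
    )
    margin = 1.96 * se   # half-width of a 95% confidence interval

    print(f"{control_share:.0%} control: lift measured to ±{margin * 100:.2f} points")
```

With these assumed rates, a 5% hold-out measures the lift to roughly ±0.10 points, while a 20% hold-out tightens that to about ±0.06. Whether the extra precision is worth the lost reach is exactly the deliberate decision described above.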
Step 2: Make sure the split is genuinely random
This is where many incrementality tests fail quietly. If the control group is selected in a way that systematically differs from the exposed group, the comparison is meaningless.
Common mistakes: excluding the control group from one channel but not others, building the control group from a different geographic region, or pulling the control group after the campaign has already started running. The split needs to happen at the individual level, using a stable identifier, before any activity begins.
If you are running the test across multiple channels simultaneously, the same individuals need to be in the control group across all of them. This is operationally fiddly, and it is the reason identity infrastructure matters for measurement, not just targeting.
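A simple way to satisfy both requirements (individual-level, decided up front, consistent across channels) is to derive the assignment from a salted hash of the stable identifier, so every system holding that identifier can recompute the same answer. A sketch, with a hypothetical salt and identifier format:

```python
# Sketch: a deterministic, individual-level split on a stable identifier.
# Hashing means the same person lands in the same group on every channel,
# and any system holding the identifier can recompute the assignment.
import hashlib

CONTROL_SHARE = 0.10           # 10% hold-out
SALT = "ctv-test-2025-q3"      # hypothetical per-test salt

def assignment(user_id: str) -> str:
    """Return 'control' or 'exposed' for a stable user identifier."""
    digest = hashlib.sha256(f"{SALT}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform in [0, 1]
    return "control" if bucket < CONTROL_SHARE else "exposed"

# A fresh salt per test matters: without one, the same people would sit
# in the control group of every test you ever run, which biases results.
print(assignment("crm-user-482910"))   # hypothetical identifier
```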
Step 3: Hold the control group out of everything relevant
During the campaign period, the control group receives no exposure to the campaign being tested. If you are testing a CTV campaign, they do not see the CTV ads. If you are testing a full-funnel campaign across CTV, display, and email, they are suppressed from all three.
This requires the ability to identify and suppress the same people across multiple platforms and environments. Without that, control group members will inevitably receive some exposure, which dilutes the results in the direction of making your campaign look less effective than it is.
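Operationally, that usually means exporting the same control group as a suppression list to every platform involved. Here is a sketch of that export, assuming the platforms accept SHA-256-hashed, lowercased emails for audience matching (many do, but verify each platform's own upload spec); the field names and file layout are illustrative:

```python
# Sketch: exporting the same hold-out as a suppression list per platform.
# Assumes platforms accept SHA-256-hashed, lowercased emails for audience
# matching; check each platform's actual spec before relying on this.
import csv
import hashlib

def hash_email(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical control-group export, from the same split used everywhere.
control_group = [
    {"user_id": "crm-user-482910", "email": "jane.doe@example.com"},
]

for platform in ("ctv", "display", "email"):
    with open(f"suppression_{platform}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hashed_email"])
        for person in control_group:
            writer.writerow([hash_email(person["email"])])
```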
Step 4: Measure the right outcome over the right window
For upper-funnel campaigns, the right outcome is rarely an online click. It is more likely to be a store visit or a subscription, measured across both online and offline channels and over a window that reflects how long people actually take to make that decision.
For a fast-moving consumer product, two weeks might be sufficient. For a household appliance or a financial product, four to six weeks is more appropriate. Setting the measurement window too short is one of the most common ways incrementality tests undercount upper-funnel effectiveness.
Step 5: Express the result as incremental revenue, not incremental conversions
Here are the metrics you want to have to hand: incremental conversions (the lift in conversion rate between exposed and control, multiplied by the size of the exposed audience), incremental revenue (incremental conversions multiplied by average order value), and incremental ROAS (incremental revenue divided by campaign spend).
Incremental ROAS is not the same number as total ROAS. Incremental ROAS measures only what the campaign caused. For upper-funnel channels, the gap between the two is typically large, and it consistently moves in the direction of making upper-funnel spend look more effective than last-click was suggesting.
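To make that concrete, here is the arithmetic end to end, reusing the hypothetical 2,700 incremental conversions from the earlier sketch; the spend, order value, and last-click figure are also made up:

```python
# Sketch: last-click ROAS vs incremental ROAS for the same campaign,
# reusing the hypothetical 2,700 incremental conversions from earlier.

spend = 250_000.0
avg_order_value = 120.0

# What last-click hands the campaign: only conversions where this ad
# happened to be the final touch. Hypothetical figure.
last_click_conversions = 600
last_click_roas = last_click_conversions * avg_order_value / spend   # 0.29

# What the hold-out test showed the campaign actually caused.
incremental_conversions = 2_700
incremental_revenue = incremental_conversions * avg_order_value      # 324,000
incremental_roas = incremental_revenue / spend                       # 1.30

missed = incremental_revenue - last_click_conversions * avg_order_value
print(f"Last-click ROAS:   {last_click_roas:.2f}")
print(f"Incremental ROAS:  {incremental_roas:.2f}")
print(f"Revenue last-click was not capturing: ${missed:,.0f}")
```

Note the direction of the gap in this example: for an upper-funnel channel, the incremental figure is the larger one, which is the pattern described above.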
“We cannot afford to hold out a control group”
The cost of running a control group is the reach you give up during the test period, typically 10 to 20% of your audience for one campaign cycle.
However, the cost of not running one is continuing to allocate budget based on measurement that systematically misattributes the contribution of your most important brand-building channels.
Run the test once properly, use the results to rebalance your channel mix, and the incremental revenue recovered will significantly outweigh the reach sacrificed during the test period.
“Our finance team will not accept results that are not in our standard dashboard”
This is a presentation challenge, not a measurement one. Build a single-page summary with three numbers side by side: total ROAS from standard reporting, incremental ROAS from the test, and the difference expressed as the revenue that last-click was not capturing.
Most finance teams respond well to being told that existing reporting is undercounting the revenue a channel is driving. It reframes the conversation from “trust our methodology” to “your current methodology is leaving money on the table.” That is a more useful frame for a budget conversation.
“We do not have the identity infrastructure to run a clean hold-out”
This is the most legitimate objection and the one worth taking seriously. A clean incrementality test across multiple channels requires individual-level identity resolution that persists across environments, so that the same person can be held out of a CTV campaign, a display campaign, and an email campaign simultaneously. Without it, the test will leak and the results will be less reliable.
The practical answer is to start with a single-channel test where the hold-out is operationally clean, establish the methodology and the internal confidence that comes with it, and then extend to cross-channel testing as the identity infrastructure matures. Starting imperfectly is significantly better than not starting.
The budget conversation about upper-funnel spend has historically been won or lost on credibility rather than evidence, because the evidence was not there. Incrementality testing provides the evidence.
That is a different kind of confidence to walk into a room with. And it tends to produce different outcomes at the next planning cycle, because the number is expressed in the language finance teams use to make decisions rather than the language marketing teams use to describe their work.
If you want to understand how to build incrementality testing into your measurement framework, or how to present the results in a way that changes your internal budget conversation, talk to our team.