DMA tests, or geo-based incrementality tests, are a direct and effective way to measure campaign impact. However, in practice, they come with operational and analytical limitations. Below, I share insights from my experience, using an example to highlight key challenges and considerations.
Example: Sponsorship Campaign in City A
We ran a sponsorship campaign in City A, partnering with local sports events to display our brand logo in arenas and on courts for three months. After the campaign, we applied a difference-in-differences (DiD) analysis: the three months prior to the campaign launch served as the pre-period, the rest of the nation as the control group, and the campaign period as the post-period. Revenue data showed the following:
- Control (Rest of Nation): Pre: $100k, Post: $110k (+10% growth)
- Test (City A): Pre: $10k, Post: $12k (+20% growth)
If we hadn’t launched the campaign, we’d expect the test group to grow in line with the control group (+10%), resulting in a baseline of $11k for City A. The actual post-campaign revenue of $12k reflects a 9% lift over the baseline, or $1k in incremental revenue. With a campaign spend of $2k, the ROAS is $0.5. While this seems like a standard DMA test, there are several caveats to consider.
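For concreteness, here is the same calculation as a minimal Python sketch; the figures are just the illustrative numbers above, and the variable names are my own.

```python
# Difference-in-differences read for the City A example (illustrative numbers).
control_pre, control_post = 100_000, 110_000   # rest of nation
test_pre, test_post = 10_000, 12_000           # City A
spend = 2_000                                  # campaign spend in City A

# Counterfactual: assume City A would have grown at the control group's rate.
control_growth = control_post / control_pre        # 1.10
baseline = test_pre * control_growth               # $11k

incremental_revenue = test_post - baseline         # $1k
lift = incremental_revenue / baseline              # ~9%
roas = incremental_revenue / spend                 # 0.5

print(f"baseline=${baseline:,.0f}, incremental=${incremental_revenue:,.0f}, "
      f"lift={lift:.1%}, ROAS=${roas:.2f}")
```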
Caveat 1: Choosing the Control Group
Why use the rest of the nation as the control group instead of a similar city? First, for campaigns tied to local sports events, only a limited number of cities qualify, making it hard to find a lookalike control city. Second, selecting a matched set of control cities requires extensive research, comparison, and decision-making, which adds planning time, especially if other incrementality tests are already running. Third, it’s hard to convince senior executives that a chosen control city accurately mirrors the test city and represents the broader market. The rest of the nation isn’t a perfect control group, but it provides a directional read and is far easier to set up.
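If you do pursue a lookalike-control approach, much of that comparison work can be mechanized: line up pre-period revenue by city and check how closely each candidate tracks the test city. Here is a minimal sketch under assumed data (the weekly figures and column names are hypothetical, and correlation of weekly growth is just one possible similarity measure):

```python
import pandas as pd

# Hypothetical pre-period data: weekly revenue ($k) by city.
pre = pd.DataFrame({
    "city_a": [9.5, 10.1, 9.8, 10.4, 10.2, 10.0],   # test city
    "city_b": [9.7, 10.0, 9.9, 10.5, 10.1, 10.3],   # candidate control
    "city_c": [4.1, 4.6, 5.2, 5.0, 5.5, 6.0],       # candidate control
})

# Rank candidates by how closely their weekly growth tracks the test city.
growth = pre.pct_change().dropna()
similarity = growth.drop(columns="city_a").corrwith(growth["city_a"])
print(similarity.sort_values(ascending=False))
```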
Caveat 2: DMA Spillover
DMA mapping isn’t always accurate. You might launch a campaign in City A but see incremental sessions in City B due to spillover. To mitigate this, check with the data engineering team before launch to estimate spillover percentages for the selected DMA(s), and flag the potential noise to the marketing team. After the campaign ends, adjust your calculations using the spillover rate. In the earlier example, if 20% of sessions spilled over to other DMAs, only 80% of the spend ($1.6k of the $2k) effectively targeted City A, so the adjusted ROAS becomes $1k / $1.6k = $0.625, up from $0.5. Additionally, exclude “problematic DMAs” (e.g., cities running other DMA tests or heavily affected by spillover) from the analysis.
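The spillover adjustment itself is simple arithmetic; a short sketch continuing the example, where the 20% rate stands in for whatever estimate the data engineering team provides:

```python
# Spillover-adjusted ROAS for the City A example (illustrative numbers).
incremental_revenue = 1_000   # from the DiD read above
spend = 2_000
spillover_rate = 0.20         # estimated share of sessions leaking to other DMAs

effective_spend = spend * (1 - spillover_rate)         # $1.6k actually hit City A
adjusted_roas = incremental_revenue / effective_spend  # 0.625

print(f"effective spend=${effective_spend:,.0f}, adjusted ROAS=${adjusted_roas:.3f}")
```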
Caveat 3: Reporting and Actions
When running incrementality tests across multiple DMAs, avoid averaging results. Report each city separately, as results can diverge significantly due to city-specific factors. This allows for deeper analysis of other metrics at the city level to refine learnings. A key question post-test is: Should we scale the campaign to more DMAs, go nationwide, or stop? The answer depends on your business and marketing strategy. Based on my experience, I evaluate three factors:
- Lift on Primary KPI (e.g., Registrations): Is the lift positive and statistically significant?
- ROAS (Incremental Revenue / Ad Spend): Is it above $1?
- Saturation: Does investment in this campaign type still have headroom (i.e., is it not yet saturated)?
If all three conditions are met, I’d encourage continuing the test across more DMAs, or scaling nationwide if results are exceptionally strong. If available, I’d also leverage a marketing mix model (MMM) readout to validate the recommendation.
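As a rough sketch of how this decision rule could be codified (not a definitive framework): assume a p-value for the KPI lift is already available, e.g., from a DiD regression on daily city-level data, and that saturation is a judgment call passed in as a flag.

```python
def scaling_recommendation(lift: float, lift_p_value: float,
                           roas: float, is_saturated: bool,
                           alpha: float = 0.05) -> str:
    """Translate the three-factor check into a scale / don't-scale call."""
    checks = {
        "positive, significant lift": lift > 0 and lift_p_value < alpha,
        "ROAS above $1": roas > 1.0,
        "investment not yet saturated": not is_saturated,
    }
    if all(checks.values()):
        return "continue testing in more DMAs (or scale nationwide if very strong)"
    failed = [name for name, ok in checks.items() if not ok]
    return f"hold off on scaling; failed checks: {', '.join(failed)}"


# Example: City A's spillover-adjusted read would not clear the ROAS bar.
print(scaling_recommendation(lift=0.09, lift_p_value=0.04,
                             roas=0.625, is_saturated=False))
```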
Conclusion
DMA tests are powerful for measuring campaign incrementality but require careful handling of control group selection, spillover effects, and reporting. By addressing these challenges and aligning results with business goals, you can make informed decisions about scaling or adjusting campaigns.