The attribution debate never ends. Let me share how I think about it.
The Core Problem
A user sees a Facebook ad, reads a blog post, clicks a Google ad, and then converts. Who gets credit?
- Last-Touch Attribution (LTA): Google gets 100%
- First-Touch Attribution (FTA): Facebook gets 100%
- Multi-Touch Attribution (MTA): Some weighted split
None of these answers is “correct”—they are all models making different assumptions.
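To make the difference concrete, here is a minimal sketch in Python of how each model would split credit for that journey (the touchpoint names and the 1.0 unit of credit are just illustrative):

```python
# Toy illustration of how different attribution models split one conversion's credit.
journey = ["facebook_ad", "blog_post", "google_ad"]  # ordered touchpoints

def last_touch(touchpoints):
    """100% of the credit to the final touchpoint."""
    return {touchpoints[-1]: 1.0}

def first_touch(touchpoints):
    """100% of the credit to the first touchpoint."""
    return {touchpoints[0]: 1.0}

def linear(touchpoints):
    """Even split across all touchpoints (one simple MTA variant)."""
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

print(last_touch(journey))   # {'google_ad': 1.0}
print(first_touch(journey))  # {'facebook_ad': 1.0}
print(linear(journey))       # roughly 0.33 each
```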
Why LTA Persists
LTA is often criticized as simplistic, but it has real advantages:
- Simplicity: Easy to explain, easy to implement
- Actionability: Clear signal for what to optimize
- Conservatism: Tends to favor lower-funnel, high-intent channels
For many businesses, especially those with short consideration cycles, LTA is good enough. I have seen teams overcomplicate things when LTA would have served them fine.
When MTA Helps
MTA shines when:
- Long consideration cycles: B2B, big-ticket purchases
- Heavy upper-funnel investment: Brand campaigns, content marketing
- Complex customer journeys: Multiple devices, channels, touchpoints
MTA attempts to credit each touchpoint based on its contribution to conversion.
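In practice, many MTA implementations approximate “contribution” with heuristic weights rather than estimating it from data. A minimal sketch of one common heuristic, position-based (often 40/20/40) weighting; the split is an assumption, not a measurement:

```python
def position_based(touchpoints, first=0.4, last=0.4):
    """U-shaped heuristic: heavy credit to the first and last touch,
    the remainder split evenly across the middle touchpoints."""
    if len(touchpoints) == 1:
        return {touchpoints[0]: 1.0}
    if len(touchpoints) == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle_share = (1.0 - first - last) / (len(touchpoints) - 2)
    credit = {t: middle_share for t in touchpoints[1:-1]}
    credit[touchpoints[0]] = first
    credit[touchpoints[-1]] = last
    return credit

print(position_based(["facebook_ad", "blog_post", "google_ad"]))
# ~0.4 / 0.2 / 0.4 split between first, middle, and last touch
```

The weights are a modeling choice, which is exactly where the next problem comes in.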
The Fundamental Problem with MTA
Here is the dirty secret: MTA does not measure incrementality.
MTA answers: “What touchpoints appeared in converting journeys?”
MTA does not answer: “What would have happened without those touchpoints?”
A user who was already going to convert will still have touchpoints in their journey. MTA credits them anyway. This is why I always want to calibrate MTA with actual experiments.
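Concretely, calibration means comparing what the model credits with what an experiment measures. A minimal sketch of a holdout test, with made-up numbers:

```python
# Toy holdout experiment: the channel is shown to the treatment group
# and withheld from a randomly assigned control group. Numbers are made up.
treatment_users, treatment_conversions = 100_000, 2_300
control_users, control_conversions = 100_000, 2_000

cr_treatment = treatment_conversions / treatment_users   # 2.3%
cr_control = control_conversions / control_users         # 2.0%

# Incremental conversions the channel actually caused (point estimate).
incremental = (cr_treatment - cr_control) * treatment_users  # 300

# Conversions the attribution model credited to the channel over the same period.
attributed = 1_200  # hypothetical MTA-credited figure

# Calibration factor: how much of the attributed credit was truly incremental.
incrementality_factor = incremental / attributed  # 0.25
print(f"lift: {cr_treatment - cr_control:.2%}, "
      f"incremental: {incremental:.0f}, factor: {incrementality_factor:.2f}")
```

In practice you would also put a confidence interval around that lift; the point estimate alone can be noisy.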
A Better Framework
Instead of debating attribution models, I ask:
- What decision are you trying to make?
  - Reallocate budget across channels? You need incrementality testing.
  - Optimize within a channel? Platform attribution might be fine.
  - Understand customer journeys? User path analysis.
- What accuracy do you need?
  - Directionally correct? LTA is often sufficient.
  - Precisely calibrated? You need experiments.
- What can you actually test?
  - Design holdout experiments where possible
  - Use geo-tests for channels that cannot be individually randomized (sketched below)
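For the geo-test case, one common starting point is a difference-in-differences on geo-level conversions; the sketch below uses made-up data and skips the matched-market selection and proper inference you would want in a real test:

```python
# Toy geo-test: spend changes in test geos while control geos stay unchanged.
# Each row is (geo, period, conversions); all figures are made up.
data = [
    ("geo_a", "pre", 1_000), ("geo_a", "test", 1_150),   # test geos
    ("geo_b", "pre",   800), ("geo_b", "test",   930),
    ("geo_c", "pre", 1_200), ("geo_c", "test", 1_230),   # control geos
    ("geo_d", "pre",   900), ("geo_d", "test",   920),
]
test_geos = {"geo_a", "geo_b"}
control_geos = {g for g, _, _ in data} - test_geos

def total(rows, geos, period):
    """Sum conversions for the given geos in the given period."""
    return sum(c for g, p, c in rows if g in geos and p == period)

# Difference-in-differences: change in test geos minus change in control geos.
test_delta = total(data, test_geos, "test") - total(data, test_geos, "pre")
control_delta = total(data, control_geos, "test") - total(data, control_geos, "pre")
incremental = test_delta - control_delta
print(f"test delta: {test_delta}, control delta: {control_delta}, "
      f"estimated incremental conversions: {incremental}")  # 280, 50, 230
```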
My Current View
Attribution models are useful for monitoring and directional optimization. For budget allocation decisions, they should be calibrated against experimental evidence.
The best measurement systems combine (sketched after this list):
- Attribution for day-to-day monitoring
- Incrementality tests for calibration
- Media mix modeling (MMM) for overall budget allocation
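As a rough illustration of how the calibration piece ties attribution and experiments together (channel names and numbers here are hypothetical), you can scale each channel's attributed conversions by an experiment-derived incrementality factor:

```python
# Attributed conversions from the day-to-day attribution model (hypothetical).
attributed = {"search": 4_000, "social": 2_500, "display": 1_500}

# Incrementality factors from periodic experiments (hypothetical):
# the fraction of attributed conversions that were actually incremental.
incrementality = {"search": 0.55, "social": 0.80, "display": 0.25}

# Calibrated view used for budget conversations; attribution alone would
# overstate search and display relative to social here.
calibrated = {ch: round(attributed[ch] * incrementality[ch]) for ch in attributed}
print(calibrated)  # {'search': 2200, 'social': 2000, 'display': 375}
```

Those factors go stale, so the underlying experiments need to be rerun periodically.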
No single method gives you truth. But triangulating across methods gets you closer.