The attribution debate never ends. Here is how I think about it.
The core problem
A user sees a Facebook ad, clicks a Google ad, reads a blog post, then converts. Who gets credit?
- Last-Touch (LTA). Google gets 100%.
- First-Touch (FTA). Facebook gets 100%.
- Multi-Touch (MTA). Some weighted split.
None of these is “correct.” They are all models with different assumptions.
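To make the assumptions concrete, here is a minimal sketch of three of these rules splitting the single conversion from the journey above. The touchpoint names and helper functions are illustrative, not any platform's API; note that the sketch takes "last touch" literally (crediting the blog post), whereas platform LTA usually means the last *paid* click, which is why the example above credits Google.

```python
def last_touch(journey):
    """All credit to the final touchpoint in the journey."""
    return {journey[-1]: 1.0}

def first_touch(journey):
    """All credit to the first touchpoint in the journey."""
    return {journey[0]: 1.0}

def linear(journey):
    """Equal credit to every touchpoint (one simple MTA rule)."""
    share = 1.0 / len(journey)
    return {tp: share for tp in journey}

# The journey from the example above (touchpoint names are invented).
journey = ["facebook_ad", "google_ad", "blog_post"]

print(last_touch(journey))   # {'blog_post': 1.0}
print(first_touch(journey))  # {'facebook_ad': 1.0}
print(linear(journey))       # each touchpoint gets 1/3
```

Same journey, three different answers. None of the outputs is wrong; each just encodes a different assumption about where value is created.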
Why LTA sticks around
LTA is often dismissed as simplistic, but it has real advantages:

- Simple. Easy to explain, easy to build.
- Actionable. Clear signal for what to optimize.
- Conservative. Tends to favor the lower-funnel, high-intent channels.
For businesses with short consideration cycles, LTA is usually enough. I have seen teams make this too complicated when LTA would have done the job.
When MTA helps
MTA is more useful when:
- The consideration cycle is long (B2B, big-ticket consumer).
- You invest heavily upper-funnel (brand, content).
- Customer journeys are complex — multiple devices, channels, touchpoints.
MTA tries to credit each touchpoint based on its contribution to the conversion.
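One common weighting heuristic is position-based ("U-shaped") credit: a large share to the first and last touchpoints, with the remainder split across the middle. This is a sketch with conventional 40/20/40 weights; the weights are a modeling choice, not a measurement, and the function assumes unique touchpoint names.

```python
def position_based(journey, first=0.4, last=0.4):
    """U-shaped MTA: heavy credit to first and last touch,
    remainder split evenly across middle touchpoints."""
    if len(journey) == 1:
        return {journey[0]: 1.0}
    if len(journey) == 2:
        return {journey[0]: 0.5, journey[1]: 0.5}
    middle_share = (1.0 - first - last) / (len(journey) - 2)
    credit = {tp: middle_share for tp in journey[1:-1]}
    credit[journey[0]] = first
    credit[journey[-1]] = last
    return credit

# First and last touch get 40% each; the middle touch gets the rest.
print(position_based(["facebook_ad", "google_ad", "blog_post"]))
```

Change the weights and the "story" the model tells changes with them, which is the point: the split reflects assumptions, not causal contribution.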
The dirty secret
Here is the thing: MTA does not measure incrementality.
MTA answers: “what touchpoints appeared in the journeys that converted?”
It does not answer: “what would have happened without those touchpoints?”
A user who was going to convert anyway still has touchpoints. MTA credits them regardless. That is why I always want to calibrate MTA with an actual experiment.
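A toy holdout readout (all numbers invented) shows the gap between attributed and incremental conversions: the experiment reveals how many conversions would have happened without the ads.

```python
# Treated group saw the ads; holdout group had ads withheld.
treated_users, treated_conversions = 100_000, 2_400
holdout_users, holdout_conversions = 100_000, 2_000

treated_rate = treated_conversions / treated_users    # 0.024
baseline_rate = holdout_conversions / holdout_users   # 0.020

incremental_rate = treated_rate - baseline_rate
incremental_conversions = incremental_rate * treated_users

# Attribution would hand out credit for all 2,400 treated conversions,
# but only ~400 of them were actually caused by the ads.
print(f"lift over baseline: {incremental_rate / baseline_rate:.0%}")
```

In this toy example, attribution over-credits the channel by a factor of six; the experiment is what tells you the true denominator.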
A better framework
Instead of arguing about attribution models, ask:
1. What decision am I making?
- Reallocating budget across channels → you need incrementality testing.
- Optimizing within a channel → platform attribution is probably fine.
- Understanding journeys → path analysis.
2. How accurate do I need it?
- Directionally correct → LTA is often enough.
- Precise → you need experiments.
3. What can I actually test?
- Run holdout experiments where you can.
- Use geo tests for channels that cannot be randomized at the user level.
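A geo test readout can be as simple as comparing matched geo pairs: treatment geos where the channel runs against matched control geos where it is paused. The pairs and numbers below are invented for illustration; real geo tests need careful matching and more sophisticated estimation than this sketch.

```python
# (treatment geo conversions, matched control geo conversions)
pairs = [
    (1_250, 1_100),
    (980, 900),
    (1_430, 1_380),
]

treat_total = sum(t for t, _ in pairs)
control_total = sum(c for _, c in pairs)
lift = (treat_total - control_total) / control_total

print(f"estimated incremental lift: {lift:.1%}")
```

The appeal of geo tests is that the unit of randomization is the region, so they work even for channels (TV, out-of-home, walled-garden ads) where user-level holdouts are impossible.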
Where I land
Attribution models are good for monitoring and directional optimization. For budget allocation, they should be calibrated against experimental evidence.
The best measurement stack I have seen is a combination:
- Attribution for day-to-day monitoring
- Incrementality tests for calibration
- MMM for overall budget allocation
No single method gives you truth. Triangulating across methods gets you closer.
See it for yourself
Build a customer journey below and watch how five different attribution models split a single conversion. Same data, radically different stories — which is exactly why debating models without defining the decision is a waste of breath.