Marketing attribution models
Six attribution models, six different stories. The trade-offs behind each one.
A customer saw four ads, read three blog posts, got two emails, and bought after a podcast mention. Which channel deserves credit? Six attribution models exist because the answer is genuinely contested — and each one tells a different story about your marketing mix.
Why attribution is hard
Attribution exists because customers rarely buy on the first touch. They wander. By the time they convert, they've passed through awareness channels, consideration content, and conversion-focused triggers — and somebody has to decide how to split credit. The model you pick determines which channels look like winners and which look like dead weight.
The honest framing: there is no single correct attribution model. Each one over- or under-credits certain channels by design. The job is to know which trade-offs you're accepting, run more than one model in parallel as a sanity check, and report the range — not a single false-precision number.
Single-touch models
Single-touch models give 100% of the credit to one interaction. They are simple, computable from a single click, and wrong in opposite directions.
- First-touch attribution credits the very first interaction in the customer journey. It overweights upper-funnel awareness channels — display ads, podcast sponsorships, content syndication — and ignores everything that happened between discovery and purchase. Useful when you're trying to understand where new prospects come from. Misleading when used to evaluate full-funnel performance.
- Last-touch attribution credits the final interaction before conversion. It overweights closing channels — branded search, retargeting, direct traffic, email — at the expense of every channel that built the demand. It's the default in most analytics platforms because it's easy to compute, which is also why it has produced more bad budget decisions than any other model.
If you only run last-touch (most teams), you'll systematically under-fund the upper funnel until you're cutting awareness spend that was actually working — and then wondering six months later why your closer channels are also tanking. The two are connected.
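Both single-touch rules fit in a few lines. A minimal sketch; the journey and channel names are invented for illustration:

```python
# Hypothetical customer journey, ordered from first touch to conversion.
journey = ["podcast", "blog", "email", "branded_search"]

def first_touch(touches):
    """First-touch: 100% of credit to the earliest interaction."""
    return {touches[0]: 1.0}

def last_touch(touches):
    """Last-touch: 100% of credit to the final interaction."""
    return {touches[-1]: 1.0}

print(first_touch(journey))  # {'podcast': 1.0}
print(last_touch(journey))   # {'branded_search': 1.0}
```

Same journey, opposite stories: the podcast gets everything or nothing depending on which rule you run.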
Multi-touch models
Multi-touch models split credit across multiple interactions. They are closer to reality and harder to compute, but most modern analytics platforms support at least the simpler ones.
- Linear attribution splits credit equally across every interaction in the path. Six touches, each gets 16.7%. It's a fair-feeling baseline. The trade-off: it treats a brand-search click the same as a passing impression three months earlier, which understates the value of channels that actually convert.
- Time-decay attribution weights interactions closer to the conversion more heavily. A click yesterday gets more credit than a click sixty days ago. This matches buyer psychology better — recent touches are usually more decisive — but still risks under-crediting awareness work that planted the seed early.
- Position-based (U-shaped) attribution gives the largest shares to the first and last touches and splits the rest across the middle. The most common split is 40/20/40: 40% to the first touch, 40% to the last, and 20% spread across everything in between. It explicitly rewards demand creation and demand capture while still acknowledging the middle of the funnel.
Each of these models is a heuristic. None of them know which touches actually moved your specific buyer. They are useful precisely because they're transparent — you can see the rule that produced the credit split.
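The three heuristics above can be made concrete in a short sketch. The journey, channel names, and the 7-day half-life for time decay are all assumptions for illustration, not values from any platform:

```python
from collections import defaultdict

def linear(touches):
    """Equal credit to every interaction in the path."""
    credit = defaultdict(float)
    for t in touches:
        credit[t] += 1.0 / len(touches)
    return dict(credit)

def time_decay(timed_touches, half_life_days=7.0):
    """Credit halves every `half_life_days` before conversion.
    timed_touches: list of (channel, days_before_conversion)."""
    weights = [2 ** (-days / half_life_days) for _, days in timed_touches]
    total = sum(weights)
    credit = defaultdict(float)
    for (channel, _), w in zip(timed_touches, weights):
        credit[channel] += w / total
    return dict(credit)

def position_based(touches, first=0.4, last=0.4):
    """U-shaped: fixed shares to first and last, rest split across the middle."""
    if len(touches) == 1:
        return {touches[0]: 1.0}
    credit = defaultdict(float)
    if len(touches) == 2:
        credit[touches[0]] += first / (first + last)
        credit[touches[-1]] += last / (first + last)
        return dict(credit)
    credit[touches[0]] += first
    credit[touches[-1]] += last
    for t in touches[1:-1]:
        credit[t] += (1.0 - first - last) / len(touches[1:-1])
    return dict(credit)

journey = ["display", "blog", "email", "branded_search"]
print(linear(journey))          # 25% each
print(position_based(journey))  # 40% / 10% / 10% / 40%
print(time_decay([("display", 60), ("blog", 14),
                  ("email", 3), ("branded_search", 0)]))
```

Run all three on the same journey and you get three different budget narratives, which is exactly the point of the section above: the rule, not the data, decides the split.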
Data-driven attribution
Data-driven attribution uses your conversion data to estimate the contribution of each touchpoint, typically with a Markov-chain or Shapley-value approach. The model looks at converting and non-converting paths in your data and assigns credit based on which channels reliably show up before conversions.
When it works, it's the most defensible model — the credit split reflects your actual customer journeys instead of a generic rule. When it fails, it fails silently. Data-driven models need substantial conversion volume to produce stable estimates; with low conversions per channel, the attribution flips week to week. They also inherit any biases in your tracking — channels with poor cookie persistence get systematically under-credited regardless of their real impact.
Data-driven attribution also struggles with offline and view-through impact. A podcast ad someone heard in the car shows up nowhere in the click data, so the model values it at zero. Marketing mix modeling and incrementality testing fill that gap; the two approaches are complementary, not interchangeable.
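To make the removal-effect idea concrete, here is a deliberately simplified, path-level sketch: a channel's effect is the share of conversions lost if every journey through it were blocked. A real Markov-chain implementation builds a transition matrix and re-solves conversion probability per removed channel; the example paths here are invented:

```python
def removal_effect_attribution(paths):
    """paths: list of (touch_list, converted_bool).
    A channel's removal effect is the share of conversions that would be
    lost if every journey passing through that channel were blocked."""
    total_conv = sum(1 for _, converted in paths if converted)
    channels = {c for touches, _ in paths for c in touches}
    effects = {}
    for c in channels:
        surviving = sum(1 for touches, converted in paths
                        if converted and c not in touches)
        effects[c] = (total_conv - surviving) / total_conv
    norm = sum(effects.values())  # normalize effects into credit shares
    return {c: e / norm for c, e in effects.items()}

# Invented example paths: (journey, did it convert?)
paths = [
    (["display", "branded_search"], True),
    (["blog", "email"], True),
    (["display"], False),
    (["email", "branded_search"], True),
    (["blog"], False),
]
credit = removal_effect_attribution(paths)
```

Note how the model's blind spot shows up directly in the code: a podcast touch that never appears in `paths` simply has no key in the output, i.e. zero credit.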
Picking a model — and why you should run two
The default recommendation: pick a primary attribution model that matches your business model, and run a secondary one as a sanity check. The choice depends on the shape of your funnel.
- Short-cycle ecommerce — last-click or position-based usually works. Buyers don't accumulate a long touch history before purchase, so simpler models stay close to truth.
- Considered B2C purchases (furniture, electronics, travel) — position-based or time-decay. The first touch matters; the closing channel matters; the middle is real.
- B2B with long sales cycles — multi-touch is mandatory. Last-click in B2B will tell you that contract-renewal emails drove all your revenue.
- High-volume, mature programs — data-driven attribution earns its keep when you have enough conversions to feed it.
Run a second model in parallel for sanity. If your primary model says paid social drives 40% of revenue and your secondary model says 12%, you have a data problem to investigate — and probably a budget decision that shouldn't be made on either number alone.
What attribution can't fix
Attribution is downstream of clean tracking. If your UTM taxonomy is a swamp — three spellings of "facebook," missing campaign tags on half your links — no attribution model can save you. Fix the data first; pick a model second.
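Cleaning that swamp is mostly mechanical. A minimal sketch of UTM source normalization; the alias table is hypothetical and would come from your own agreed taxonomy:

```python
import re

# Hypothetical alias map; a real one comes from your agreed UTM taxonomy.
SOURCE_ALIASES = {
    "fb": "facebook",
    "face book": "facebook",
    "facebook.com": "facebook",
    "adwords": "google",
    "google ads": "google",
}

def normalize_utm_source(raw):
    """Lowercase, collapse separators, then map known aliases."""
    key = re.sub(r"[\s_\-]+", " ", raw.strip().lower())
    return SOURCE_ALIASES.get(key, key)

print(normalize_utm_source("FaceBook"))    # facebook
print(normalize_utm_source("FB"))          # facebook
print(normalize_utm_source("Google-Ads"))  # google
```

Run this at ingestion, before any attribution model touches the data, so every model sees one "facebook" instead of three.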
Attribution also doesn't replace experiments. Geo-holdouts, brand-lift tests, and paused-channel tests give you causal evidence that no observational attribution model can match. Use attribution for ongoing channel reporting; use experiments to settle the genuinely contested questions about what's actually driving revenue. Both feed into your marketing ROI numbers, and a properly sample-sized testing program, built on A/B testing fundamentals, anchors them. Your CRO program in turn lifts the conversion rate that every attribution model is allocating credit for in the first place.