Honest attribution: what affiliate networks aren't telling you

Updated April 15, 2026 · 5 min read

Imagine a merchant looking at a dashboard. The dashboard says the recovery tool has driven $42,000 in attributed revenue this month. The merchant's commission on that revenue, at an 8% rate, is $3,360. The math is clean. The math is also misleading in a specific way that the affiliate-network industry has gotten very comfortable not talking about, because the industry's pricing model depends on the merchant not asking the question that follows. The question is: would those orders have happened without the recovery tool? If they would have happened, the merchant just paid $3,360 for revenue they were going to get anyway. This essay is about how attribution math actually works, what "attributed revenue" really proves, and how to know whether the number on the dashboard is real revenue or accounting fiction.

Last-touch attribution explained without buzzwords

Last-touch attribution is the simplest possible model for crediting an order to a marketing source: whatever the visitor clicked on most recently before the purchase gets the credit. If the visitor clicked a Google ad, then clicked a Facebook retargeting ad, then clicked an exit-recovery page link, then bought — the exit-recovery page gets the credit. The Google ad gets nothing. The Facebook ad gets nothing. The fact that the visitor only saw the exit-recovery page because they'd already decided to buy and were just verifying the spec one last time — that doesn't enter into the calculation. Whoever owned the last click owns the credit.
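In code, last-touch is a one-liner; a minimal sketch in Python (the journey data and source names are illustrative, not any particular network's schema):

```python
def last_touch(clicks):
    """Return the source of the most recent click before the purchase.

    clicks: list of (source, timestamp) tuples for one visitor's journey,
    already filtered to clicks that happened before the order landed.
    """
    return max(clicks, key=lambda click: click[1])[0]

# A journey where the buying decision was made long before the last click:
journey = [
    ("google_ad", 1),          # first touch: discovery
    ("facebook_retarget", 2),  # mid-funnel nudge
    ("exit_recovery", 3),      # last click: one final spec-check
]
print(last_touch(journey))  # prints "exit_recovery"
```

Whoever holds the final tuple gets paid, regardless of which earlier click did the persuading.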

This is the model affiliate networks run on, and it's not stupid — last-touch is auditable, fast to compute, hard to game in obvious ways, and reasonable when the goal is to pay publishers for clicks that demonstrably preceded a purchase. The model breaks when it's used to answer a different question: did this publisher cause the purchase, or were they just the last person to wave at the visitor on the way to a purchase that was already going to happen?

The affiliate network usually doesn't care about that question. The network's job is to broker access to publisher inventory and to settle the accounting of who clicked what before what. Whether the click caused the purchase is the merchant's problem, not the network's. So the network reports attributed revenue, the merchant pays commission on attributed revenue, and the question of whether the revenue was actually caused by the publisher mostly doesn't get asked. Until the merchant runs a holdout test, in which case the question gets asked very loudly indeed.

What "attributed revenue" actually proves — and what it doesn't

Attribution proves a temporal sequence: a click happened, then a purchase happened, then the system credited the click. That's all it proves. It doesn't prove the click caused the purchase. It doesn't prove the purchase wouldn't have happened anyway. It doesn't prove the publisher contributed any value beyond being the last name on the timeline.

The technical term for the thing attribution doesn't measure is incrementality. Incremental revenue is revenue that wouldn't have existed without the marketing intervention. Attributed revenue is revenue that the system credits to the marketing intervention. The two are not the same number, and on most marketing channels they're not even close. Brand search advertising is the canonical example — Google reports a healthy ROAS on branded keywords, the merchant pauses the campaign, the organic traffic absorbs the visitors who would have clicked the ad, and the conversion volume barely moves. The attributed revenue was real in the accounting sense. The incremental revenue was close to zero.

The same logic applies to a lot of bottom-of-funnel marketing including exit-recovery tools that fire on visitors who were already mostly going to buy. The visitor was on the PDP for two minutes; they read the reviews; they checked the size chart; they were going to make their decision in the next twenty seconds anyway. The recovery tool fires, the visitor clicks something on the recovery page, the visitor goes back to the cart, the order lands. The recovery page got credit for the order. The order would have happened either way. The publisher took a commission on it.

This isn't a moral indictment of the affiliate-network model — it's a description of the structural blind spot that the model creates. When the publisher's compensation is a percentage of attributed revenue, the publisher has no incentive to distinguish caused-orders from would-have-happened-anyway-orders, because the commission is the same. The merchant has every incentive to make the distinction, because the merchant is paying for both. But the data the merchant gets from the network only shows attributed revenue. The incremental number isn't visible without measuring it separately.

The incrementality question — would the order have happened anyway

The only honest way to answer the incrementality question is a holdout test. The setup is straightforward: take some fraction of eligible visitors — typically 5-10% — and randomly assign them to a control group that doesn't see the marketing intervention. The other 90-95% see the intervention as normal. After enough volume to reach statistical significance (the required sample depends on the baseline conversion rate and the size of the lift you're trying to detect, but a few weeks is typical for a mid-size store), compare the conversion rate of the holdout group against the conversion rate of the treated group.

The difference is the incremental lift. If the treated group converts at 3.5% and the holdout group converts at 3.4%, the intervention is contributing 0.1 percentage points of incremental conversion — which translates to a small fraction of the attributed revenue actually being incremental. If the treated group converts at 3.5% and the holdout converts at 2.8%, the intervention is contributing meaningful incremental revenue and the attribution number is closer to honest. The math isn't complicated; it's just that the affiliate-network business model doesn't structurally encourage running it.
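That comparison can be sketched as a standard two-proportion z-test; a minimal version in Python (the function name and example counts are illustrative, not production stats tooling):

```python
import math

def incremental_lift(treated_orders, treated_n, control_orders, control_n):
    """Return (lift in percentage points, z-statistic) for a holdout test.

    Uses a pooled two-proportion z-test; |z| > 1.96 is significant
    at the conventional 95% level.
    """
    p_treated = treated_orders / treated_n
    p_control = control_orders / control_n
    pooled = (treated_orders + control_orders) / (treated_n + control_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / treated_n + 1 / control_n))
    return (p_treated - p_control) * 100, (p_treated - p_control) / se

# 3.5% treated vs 3.4% control on a 90/10 split of 110,000 visitors:
lift_pp, z = incremental_lift(3500, 100_000, 340, 10_000)
# lift_pp ≈ 0.1 points, z ≈ 0.5 — a gap this small at this volume is noise
```

At the same volumes, the second scenario in the text (3.5% vs 2.8%) clears the significance bar comfortably; the first does not.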

A few large advertisers run holdout tests on their exit-recovery vendors and see numbers that range from "the tool is contributing real lift" to "the tool is collecting commission on revenue we were going to get anyway." The variance is enormous between vendors and between merchant verticals. For high-consideration purchases (furniture, electronics, jewelry) where the visitor's research mode is real and the recovery page can genuinely surface alternatives the visitor hadn't seen, the lift tends to be real. For low-consideration purchases (impulse goods, deal sites, flash sales) where the visitor was either going to buy or not within thirty seconds, the lift is often closer to noise.

We're running a holdout test on Before You Go right now — first cohort started in March 2026, results expected mid-year. Until those numbers are in, the honest position is this: conservative session-based click attribution deliberately undercounts, so the dashboard number already sits closer to the incremental number than what an affiliate-network model would report. Once the holdout numbers land, the math gets concrete. The point of designing the attribution model conservatively from the start was to make the eventual reconciliation easier — if the conservative number is already close to the holdout number, there's no surprise. If a generous number had been used and the holdout came in at half of it, the merchant would have to redo a year of unit economics.

How holdout testing works in practice

The implementation is mostly an engineering question. On the merchant side: when the recovery tool would normally fire, randomly assign the visitor to one of two groups based on a stable identifier (session ID hashed to a bucket). Treated group sees the recovery page; control group sees nothing — the visitor leaves as if the tool weren't installed. Track the conversion rate of both groups over the next 30 days, accounting for the visitor having multiple sessions and possibly buying through a different channel.
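A sketch of the assignment step, assuming a 10% holdout and a string session ID (the names and fraction are illustrative):

```python
import hashlib

HOLDOUT_FRACTION = 0.10  # fraction of eligible visitors held out

def assign_group(session_id: str) -> str:
    """Deterministically bucket a session into "control" or "treated".

    Hashing the stable identifier means the same session always lands
    in the same arm across page loads, with no state to store anywhere.
    """
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "control" if bucket < HOLDOUT_FRACTION else "treated"

# The recovery tool fires only for the treated arm:
if assign_group("sess-4f2a") == "treated":
    pass  # show the recovery page; control visitors leave untouched
```

Deterministic hashing is the usual choice here because it survives page reloads and requires no lookup table of past assignments.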

The complications are real but solvable. The treated and control groups need to be matched on enough dimensions (traffic source, landing PDP, time of day, device type) that random assignment doesn't accidentally bias the results. The post-leave window has to be long enough to capture delayed purchases — most merchants pick somewhere between 14 and 30 days. The sample size has to be large enough to detect lifts in the range you actually care about; for a store with a 3% conversion rate hoping to detect a 0.5 percentage-point lift, you need tens of thousands of visitors per group, which means small stores can run the test but it takes longer to read.
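The sample-size arithmetic behind that "tens of thousands" figure follows from the standard two-proportion power formula; a sketch (80% power, two-sided 95% confidence; the function name is illustrative):

```python
import math

def visitors_per_group(base_rate, lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per arm to detect an absolute lift.

    base_rate and lift are proportions (0.03 = 3%); the default z-values
    correspond to a two-sided 95% test with 80% power.
    """
    p1, p2 = base_rate, base_rate + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / lift ** 2)

# The example from the text: 3% baseline, hoping to see a 0.5-point lift
n = visitors_per_group(0.03, 0.005)  # roughly 20,000 visitors per group
```

Note the lift term is squared in the denominator: halving the detectable lift quadruples the required sample, which is why small stores can run the test but have to wait longer for a readable result.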

The output of a well-run holdout test is a single number: incremental conversion lift. Multiply that by the average order value and the eligible-visitor count to get incremental revenue. Compare incremental revenue against the cost of the tool — flat subscription, commission on attributed orders, whatever the pricing model is — and you have the actual unit economics.
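Turning that single number into unit economics is three multiplications; a sketch with the inputs named explicitly (all figures are placeholders, not measured data):

```python
def incremental_economics(lift_pp, eligible_visitors, avg_order_value, tool_cost):
    """Return (incremental revenue, net gain) for one measurement period.

    lift_pp is the holdout-measured lift in percentage points;
    tool_cost is whatever the pricing model charges for the period.
    """
    incremental_orders = eligible_visitors * (lift_pp / 100)
    incremental_revenue = incremental_orders * avg_order_value
    return incremental_revenue, incremental_revenue - tool_cost

# 0.3-point lift on 50,000 eligible visitors at a $120 AOV, $99/mo tool:
revenue, net = incremental_economics(0.3, 50_000, 120, 99)
# 150 incremental orders → $18,000 incremental revenue, net ≈ $17,901
```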

Why this matters when you're paying commission on attributed orders

The arithmetic is unforgiving. Suppose a merchant pays a 10% commission on attributed revenue. The dashboard shows $50,000 attributed in a month, so the merchant pays $5,000. A holdout test reveals that the actual incremental revenue is $15,000 — about a third of attributed. The merchant just paid $5,000 to recover $15,000 of real revenue, which is an effective 33% commission rate on incremental revenue, not the 10% rate the contract advertised. If the merchant had been paying a flat $99/mo subscription with conservative attribution showing $20,000 in recovered revenue, the conservative number would have been closer to the truth and the unit economics would have been transparent.
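The arithmetic from that scenario, written out (figures copied from the example above; nothing here is vendor data):

```python
attributed_revenue = 50_000   # what the dashboard shows
incremental_revenue = 15_000  # what the holdout test reveals
contract_rate = 0.10          # the advertised commission rate

commission_paid = attributed_revenue * contract_rate
effective_rate = commission_paid / incremental_revenue

print(f"paid ${commission_paid:,.0f}")         # paid $5,000
print(f"effective rate {effective_rate:.0%}")  # effective rate 33%
```

The contract says 10%; the holdout says a third. The gap between the two rates is the non-incremental revenue the merchant paid commission on anyway.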

This is the structural reason flat subscription with conservative attribution is a fairer pricing model on the merchant side, even when the dashboard number looks smaller than what the affiliate-network alternative would report. The dashboard number isn't supposed to be impressive. It's supposed to be true. A small true number is more useful for running a business than a large attributed number that includes orders the merchant was going to get anyway.

The affiliate-network model has its place — large enterprise merchants with sophisticated attribution science teams who can run their own holdout tests and negotiate commission rates against the actual incremental lift have made the model work for them. The merchants who get hurt are the small and mid-size stores who don't have the analytics infrastructure to run a holdout, who trust the dashboard number, who pay commission on attributed revenue without ever measuring the incremental number, and who don't realize the model was structurally biased against them until they get curious enough to ask.

What to actually do

Three concrete moves that any merchant evaluating an exit-recovery tool can make:

Ask the vendor to support a holdout test. Any vendor that won't let you run a 5-10% holdout for a month is admitting that they don't want you to know the incremental number. That's a signal worth weighing.

Read the attribution window in the contract carefully. A 30-day last-touch attribution window will count vastly more revenue than a session-based click window. Both are valid definitions; neither is automatically wrong. The one to prefer is the one where the dashboard number is closer to what a holdout would show.

Compare attributed revenue against your own analytics. If the recovery tool says it drove $50,000 last month and your overall storefront revenue is up by $10,000 year-over-year, the recovery tool isn't driving $50,000 of incremental revenue. Most of what the tool is reporting was happening anyway.

The dashboard number is a marketing artifact. The bank balance is the actual data. When the two disagree, trust the bank balance and ask the vendor to explain the gap. The honest vendors will. The ones who get defensive are telling you what you needed to know.

Recover missed product discovery.

Free Starter plan. Native theme integration. Honest attribution.

7-day trial on paid plans. No credit card.