Recovery page click-through rates vary wildly by vertical — and the engine doesn't care
Updated April 8, 2026 · 5 min read
Recovery page click-through rates vary by vertical in a way that's been consistently surprising in the data, and very little of that variation has anything to do with the recovery page itself. A cosmetics store gets a 30-45% CTR on the recovery grid. A building materials store gets 8-10%. A home goods store sits around 12-15%. All three stores are running the same recommendation engine, the same page layout, the same blending math behind the rankings. The difference between a 40% CTR and an 8% CTR is not the recovery page. It is how people shop for the category, and that shopping behavior is more durable than any optimization the page itself can do.
The cosmetics CTR — typically in the 30-45% range across the stores tracked closely — reflects a shopping mode where the visitor is genuinely browsing and discovery is the point. A visitor on a cosmetics store has usually arrived with a soft idea of what they want (a new lipstick, a foundation, something for an upcoming event) but no specific SKU in mind. The first product page they land on is one option among many they expect to consider. When the recovery page surfaces with a grid of related products — different colors of the same lipstick line, different finishes from the same brand, complementary items in the same color story — the visitor's behavior is to look. Browsing is what they came to do, and the recovery page extends the browsing rather than interrupting it.
The engagement also reflects a category where the underlying product variation matters a lot at the SKU level. Cosmetics customers care about the specific shade, the specific texture, the specific finish, and they want to see options. A grid of twelve related products on a recovery page is exactly the size of the consideration set the visitor was already mentally constructing, and the page makes that construction easier than the storefront's category navigation usually does. The CTR is high because the surface is well-matched to the shopping behavior. There is no clever optimization on the page itself that produced the number — the cosmetics audience would engage with any reasonable recommendation grid at a rate higher than a building materials audience would.
The other end of the spectrum looks dramatically different. A building materials store — tile, lumber, fixtures, the kinds of products people buy when they have a specific project — gets 8-10% CTR on the same recovery page. The visitor on that kind of store is not browsing in the cosmetics sense. They came looking for a specific item, they evaluated the specific item, and if it isn't what they need, the recovery grid of related products is mostly noise to them. They aren't going to consider twelve options because they came with one option in mind, and the recovery page doesn't change the underlying intent.
The low engagement number sounds bad in isolation but looks different once average order value is factored in. Building materials orders run hundreds of dollars routinely, and a small percentage of recovery-page engagement translates into orders that move real revenue. An 8% CTR with an average attributed order in the $300-500 range produces meaningfully more revenue per visitor than a 40% CTR on a $30 lipstick. The engagement metric understates the value of the segment because the conversion mechanic is different — building materials visitors who do engage are usually high-intent and convert at materially higher rates than the long tail of cosmetics browsers do.
The implication is that the right comparison across verticals isn't CTR. It's revenue per visitor, which collapses the engagement and the order value into a single metric that's directly comparable. Run that comparison and the cosmetics and building materials stores look much closer to each other than the CTR comparison suggests: cosmetics wins on volume, building materials wins on per-engagement value, and the recovery page produces real lift on both segments through different mechanics.
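The revenue-per-visitor comparison above can be sketched with a few lines of arithmetic. The CTR and order-value ranges come from this article; the click-to-order conversion rates are hypothetical placeholders chosen only to illustrate the mechanic (the article notes building materials visitors who engage tend to convert at higher rates):

```python
# Sketch: comparing verticals by revenue per visitor instead of CTR.
# CTR and AOV midpoints come from the article's cited ranges; the
# conversion-given-click figures are hypothetical for illustration.

def revenue_per_visitor(ctr, conversion_given_click, avg_order_value):
    """Expected attributed revenue per recovery-page visitor."""
    return ctr * conversion_given_click * avg_order_value

# Cosmetics: high CTR, low AOV (midpoints of 30-45% CTR, $30 order).
cosmetics = revenue_per_visitor(ctr=0.375, conversion_given_click=0.05,
                                avg_order_value=30)
# Building materials: low CTR, high AOV (midpoints of 8-10%, $300-500).
building = revenue_per_visitor(ctr=0.09, conversion_given_click=0.08,
                               avg_order_value=400)

print(f"cosmetics: ${cosmetics:.2f}/visitor")  # → cosmetics: $0.56/visitor
print(f"building:  ${building:.2f}/visitor")   # → building:  $2.88/visitor
```

Even with equal conversion rates the building materials store would come out ahead per visitor, which is why the two segments look far closer on this metric than the raw CTRs suggest.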
Home goods sits in the middle of the spectrum, with CTR typically in the 12-15% range. The shopping mode is somewhere between the cosmetics browse and the building materials project — a visitor on a home goods store usually has a category in mind (a new lamp, a side table, a set of throw pillows) but is open to seeing options within that category, and possibly to seeing complementary items that work with the original consideration. The recovery page lands well enough that the engagement is real, but not as well as the cosmetics case because the visitor's openness to discovery is narrower.
The pattern across the three verticals is consistent: the more the category supports browsing as a mode, the higher the recovery page CTR. The more the category supports project-driven, intent-specific shopping, the lower the CTR but the higher the per-engagement value. The recovery engine doesn't change either dynamic. It surfaces the products that make sense given the visitor's session, and the visitor's response is calibrated to the category, not to the engine.
The genuinely unexpected pattern, across enough stores to have a real read on the variation, is that the stores that do nothing after installing the app perform basically the same as the stores that spend time tweaking settings. No configuration. No custom rules. No boosted products. No suppressed SKUs. Just install the app and walk away — and those stores end up with CTRs that match the verticals they're in, indistinguishable from the stores that have spent hours in the admin panel adjusting weights and overrides.
This was not the expected result. The product was built with a fairly elaborate set of customization controls — the assumption was that merchants would want to surface specific products, suppress others, and override the algorithm's choices in cases where the merchant knows something the algorithm doesn't. Those controls work, and the merchants who use them get the outputs they want, but the data says those outputs are not materially different from what the algorithm would have produced on its own. The engine picks up patterns from visitor behavior anyway, and it turns out that's enough.
The honest reading is that most of the customization UI is unnecessary, which is both humbling and reassuring. Humbling because a lot of engineering went into controls that the data says aren't earning their place. Reassuring because it means the engine is doing the work it was meant to do — the visitor's behavior is the dominant signal, the algorithm reads that signal accurately, and the merchant's manual intervention rarely improves on what the behavior alone produces. A merchant who installs the app and does nothing else is not under-using the product. They're using it correctly.
The deeper lesson from the cross-vertical numbers is that the recovery page works on visitor behavior, not on merchant configuration. The CTR variation across verticals tracks the shopping mode, not the algorithm. The stable performance across configured-vs-unconfigured stores tracks the same thing — the algorithm is reading the visitor signal, and the visitor signal is what's producing the outcome. The merchant's role in the system is to install the app, run the storefront, and let the visitor traffic do the work of telling the algorithm what to recommend. The active levers the merchant has are real but secondary. The passive lever — the visitor population's actual shopping behavior — is the dominant one, and it doesn't need any merchant input to be useful.
This also explains why claims about "highest CTR ever" or universal lift numbers should be read carefully. The CTR is a property of the category and the visitor population at least as much as it is a property of the tool. A recovery page on a cosmetics store will report a higher CTR than the same page on a building materials store, no matter how good or bad the engine is. The right comparison is within-vertical, against alternative recommendation approaches on similar stores. The wrong comparison is across-vertical, treating one merchant's 35% CTR as evidence that the tool will produce 35% on a hardware store too.
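The within-vertical reading described above amounts to normalizing a store's CTR against its category baseline before drawing conclusions. A minimal sketch, using the vertical midpoints cited in this article (the specific store CTR and the `relative_ctr` helper are hypothetical, for illustration only):

```python
# Sketch: judging a store's CTR against its vertical baseline rather
# than a universal benchmark. Baselines are midpoints of the ranges
# cited in the article; the example store CTR is hypothetical.

VERTICAL_BASELINES = {
    "cosmetics": 0.375,           # 30-45% range
    "home_goods": 0.135,          # 12-15% range
    "building_materials": 0.09,   # 8-10% range
}

def relative_ctr(store_ctr, vertical):
    """Store CTR as a multiple of its vertical's typical value."""
    return store_ctr / VERTICAL_BASELINES[vertical]

# The same 12% CTR reads very differently depending on the category:
print(relative_ctr(0.12, "cosmetics"))           # ≈ 0.32 (weak)
print(relative_ctr(0.12, "building_materials"))  # ≈ 1.33 (strong)
```

The same raw number is a weak result in a high-browse category and a strong one in an intent-specific category, which is exactly why a single "35% CTR" claim carries no information without the vertical attached.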
Calibrated to the actual data, the right way to describe recovery page performance is something like this: CTR varies wildly depending on what the store sells, with high-browse categories like cosmetics in the 30-45% range and intent-specific categories like building materials in the 8-10% range, with most other categories falling somewhere in between. Per-engagement value tends to be inversely correlated with CTR, so the revenue-per-visitor numbers across categories are closer than the CTR numbers suggest. Configuration matters less than expected — the algorithm reads the visitor behavior accurately and produces good rankings without merchant intervention in most cases. The variation that does exist is mostly the category and the audience, and the engine works at a stable quality across both ends of the spectrum.
The closing observation is the one that actually matters for evaluating the tool. The engine works on visitor behavior. It does not work better when the merchant configures it more aggressively. It does not work uniformly across categories because the categories themselves are not uniform in how people shop. And the right way to read the dashboard is to compare the numbers against the category, not against the universal claims the marketing material on any tool — including this one — will be tempted to make. The calibrated number is more useful for running a business than the impressive number, and the calibrated number is what the data actually supports.
Recover missed product discovery.
Free Starter plan. 7-day trial on paid plans. No credit card. Native theme integration. Honest attribution.