
Repeat purchase rate is the strongest leading indicator of CPG brand health. The problem isn't that brands don't care about it — it's that the measurement infrastructure to track it cross-retailer simply doesn't exist yet.
The metric everyone agrees matters — and almost no one tracks properly
Ask any CPG brand team what they'd most like to improve, and repeat purchase rate comes up quickly. Ask them what their current repeat purchase rate actually is, and the conversation gets murkier.
Most brands are working from panel data — Kantar, Nielsen, IRI. These tools were built for a different era: statistically sampled households, six-to-eight week data lag, category-level aggregation. They can tell you broad directional things about penetration and frequency. They cannot tell you what percentage of shoppers who bought your SKU at Waitrose last month bought it again within 60 days — at any retailer.
The internal definition problem makes this worse. In some organisations, "repeat purchase rate" means the share of shoppers who ever make a second purchase of the brand. In others it's a second purchase within 90 days. Some teams pull it exclusively from DTC data. The result: a number that appears in every QBR but means something different depending on who built the slide. It's not a KPI. It's a placeholder.
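To see how much the definition matters, here is a minimal sketch using toy transaction data (all shopper IDs and dates are invented for illustration). The same purchase log produces two different "repeat purchase rates" depending on whether a repeat means any second-ever purchase or a second purchase within 90 days:

```python
from datetime import date

# Toy transaction log: (shopper_id, purchase_date). Illustrative data only.
transactions = [
    ("a", date(2024, 1, 5)), ("a", date(2024, 6, 20)),  # repeats, but after 90 days
    ("b", date(2024, 1, 8)), ("b", date(2024, 2, 1)),   # repeats within 90 days
    ("c", date(2024, 1, 12)),                           # never repeats
]

# Group purchase dates per shopper, in chronological order.
history = {}
for shopper, day in sorted(transactions, key=lambda t: t[1]):
    history.setdefault(shopper, []).append(day)

buyers = len(history)

# Definition 1: any second-ever purchase counts as a repeat.
repeat_ever = sum(1 for dates in history.values() if len(dates) >= 2)

# Definition 2: the second purchase must land within 90 days of the first.
repeat_90d = sum(
    1 for dates in history.values()
    if len(dates) >= 2 and (dates[1] - dates[0]).days <= 90
)

print(f"repeat (ever):       {repeat_ever / buyers:.0%}")  # 67%
print(f"repeat (within 90d): {repeat_90d / buyers:.0%}")   # 33%
```

Two slides, one dataset, a 34-point gap. Unless the definition is pinned down, the number on the QBR slide is not comparable across teams.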
Why retailer data doesn't solve this
The obvious answer is: go to the retailers. Tesco, Sainsbury's, ASDA, Boots — they all hold loyalty data at the individual shopper level. And they do share it, in a limited way, through programmes like Tesco's Data Ventures or Sainsbury's Nectar360.
The problem is structural. Retailer data is designed to help brands buy better media within that retailer's ecosystem — not to give brands an honest picture of shopper loyalty. What gets shared is aggregated category data, rate-of-sale reporting, and media performance metrics. Brand-level repeat cohorts, broken down by SKU, are not part of the standard data share. The retailer's commercial interest and the brand's measurement need are not the same thing.
More fundamentally: shoppers don't confine themselves to one retailer. A shopper who buys your product at Sainsbury's this week might buy a competitor's product at Waitrose next month. That is a churn event. It doesn't appear in your Sainsbury's report. It doesn't appear in your Waitrose report. It doesn't appear anywhere in your current measurement stack. You are optimising within one retailer's window while the actual loyalty picture — which plays out across the full grocery landscape — remains invisible.

What the absence of this data actually costs
This is where the real commercial damage sits.
CPG marketing budgets are overwhelmingly skewed toward acquisition: trial-driving promotions, in-store displays, retail media, sampling. These activities are justified by reach, impressions, and rate-of-sale uplift during the promotional window. None of those metrics tell you whether the people you reached bought again.
A brand running a six-figure campaign on a retailer's media network can report 2.1 million impressions and a 14% uplift in promoted SKU sales during the period. What they cannot report is how many of those shoppers returned to buy again — at any retailer — within 30, 60, or 90 days. The ROI case is built on a metric that doesn't predict revenue.
Trade spend decisions carry the same blind spot. Rate-of-sale data conflates promotional volume with genuine frequency. A promotion that moves 40,000 units in four weeks might be generating loyal repeaters or deal-hunters. Without verified post-promotion purchase behaviour, you can't tell. Most brands assume it's the former and plan budgets accordingly. The evidence suggests the assumption is usually wrong.
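The repeater-versus-deal-hunter split is mechanically simple once individual purchase records exist. A minimal sketch, with an invented promo window and toy shopper records: count promo-window buyers, then check which of them bought again off promotion within a follow-up horizon.

```python
from datetime import date

# Hypothetical promo window end and follow-up horizon (illustrative values).
PROMO_END = date(2024, 3, 31)
HORIZON_DAYS = 60

# Toy records: (shopper_id, purchase_date, bought_on_promotion).
purchases = [
    ("a", date(2024, 3, 10), True), ("a", date(2024, 4, 20), False),  # genuine repeater
    ("b", date(2024, 3, 12), True),                                   # deal-hunter: never returns
    ("c", date(2024, 3, 15), True), ("c", date(2024, 9, 1), False),   # returned, but too late
]

# Everyone who bought on promotion during the window.
promo_buyers = {s for s, d, promo in purchases if promo and d <= PROMO_END}

# A promo buyer counts as a repeater if they bought again, off promotion,
# within HORIZON_DAYS of the window closing.
repeaters = {
    s for s, d, promo in purchases
    if s in promo_buyers and not promo
    and 0 < (d - PROMO_END).days <= HORIZON_DAYS
}

print(f"post-promo repeat rate: {len(repeaters) / len(promo_buyers):.0%}")  # 33%
```

Rate-of-sale reporting would show all three shoppers identically. Only the verified follow-up behaviour separates the one repeater from the two who won't return without a discount.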
The habit-formation question — "are we building frequency?" — gets answered by gut feel, category convention, or a panel report that landed six weeks after the window closed. For brands trying to justify loyalty investment to a CFO, this is the gap that's hardest to close.
What a proper repeat purchase measurement framework actually requires
The technical requirements are specific, and most existing tools don't meet them.
First: verified purchase data. Not panel, not survey, not modelled. Individual-level purchase records tied to real transactions, at SKU level. The difference in accuracy between modelled frequency estimates and verified purchase data is not marginal — it's the difference between a sample and a signal.
Second: cross-retailer visibility. A shopper's second purchase might happen at a different retailer than their first. The framework needs to follow the shopper across retailers, not report within one retailer's walls. If your measurement only covers one retailer, you're not measuring repeat purchase rate — you're measuring repeat purchase rate at that retailer, which is a different and considerably smaller question.
Third: cohort structure. First-purchase cohort → 30/60/90-day repurchase rate → frequency curve by SKU, channel, and region. This structure turns repeat purchase from a single aggregate number into a diagnostic tool: you can see where the funnel is leaking, at what point shoppers drop off, and which SKUs or retail environments generate the strongest habit formation.
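The cohort structure above can be sketched in a few lines. This is an illustrative computation on toy data (the SKU name, shopper IDs, and dates are invented), showing how a first-purchase cohort rolls up into 30/60/90-day repurchase rates:

```python
from datetime import date

# Toy verified transactions: (shopper_id, sku, purchase_date).
transactions = [
    ("a", "SKU1", date(2024, 1, 2)), ("a", "SKU1", date(2024, 1, 25)),  # repeats in 23 days
    ("b", "SKU1", date(2024, 1, 5)), ("b", "SKU1", date(2024, 3, 1)),   # repeats in 56 days
    ("c", "SKU1", date(2024, 1, 9)),                                    # never repeats
]

def repurchase_curve(transactions, sku, windows=(30, 60, 90)):
    """Share of the first-purchase cohort that bought the SKU again within each window."""
    dates = {}
    for shopper, s, day in sorted(transactions, key=lambda t: t[2]):
        if s == sku:
            dates.setdefault(shopper, []).append(day)
    cohort = len(dates)
    curve = {}
    for w in windows:
        repeats = sum(
            1 for ds in dates.values()
            if len(ds) >= 2 and (ds[1] - ds[0]).days <= w
        )
        curve[w] = repeats / cohort
    return curve

print(repurchase_curve(transactions, "SKU1"))
```

Run per SKU, channel, and region, the same function produces the diagnostic view described above: the gap between the 30-day and 90-day rates shows where the funnel leaks, and comparing curves across SKUs shows where habit formation is strongest.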
Fourth: intervention attribution. Not just "did they buy again" but "what preceded the second purchase?" Which promotion, reward, or communication was active in the 30 days before repurchase? This is the layer that connects loyalty investment to demonstrated behaviour change — and the only layer that lets you improve the next campaign based on what actually worked.
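The attribution layer reduces to a window join: for each second purchase, find every intervention active at some point in the preceding 30 days. A minimal sketch, with hypothetical intervention names and dates invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical interventions: (name, start_date, end_date).
interventions = [
    ("spring_promo",  date(2024, 2, 1),  date(2024, 2, 14)),
    ("loyalty_email", date(2024, 2, 20), date(2024, 2, 20)),
]

def attribute_repurchase(second_purchase, interventions, lookback_days=30):
    """Return interventions active at any point in the lookback window before repurchase."""
    window_start = second_purchase - timedelta(days=lookback_days)
    # An intervention overlaps the window if it starts before the repurchase
    # and ends on or after the window opens.
    return [
        name for name, start, end in interventions
        if start <= second_purchase and end >= window_start
    ]

print(attribute_repurchase(date(2024, 3, 5), interventions))
# Both interventions fall inside the 30 days before 5 March.
```

This is correlation, not causal proof; but aggregated over a cohort, it shows which interventions repeaters were actually exposed to before their second purchase, which is the feedback loop the next campaign needs.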
None of this is what retail media networks were designed to deliver. None of it is what panel data was built for. The infrastructure simply hasn't existed at brand level — until recently.

What brands with this data are finding
The brands that have begun tracking verified repeat purchase are finding things that standard tools couldn't surface.
Frequency uplift concentrates in specific SKUs that aggregate data obscures entirely. Promotions that look similar on rate-of-sale metrics produce very different repeater cohorts — some drive genuine frequency, others attract shoppers who never return without a discount. Cross-retailer purchase patterns reveal which retail channels function as trial environments and which function as habit environments for a given brand, which changes how media and trade budgets should be allocated.
Early programmes of this kind are surfacing behaviour that simply wasn't visible before — which SKUs drove repeat, which retailers captured the second purchase, which shopper cohorts built genuine frequency. That data wasn't in any existing report. It had to be built from the ground up.
Where to start
Define the measurement goal before selecting a tool. There's an important distinction between wanting to measure repeat purchase rate and wanting to actively drive it. The most efficient infrastructure treats both as the same problem — measurement and activation built on the same verified purchase data, so every intervention you run generates the data you need to improve the next one.
Demand verified purchase data as a baseline condition, not modelled or inferred proxies. Require cross-retailer visibility — single-retailer measurement is better than nothing, but it masks the cross-retailer churn that drives most frequency loss.
A small number of platforms are beginning to build this infrastructure at the brand level. Vela is one — built specifically for CPG brands that want to pay for verified repeat purchases rather than impressions, with SKU-level attribution and cross-retailer visibility as the foundation, not a feature.
If you can't currently answer what percentage of your trial buyers purchased again within 60 days, across all retailers, at SKU level — that's the gap. And it's worth closing before the next planning cycle locks in another year of budgets built on reach.



