Meta Ads Manager says your prospecting campaigns are returning 4x ROAS. Google Analytics says Meta drove a fraction of those conversions. Your actual bank account suggests reality is somewhere in between.
This is not a bug. It's how the measurement systems are designed. Meta measures Meta using Meta's data, and the result is a number that systematically overstates Meta's contribution to your business. That doesn't mean Meta isn't working — it often is, and sometimes better than you think. But you can't know that from Meta's own reporting.
This guide explains exactly why Meta's ROAS numbers are inflated, what they do and don't tell you, and how to independently verify whether your Meta spend is actually driving revenue. (If you're new to measurement approaches beyond platform reporting, see our guide on MMM vs. attribution vs. incrementality testing.)
Why Meta's reported ROAS is wrong
Not "a little off." Structurally wrong, in predictable ways that always skew in Meta's favor. There are four main reasons.
View-through attribution inflates everything. By default, Meta counts a conversion if someone viewed your ad and converted within a day, even if they never clicked. Imagine someone scrolling past your ad in their feed, then going to Google, searching your brand, and buying. Meta counts that as a Meta conversion. Google also counts it as a Google conversion. The actual sale happened once, but both platforms claimed it.
View-through conversions can represent 30-60% of Meta's reported conversions for some advertisers. That doesn't mean zero of those people were influenced by the ad — some probably were. But counting every person who saw an impression and later bought as a Meta-driven conversion is a massive overcount.
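To see how much view-through attribution can move the headline number, here's a back-of-the-envelope sketch with hypothetical figures (the 45% view-through share is illustrative, within the 30-60% range above; spend and order value are made up):

```python
# Hypothetical Ads Manager breakdown for one month.
reported_conversions = 1_000
view_through = 450          # hypothetical: 45% of reported conversions
click_through = reported_conversions - view_through
avg_order_value = 80        # hypothetical average order value, in dollars
meta_spend = 20_000

# ROAS as Meta reports it vs. ROAS counting click-through conversions only.
reported_roas = reported_conversions * avg_order_value / meta_spend   # 4.0
click_only_roas = click_through * avg_order_value / meta_spend        # 2.2

print(f"Reported ROAS: {reported_roas:.1f}x, click-only ROAS: {click_only_roas:.1f}x")
```

Same campaign, same spend: the "4x" collapses to 2.2x the moment you stop counting impressions as conversions.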
Click-through windows are generous. Meta's default click attribution window is 7 days. Someone clicks your ad on Monday, browses your site, leaves, comes back on Sunday through a direct visit, and buys. Meta takes credit for the full purchase. Whether that click was truly the reason they bought — or whether they would have come back and purchased anyway — is unknowable from Meta's data.
Modeled conversions fill in the gaps. After iOS 14.5, Meta lost visibility into a large share of conversions from iPhone users who opted out of tracking. Rather than show lower numbers, Meta started modeling estimated conversions — basically, predicting conversions it thinks happened based on partial data and statistical inference. These modeled conversions are Meta's best guess, generated by Meta's models, using Meta's data. They're not independently verified.
Meta has an obvious institutional incentive for those models to be accurate. It also has an obvious institutional incentive for those models not to undercount. When the methodology is opaque and the entity doing the modeling profits from higher numbers, you should treat the output as one signal among many, not ground truth.
Cross-device and cross-platform blindness. A user sees your Meta ad on their phone, switches to their laptop later and buys through a direct visit. Meta can sometimes connect these through logged-in users, but not always. Meanwhile, Meta can't see what happened on other platforms at all. If someone saw your Meta ad and your Google ad before buying, Meta credits Meta and Google credits Google. Your total attributed revenue across platforms will exceed your actual revenue — often by 30% or more.
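The cross-platform double-counting is easy to quantify for your own account: sum what each platform claims and compare it to revenue from your order system. A minimal sketch with hypothetical numbers:

```python
# Hypothetical numbers: each platform's self-reported attributed revenue
# versus actual revenue from your order system (the only ground truth).
platform_claimed = {"meta": 120_000, "google": 95_000, "tiktok": 30_000}
actual_revenue = 180_000

total_claimed = sum(platform_claimed.values())          # 245,000
over_attribution = total_claimed / actual_revenue - 1   # roughly 36% overclaim

print(f"Platforms claim {total_claimed:,} vs actual {actual_revenue:,} "
      f"({over_attribution:.0%} over-attribution)")
```

If that ratio is materially above zero, at least one platform's ROAS is inflated, and you can't tell which one from platform reporting alone.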
What Meta ROAS actually tells you
Meta ROAS isn't useless. It just measures something different from what most advertisers think it measures.
It's a relative performance signal within Meta's ecosystem. If Campaign A has a 5x ROAS and Campaign B has a 2x ROAS, Campaign A is almost certainly performing better within Meta — even if both numbers are inflated. The inflation affects both campaigns similarly, so the ranking is still informative.
Where it misleads you is in cross-channel comparisons and absolute budget decisions. When Meta says 4x and Google says 6x, you can't conclude Google is better. They're measuring on completely different scales with different attribution windows, different view-through definitions, and different modeling approaches. It's like comparing temperatures in Fahrenheit and Celsius without converting — the numbers look comparable but they're not.
How to independently measure Meta's actual contribution
There are three approaches that don't rely on Meta grading its own homework.
1. Marketing mix modeling. MMM estimates Meta's contribution by looking at the statistical relationship between your Meta spend and your total business sales over time. Because it uses aggregate data (weekly spend vs. weekly revenue), it doesn't depend on any platform's tracking or attribution. It doesn't care about view-through windows or iOS opt-out rates. It just asks: when Meta spend went up, did total sales go up proportionally?
The result is usually a Meta ROAS that's lower than what Meta reports — often significantly lower. This isn't because MMM undercounts Meta. It's because MMM strips out the inflation from view-through attribution, cross-platform double-counting, and modeled conversions. What you're left with is a directional estimate of Meta's true incremental contribution.
For most brands, MMM reveals that Meta is effective but not as effective as Ads Manager claims. A Meta-reported 4x might look like a 1.5x-2.5x in an MMM. That's still profitable — it just changes how much you should be spending and where the ceiling is.
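At its simplest, the MMM idea above is a regression of weekly revenue on weekly spend per channel. This is a deliberately stripped-down sketch on synthetic data (real MMMs add adstock, saturation, and seasonality terms; the spend ranges and "true" ROAS values here are invented to illustrate the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 52

# Synthetic weekly data: revenue = baseline + each channel's true
# incremental ROAS times its spend, plus noise.
meta_spend = rng.uniform(5_000, 15_000, weeks)
google_spend = rng.uniform(8_000, 20_000, weeks)
revenue = 40_000 + 2.0 * meta_spend + 1.5 * google_spend + rng.normal(0, 3_000, weeks)

# Ordinary least squares: regress total revenue on spend per channel.
# The fitted coefficient on a channel is its implied incremental ROAS.
X = np.column_stack([np.ones(weeks), meta_spend, google_spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
base, meta_roas, google_roas = coef

print(f"Estimated Meta ROAS: {meta_roas:.2f} (true value in this simulation: 2.0)")
```

Note what's absent: no pixels, no attribution windows, no view-through definitions. The model only sees aggregate spend and aggregate revenue, which is exactly why its Meta estimate tends to land below Ads Manager's.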
You can run an MMM for free in under a minute with CheapMMM. Upload a CSV with your weekly revenue and weekly spend per channel, and the model will estimate ROAS independently of any platform's attribution.
2. Incrementality testing (geo holdouts). The gold standard. Turn off Meta ads in a set of test regions for 4-8 weeks while keeping them on in control regions. Compare sales between the two groups. The difference is Meta's causal contribution.
This is the most reliable method because it's an experiment, not a model. The downside is it's expensive (you're sacrificing revenue in test regions), slow, and only measures one channel at a time. Meta offers its own conversion lift studies, but those are Meta measuring Meta again — better than Ads Manager ROAS, but still not truly independent.
Geo holdouts work best for brands spending enough that the test regions represent statistically meaningful sample sizes. If you're spending under $20k/month on Meta, the signal may be too noisy to draw conclusions.
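The arithmetic behind a geo holdout is a difference-in-differences comparison. A minimal sketch, with all sales and spend figures hypothetical:

```python
# Hypothetical geo holdout: Meta ads paused in test regions for the window.
test_sales_during = 410_000       # test regions, ads off
test_sales_baseline = 500_000     # same regions, matched prior period
control_sales_during = 1_020_000  # control regions, ads on
control_sales_baseline = 1_000_000

# What the test regions "should" have sold if they had followed the
# control group's trend, versus what they actually sold with ads off.
expected_test = test_sales_baseline * (control_sales_during / control_sales_baseline)
incremental_sales = expected_test - test_sales_during   # sales lost by pausing Meta
meta_spend_saved = 60_000                               # spend withheld in test regions

print(f"Incremental sales from Meta: {incremental_sales:,.0f}")
print(f"Incremental ROAS: {incremental_sales / meta_spend_saved:.2f}")
```

In this made-up example the incremental ROAS comes out around 1.7x, the kind of figure that might sit alongside a 4x Ads Manager number for the same channel.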
3. Simple holdout tests. If a full geo test is too complex, try the simple version: significantly reduce Meta spend for a defined period (4-6 weeks) and watch what happens to total revenue. If revenue drops roughly in proportion to the spend cut, Meta is contributing. If revenue barely moves, Meta was taking credit for organic demand.
This isn't as rigorous as a geo holdout — there's no control group, so you can't account for seasonality or external factors during the test period. But for smaller brands, it's a fast way to gut-check whether Meta spend is doing real work or just intercepting existing demand.
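The gut-check math for a simple holdout fits in a few lines. Hypothetical before/during numbers for a 4-week spend reduction:

```python
# Hypothetical monthly figures around a deliberate Meta spend cut.
meta_spend_before, meta_spend_during = 40_000, 10_000
revenue_before, revenue_during = 300_000, 255_000

spend_cut = meta_spend_before - meta_spend_during     # 30,000 withheld
revenue_drop = revenue_before - revenue_during        # 45,000 lost
implied_roas = revenue_drop / spend_cut

# Here the implied incremental ROAS is 1.5: Meta was doing real work.
# An implied ROAS near zero would mean Meta was mostly taking credit
# for demand that existed anyway.
print(f"Implied incremental ROAS: {implied_roas:.1f}")
```

Remember the caveat from above: with no control group, seasonality or a promo during the test window can masquerade as a Meta effect in either direction.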
The branded search interaction
This is the most overlooked dynamic in Meta measurement, and understanding it will change how you evaluate your entire paid strategy.
Meta prospecting campaigns create awareness. People see your product, get interested, and then — instead of clicking the ad — go to Google and search your brand name. When they click your branded search ad and buy, Google Ads takes credit. Meta gets nothing in last-click attribution, even though Meta created the demand.
If you run an MMM, you'll often see this pattern: Meta's contribution in the model is higher than Google Analytics suggests, and branded search's contribution is lower. That's because the model captures the correlation between Meta spend and subsequent branded search volume. When Meta spend goes up, branded searches go up. The model attributes that lift correctly.
This has a practical implication. If your MMM shows strong Meta contribution but weak branded search contribution, consider testing a branded search spend reduction. Many brands discover they can cut branded search 50%+ without losing sales — the organic listing captures those clicks instead. That frees up budget to reinvest in the prospecting activity (like Meta) that's actually creating the demand. For more on this dynamic, see our ecommerce and DTC MMM guide.
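A quick diagnostic for the Meta-to-branded-search dynamic is to check whether the two series move together week over week. A sketch on synthetic data (the 0.3 searches-per-dollar relationship is invented purely to illustrate the check; with real data you'd load weekly Meta spend and branded search volume from your own exports):

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = 26

# Synthetic illustration: branded search volume partly driven by Meta spend.
meta_spend = rng.uniform(5_000, 15_000, weeks)
branded_searches = 2_000 + 0.3 * meta_spend + rng.normal(0, 400, weeks)

# If Meta prospecting is creating demand, the correlation should be
# clearly positive; near zero suggests branded search is independent.
r = np.corrcoef(meta_spend, branded_searches)[0, 1]
print(f"Meta spend vs branded search correlation: r = {r:.2f}")
```

Correlation alone doesn't prove causation (a seasonal peak can lift both series), but a strong positive r is the pattern you'd expect before an MMM attributes branded-search-assisted revenue back to Meta.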
What to do with Meta's numbers in the meantime
You don't have to stop looking at Meta Ads Manager. You just need to use it for the right things.
Use Meta ROAS for relative comparisons within Meta. Which campaigns, ad sets, and creatives are performing best relative to each other? Ads Manager is fine for this. The inflation applies roughly equally across your Meta campaigns, so the ranking is useful.
Don't use Meta ROAS for cross-channel budget allocation. Comparing Meta ROAS to Google ROAS to TikTok ROAS and moving budget to whichever platform reports the highest number is how brands misallocate millions of dollars annually. Use MMM for cross-channel comparisons instead.
Don't use Meta ROAS to justify total Meta spend. "Meta is returning 4x" is not evidence that your total Meta budget is correct. That 4x includes view-through inflation, modeled conversions, and intercepted organic demand. The true incremental ROAS is lower, which means the efficient frontier for Meta spend is probably lower than you think — you may be past the point of diminishing returns even though the headline number looks great.
Adjust your attribution window to get closer to reality. Switching Meta's attribution from 7-day click + 1-day view to 7-day click only removes the most inflated component (view-through conversions). Your reported numbers will drop — sometimes dramatically — but they'll be closer to what you'd see in an MMM. This is a free change you can make today in your Meta reporting settings.
A realistic framework for Meta measurement
No single method gives you a complete picture. Here's the combination that works best for most brands.
Run platform reporting for daily/weekly campaign management. Use it to decide which creatives to scale, which audiences to test, and which ad sets to pause. Don't trust the absolute numbers, but trust the relative rankings.
Run an MMM quarterly to get an independent view of Meta's actual contribution relative to other channels. This tells you whether your total Meta budget is in the right range and whether you should be shifting dollars to or from other channels. If you're not sure your data is ready for this, see our guide on how to prepare your data for MMM.
Run an incrementality test annually (or when making major budget decisions) to validate the MMM estimate for Meta. If the MMM says Meta ROAS is 2x and a geo holdout confirms it's in the 1.5x-2.5x range, you can make budget decisions with real confidence.
This isn't complicated. It just requires admitting that no single platform can objectively measure its own value — and building a measurement practice that doesn't depend on that being true.