The standard answer is 2 to 3 years of data. That answer comes from enterprise MMM vendors who are also selling a 12-week engagement and have an incentive to make the process seem as rigorous and data-intensive as possible. (For a broader overview of what MMM is and how it works, see our complete guide to marketing mix modeling.)
The practical answer is more nuanced.
The real requirement is variation, not volume
What an MMM model actually needs is not a specific number of data points. It needs enough variation in your spend across channels to estimate how each one relates to your outcome metric. A dataset with 3 years of flat spend on every channel will produce worse attribution than a dataset with 9 months where spend varied meaningfully across channels and time periods.
When people say "you need at least 2 years of data," what they usually mean is that 2 years is long enough to contain natural variation in spend, seasonal patterns, and enough data points for the model to learn from. That reasoning is correct. The specific timeframe is not sacred.
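A quick way to check whether your own data has this kind of variation is to compute the coefficient of variation (standard deviation divided by mean) of spend per channel. A minimal sketch in Python, with illustrative numbers and column names — not a required schema:

```python
import pandas as pd

# Illustrative weekly spend data -- column names are assumptions, not a schema.
spend = pd.DataFrame({
    "meta":   [5000, 5100, 4900, 5050, 5000, 4950],   # nearly flat
    "google": [2000, 3500, 1500, 4000, 2500, 1000],   # varies a lot
})

# Coefficient of variation: std / mean. A value near zero means flat spend,
# which gives the model little to learn from for that channel.
cv = spend.std() / spend.mean()
print(cv.round(2))
```

Channels with a coefficient of variation near zero are the "3 years of flat spend" problem in miniature: no matter how long the history is, there is little signal to estimate from.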
What 3 to 6 months looks like in practice
With 3 to 6 months of weekly data, you have roughly 12 to 26 observations. That is enough to run a model and get directional output, with some important caveats.
The model will struggle with seasonality if your measurement window does not contain a meaningful seasonal cycle. If you are an ecommerce brand and your data runs from January through June, your model has never seen a holiday period and cannot account for it in attribution.
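A simple pre-flight check is to confirm which months your window actually covers. A sketch assuming weekly data and a November-to-December peak season — both are assumptions to adjust for your own business:

```python
import pandas as pd

# A January-through-June weekly window -- illustrative; swap in your own dates.
dates = pd.date_range("2024-01-01", "2024-06-30", freq="W")

months_covered = set(dates.month)
peak_months = {11, 12}            # assumed holiday peak for an ecommerce brand

if not peak_months & months_covered:
    print("warning: window contains no holiday period; "
          "the model cannot account for peak seasonality")
```

For this window the check fires: roughly 26 weekly observations, none of them in a holiday period.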
Channel separation becomes harder with fewer data points. If Meta and Google spend moved together for most of your measurement period, the model will have limited signal to attribute contribution to one versus the other.
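You can diagnose this before modeling by looking at the pairwise correlation of spend over time. A sketch with illustrative numbers:

```python
import pandas as pd

spend = pd.DataFrame({
    "meta":   [1000, 1200, 1400, 1600, 1800, 2000],
    "google": [ 500,  600,  700,  800,  900, 1000],  # scales in lockstep with meta
    "email":  [ 300,  100,  400,  200,  500,  150],  # moves independently
})

# Pairwise Pearson correlation of spend over time. Values near 1.0 mean the
# model has little signal to separate those channels' contributions.
corr = spend.corr()
print(corr.round(2))
```

Here Meta and Google are perfectly correlated, so any model would struggle to say which of the two drove the outcome; email moves independently and is easy to separate.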
With this data size, treat output as directional. The channel ranking is more reliable than the specific ROAS numbers.
What 6 to 18 months looks like in practice
This is the range where MMM output becomes genuinely useful for budget decisions. You likely have enough variation in spend to produce stable attribution, and once your window spans a full 12 months you have meaningful seasonal coverage.
The quality of output at this range still depends heavily on what happened during the measurement period. If you ran a major budget shift 4 months in — moving spend from one channel to another — that is valuable variation for the model. If spend was relatively stable throughout, attribution will be softer.
What 18 months and beyond looks like in practice
Longer datasets give the model more to work with, but they introduce a different problem. Marketing channel performance changes over time. A Facebook ROAS from 2 years ago may not reflect current performance due to iOS privacy changes, platform algorithm shifts, audience saturation, or changes in your own creative approach.
Including stale data can actually hurt attribution quality. Consider weighting recent periods more heavily, or truncating your dataset to the period that reflects your current channel mix and strategy.
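If your modeling tool supports observation weights, exponential decay by age is a straightforward way to down-weight stale periods. A sketch assuming weekly data; the half-life here is a knob to tune, not a recommendation:

```python
import numpy as np

n_weeks = 104            # two years of weekly data
half_life = 26           # weight halves every ~6 months -- an assumption to tune

# Week 0 is oldest, week n_weeks - 1 is most recent.
ages = np.arange(n_weeks)[::-1]          # age in weeks; most recent week = 0
weights = 0.5 ** (ages / half_life)      # exponential decay by age

# Most recent week gets weight 1.0; a week from a year ago gets 0.25.
print(weights[-1], weights[-53])
```

Truncation is the blunter version of the same idea: it is equivalent to assigning weight 1.0 to the recent window and 0 to everything before it.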
Channel count matters too
More channels means more parameters to estimate, which means more data is needed for stable results. A 3-channel model (Meta, Google, email) can produce reasonable output with 6 to 9 months of data. A 7-channel model with TV, radio, and several digital channels needs more variation and more time periods to avoid unstable attribution.
If you have limited data and many channels, consider whether some channels have negligible spend or are tightly correlated with others. Simplifying the channel structure often produces more reliable output than trying to model every line item separately.
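One way to simplify mechanically is to fold channels with negligible spend share into a single "other" bucket before modeling. A sketch; the 1% threshold and the channel names are illustrative:

```python
import pandas as pd

spend = pd.DataFrame({
    "meta":    [5000, 5200, 4800],
    "google":  [3000, 3100, 2900],
    "tiktok":  [  40,   60,   50],   # negligible share of total spend
    "podcast": [  30,   20,   25],   # negligible share of total spend
})

# Each channel's share of total spend across the whole window.
share = spend.sum() / spend.sum().sum()
minor = share[share < 0.01].index        # assumed 1% threshold -- tune to taste

# Merge the minor channels into one "other" column.
simplified = spend.drop(columns=minor)
simplified["other"] = spend[minor].sum(axis=1)
print(simplified.columns.tolist())
```

A three-column model (meta, google, other) estimated on thin data will usually be more stable than a five-column model where two columns carry almost no spend.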
What to do if your data is thin
If you have less than 6 months of data or limited spend variation, you have a few options.
Run the model and treat output as a hypothesis rather than a conclusion. Use it to decide where to introduce spend variation over the next quarter, then rerun with the new data.
Focus on channel ranking rather than absolute ROAS numbers. The model may not give you precise figures, but the relative ordering of channel contribution tends to be more stable than individual estimates.
Be explicit about uncertainty when presenting results. A model run on limited data is still more structured than gut feel or last-click attribution, as long as everyone understands its limitations.
The practical minimum for CheapMMM
CheapMMM works best with at least 6 months of weekly or monthly data with spend variation across your channels. You can run it with less, but results below that threshold should be treated as exploratory rather than actionable. Once you know you have enough data, see our guide on how to prepare your data for MMM for the specific formatting and cleanup steps before you upload.