Your Marketing Model Is Misleading You
Published: October 31, 2025
Updated: April 7, 2026
The $50 Billion Problem Nobody Talks About
Every quarter, marketing teams across the Fortune 500 sit through the same ritual. An analytics vendor opens a slide deck. The deck contains a waterfall chart. The waterfall chart assigns credit—down to the decimal—for every dollar of revenue generated by every channel in the mix. Television contributed 34.2%. Paid search delivered 22.7%. Social drove 11.4%. The numbers are crisp, confident, and almost certainly wrong.
Marketing mix modelling (MMM) has become the default framework for answering the hardest question in marketing: what actually works? But after fifteen years of managing performance campaigns across dozens of verticals at Aragil, the pattern we see most often is not teams lacking models—it is teams making catastrophic budget decisions because they trust their models too much and interrogate them too little.
The global marketing analytics market is projected to exceed $50 billion by 2028. That is an enormous amount of money flowing toward tools that claim to decompose the complex, messy, human act of purchasing into tidy regression coefficients. The danger is not that these tools exist. The danger is that the people using them have stopped asking whether the outputs make sense.
What Marketing Mix Models Actually Do (And What They Don't)
An MMM ingests historical data—spend by channel, impressions, clicks, seasonal indicators, pricing changes, macroeconomic signals—and runs a regression to estimate each variable's contribution to a business outcome, usually revenue or conversions. The output is a set of coefficients that tell you, for example, that a 10% increase in display spend is associated with a 1.3% lift in sales.
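To make that concrete, here is a minimal sketch of the core regression on simulated weekly data. All variable names and coefficients are invented for illustration; a production MMM would also apply adstock and saturation transformations before this step.

```python
# Minimal sketch of the core MMM regression on simulated weekly data.
# All names and coefficients are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
weeks = 104

# Simulated predictors: media spend plus a seasonality indicator.
tv_spend = rng.uniform(50, 150, weeks)
search_spend = rng.uniform(20, 80, weeks)
holiday_flag = (np.arange(weeks) % 52 >= 48).astype(float)

# Simulated outcome with known "true" contributions plus noise.
revenue = (200 + 1.4 * tv_spend + 2.1 * search_spend
           + 90 * holiday_flag + rng.normal(0, 25, weeks))

X = sm.add_constant(np.column_stack([tv_spend, search_spend, holiday_flag]))
fit = sm.OLS(revenue, X).fit()

# The coefficients are the model's estimate of each variable's contribution,
# conditional on this data and this specification -- nothing more.
print(fit.params)  # [intercept, tv, search, holiday]
```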
Here is what that sentence does not tell you. It does not tell you that the relationship is causal. It does not tell you the relationship will hold at a different spend level. It does not tell you that the relationship exists independently of the forty other things happening simultaneously in your business. Regression identifies correlation within the data it was given, constrained by the assumptions the modeller baked in before the first line of code ran.
This distinction matters enormously. When a CMO reads a slide that says "television ROI: $1.42 per dollar spent" and treats it as a fixed law of nature, they are making a category error. They are confusing a model estimate—conditional on specific data, time windows, and assumptions—with a physical constant. Gravity does not change when you adjust the decay window. Your MMM output absolutely does.
Five Ways Your Model Lies to You
We have audited attribution and mix models for clients ranging from early-stage DTC brands to enterprise SaaS platforms. The same structural flaws appear with uncomfortable regularity.
1. The Omitted Variable Trap
Every model is only as complete as the variables it includes. Most MMMs capture media spend, pricing, and seasonality. Very few properly account for brand equity momentum, competitive share of voice, PR coverage, word-of-mouth velocity, or macroeconomic sentiment shifts. When these forces drive sales and the model cannot see them, it attributes their impact to whatever correlated channel it can see. The result is systematic over-attribution to measurable digital channels and systematic under-attribution to everything else.
We ran into this with a client whose MMM credited paid search with driving 41% of new customer acquisition. When we layered in brand search volume as an independent variable, paid search's true incremental contribution dropped to 19%. The remaining 22 points had been brand awareness—built by television and content marketing—leaking into the search channel because people Googled the brand name before clicking an ad.
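The mechanics of that leakage are easy to reproduce. In the toy simulation below (all numbers invented), an unobserved brand-awareness variable drives both branded search clicks and sales; omit it, and the search coefficient absorbs the credit.

```python
# Toy simulation of the omitted-variable trap. Brand awareness (unobserved
# by the naive model) drives both branded search clicks and sales.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 104
brand_awareness = rng.uniform(0, 100, n)                 # the hidden driver
search_clicks = 0.8 * brand_awareness + rng.normal(0, 5, n)
sales = 50 + 0.5 * search_clicks + 2.0 * brand_awareness + rng.normal(0, 10, n)

# Misspecified model: awareness omitted, so search absorbs its credit.
naive = sm.OLS(sales, sm.add_constant(search_clicks)).fit()

# Corrected model: awareness included as an independent variable.
X_full = sm.add_constant(np.column_stack([search_clicks, brand_awareness]))
full = sm.OLS(sales, X_full).fit()

print(f"search coefficient, awareness omitted:  {naive.params[1]:.2f}")  # ~2.9
print(f"search coefficient, awareness included: {full.params[1]:.2f}")   # ~0.5
```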
2. The Decay Rate Gamble
Every model must decide how long the effect of an ad impression lasts. This parameter—called the adstock decay rate—has massive downstream consequences. Set decay too short and you under-value brand channels whose effects compound over weeks. Set it too long and you over-credit channels whose impact is genuinely fleeting. The problem is that decay rates are frequently set by convention or analyst intuition rather than empirical calibration. A two-week decay for television versus a three-week decay changes television ROI by 15–30% in most models we have tested.
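To see how much this one choice moves the numbers, here is a minimal sketch of geometric adstock, the most common carryover form; the spend series is invented.

```python
# Geometric adstock: each week's effective pressure is that week's spend
# plus a decayed carryover of the previous total. The decay parameter
# alone changes how much credit a channel accumulates. Spend is invented.
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    adstocked = np.zeros_like(spend)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        adstocked[t] = carry
    return adstocked

tv = np.array([100, 0, 0, 0, 0, 0], dtype=float)  # a single burst of spend

for decay in (0.3, 0.6, 0.8):
    total = geometric_adstock(tv, decay).sum()
    print(f"decay={decay}: total effective pressure = {total:.0f}")
# decay=0.3 -> ~143; decay=0.6 -> ~238; decay=0.8 -> ~369.
# Same spend, same outcome data; the credit assigned to TV scales with
# a parameter the analyst chose.
```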
3. The Saturation Curve Fiction
Models assume diminishing returns: at some point, more spend in a channel yields less incremental outcome. The shape of this curve—how fast returns diminish and where the inflection point sits—is another modeller judgment call. If your analyst assumes steep saturation for social media, the model will recommend capping social spend early. If they assume a gradual curve, the model will recommend pouring more money in. Same data, different assumption, opposite recommendation. The model did not change its mind. The human behind it made a different guess.
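The sketch below uses a Hill-style response curve, one common saturation form. The parameters are invented, and that is exactly the point: at the same spend level, one parameter set says the channel is saturated and the other says keep spending.

```python
# Hill-style saturation: the half-saturation point and slope are analyst
# choices, and at a given spend level they flip the marginal-return story.
def hill(spend: float, half_sat: float, slope: float) -> float:
    """Normalized response in [0, 1)."""
    return spend**slope / (spend**slope + half_sat**slope)

spend = 120.0
for label, half_sat, slope in [("steep", 40.0, 4.0), ("gradual", 80.0, 1.0)]:
    response = hill(spend, half_sat, slope)
    marginal = hill(spend + 1, half_sat, slope) - response
    print(f"{label:8s} curve: response={response:.3f}, "
          f"next dollar adds {marginal:.5f}")
# steep:   response=0.988, next dollar adds ~0.00040  -> "cap spend now"
# gradual: response=0.600, next dollar adds ~0.00199  -> "keep spending"
```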
4. The Aggregation Illusion
Most MMMs operate at a weekly or monthly granularity, aggregating all activity within a channel into a single number. This obscures enormous variation. Your Meta spend might include top-of-funnel video prospecting, mid-funnel retargeting, and bottom-funnel dynamic product ads. Lumping them together and assigning a single ROI to "Meta" is like averaging the temperature of an oven and a freezer and calling the kitchen comfortable. Channel-level MMM outputs mask the creative, audience, and placement differences that actually determine performance.
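The arithmetic of that masking takes only a few lines; the sub-campaign figures below are invented for illustration.

```python
# Blended channel ROAS hides opposite performance in sub-campaigns.
# All figures are invented for illustration.
spend = {"prospecting_video": 60_000, "retargeting": 25_000, "dpa": 15_000}
revenue = {"prospecting_video": 48_000, "retargeting": 95_000, "dpa": 52_000}

for campaign in spend:
    print(f"{campaign}: ROAS = {revenue[campaign] / spend[campaign]:.2f}")

blended = sum(revenue.values()) / sum(spend.values())
print(f"blended channel ROAS = {blended:.2f}")
# Prospecting runs at 0.80, retargeting at 3.80, DPA at 3.47 -- yet the
# single number the channel-level MMM sees is a comfortable 1.95.
```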
5. The Stationarity Assumption
Regression models assume that the relationships in historical data will persist into the future. Markets, consumer preferences, competitive landscapes, and platform algorithms change constantly. A model built on 2024 data assumes 2024 conditions will hold in 2026. If your competitor launched a massive brand campaign in Q3 2024 that depressed your organic traffic, the model might permanently undervalue your organic channel—even though the competitive pressure has since subsided. Historical relationships are not destiny, and models that treat them as such generate recommendations that age badly.
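One simple diagnostic is to refit the model on rolling windows and watch whether the coefficients hold steady. The sketch below simulates a mid-history shift in responsiveness, of the kind a competitor launch might cause, and shows how window-by-window fits expose it.

```python
# Rolling-window refits expose coefficient drift that a single fit on
# the full history would average away. Data and the shift are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
weeks = 156
spend = rng.uniform(50, 150, weeks)
# True responsiveness drops halfway through, e.g. a competitor entered.
beta = np.where(np.arange(weeks) < 78, 2.0, 0.8)
sales = 100 + beta * spend + rng.normal(0, 20, weeks)

window = 26
for start in range(0, weeks - window + 1, window):
    sl = slice(start, start + window)
    fit = sm.OLS(sales[sl], sm.add_constant(spend[sl])).fit()
    print(f"weeks {start:3d}-{start + window - 1:3d}: "
          f"spend coefficient = {fit.params[1]:.2f}")
# Early windows hover near 2.0, later windows near 0.8; a single
# full-history fit reports a blend that was never true in any period.
```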
The Real Cost of Trusting the Wrong Numbers
When flawed model outputs drive budget allocation, the consequences compound. We have seen this play out in a disturbingly consistent pattern across industries.
First, the model under-credits brand marketing because brand effects are diffuse, delayed, and hard to isolate. Second, the CMO shifts budget from brand to performance channels because the model says performance ROI is higher. Third, short-term conversion metrics improve—confirming the model's recommendation and reinforcing the cycle. Fourth, over twelve to eighteen months, brand search volume declines, organic traffic erodes, paid CPAs rise as the brand loses pricing power, and the business enters a slow spiral that is invisible in quarterly reporting but devastating in annual comparisons.
This is not theoretical. Byron Sharp's research at the Ehrenberg-Bass Institute has documented repeatedly that brands which cut upper-funnel investment in favor of activation spend lose mental availability—the probability of being thought of at the moment of purchase—and subsequently lose market share. The model told them to cut. The model was optimizing for a metric. The business needed something the metric could not capture.
How to Actually Interrogate Your Model
The solution is not to discard MMM; treated as one input among several, it is a useful directional tool. The solution is to stop treating model outputs as verdicts and start treating them as hypotheses that require validation. Here is the framework we use at Aragil when auditing a client's attribution ecosystem.
Step 1: Demand the assumption log. Every model is built on dozens of explicit and implicit assumptions. Ask your vendor or internal team to produce a written document listing every assumption—decay rates by channel, saturation curve parameters, variables included and excluded, data transformations applied, time windows selected. If they cannot produce this document, the model is a black box and its outputs are not trustworthy.
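There is no standard format for an assumption log. One lightweight option, sketched below with illustrative field names, is a version-controlled structure the whole team can read and diff.

```python
# One lightweight way to keep an auditable assumption log: a version-
# controlled structure the whole team can diff. Field names illustrative.
import json

assumption_log = {
    "model_version": "2026-Q1",
    "time_window": {"start": "2024-01-01", "end": "2025-12-31", "grain": "weekly"},
    "variables_included": ["tv_spend", "search_spend", "social_spend",
                           "price_index", "holiday_flag"],
    "variables_excluded": ["competitor_sov", "pr_coverage"],  # document why!
    "adstock_decay": {"tv": 0.7, "search": 0.2, "social": 0.4},
    "saturation": {"form": "hill", "tv": {"half_sat": 120, "slope": 2.0}},
    "transformations": ["log1p on spend variables"],
}

print(json.dumps(assumption_log, indent=2))
```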
Step 2: Run sensitivity analysis. Take the three most consequential assumptions and vary them by plus or minus 20%. If the model's top-line recommendations flip when you adjust a single decay parameter, the model is fragile and its outputs should be treated with extreme caution. Robust models produce directionally stable recommendations across reasonable assumption ranges.
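In practice this is a short loop: re-transform the inputs under each assumption, refit, and compare. The sketch below varies a TV decay assumption by plus or minus 20% on simulated data, redefining the adstock helper from the earlier sketch so it runs standalone.

```python
# Sensitivity sketch: vary the TV decay assumption by +/-20% and check
# whether the fitted TV coefficient stays directionally stable.
import numpy as np
import statsmodels.api as sm

def geometric_adstock(spend, decay):
    out, carry = np.zeros_like(spend), 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(11)
weeks = 104
tv = rng.uniform(50, 150, weeks)
search = rng.uniform(20, 80, weeks)
sales = (100 + 1.2 * geometric_adstock(tv, 0.6)
         + 2.0 * search + rng.normal(0, 30, weeks))

base_decay = 0.6
for decay in (base_decay * 0.8, base_decay, base_decay * 1.2):
    X = sm.add_constant(np.column_stack([geometric_adstock(tv, decay), search]))
    fit = sm.OLS(sales, X).fit()
    print(f"decay={decay:.2f}: tv coefficient = {fit.params[1]:.2f}")
# If the coefficient, or the budget recommendation built on it, swings
# wildly across this range, the model is fragile.
```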
Step 3: Cross-reference with incrementality tests. The gold standard for measuring channel contribution is a controlled experiment—geo-based holdout tests, matched market tests, or platform-level conversion lift studies. If your MMM says Meta delivers a 3.2x ROAS but a geo-holdout test shows 1.8x, you have a calibration problem. Use experimental results to recalibrate your model, not the other way around. At Aragil, we build incrementality testing into every performance engagement specifically because models alone are insufficient.
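One common calibration approach is to scale the model's channel estimate by the ratio of experimentally measured lift to modelled lift. The sketch below uses the illustrative figures from this paragraph.

```python
# Calibration sketch: scale the model's channel estimate toward the
# experimentally measured lift. All figures are illustrative.
mmm_roas = 3.2          # what the mix model reports for the channel
geo_holdout_roas = 1.8  # what a controlled geo experiment measured

calibration_factor = geo_holdout_roas / mmm_roas
print(f"calibration factor: {calibration_factor:.2f}")

# Apply the factor to the model's incremental-revenue estimate before
# using it for allocation; the experiment, not the model, is ground truth.
mmm_incremental_revenue = 640_000  # hypothetical model output
calibrated = mmm_incremental_revenue * calibration_factor
print(f"calibrated incremental revenue: ${calibrated:,.0f}")
```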
Step 4: Check for face validity. Does the model's output match what you observe qualitatively? If the model says cutting television spend by 30% will have no impact on sales, but your sales team reports that customers consistently cite your TV ads as their first brand touchpoint, there is a disconnect. Qualitative intelligence is not inferior to quantitative output—it is a necessary complement.
Step 5: Demand out-of-sample validation. A well-built model should be tested on data it was not trained on. Ask your vendor what the model's out-of-sample prediction error is. If they have not tested this, the model has been over-fit to historical data and its forward-looking recommendations are unreliable.
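For a time-series model the holdout must be the most recent period, never a random split, or the test leaks the future into the past. A minimal sketch on simulated data:

```python
# Out-of-sample check: fit on all but the final quarter, then score
# predictions on the weeks the model never saw. Data is simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
weeks = 104
spend = rng.uniform(50, 150, weeks)
sales = 100 + 1.5 * spend + rng.normal(0, 20, weeks)

holdout = 13  # final quarter held out
X = sm.add_constant(spend)
fit = sm.OLS(sales[:-holdout], X[:-holdout]).fit()

pred = fit.predict(X[-holdout:])
mape = np.mean(np.abs((sales[-holdout:] - pred) / sales[-holdout:])) * 100
print(f"out-of-sample MAPE over final {holdout} weeks: {mape:.1f}%")
# A model that has never been scored this way has no demonstrated
# forward-looking accuracy, whatever its in-sample fit looks like.
```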
The Triangulation Imperative
The most sophisticated marketing organizations do not rely on a single measurement methodology. They triangulate. MMM provides a macro, top-down view of channel contribution. Multi-touch attribution (MTA) provides a micro, user-level view—though it carries its own biases around cookie limitations and walled gardens. Incrementality experiments provide causal evidence for specific channels at specific moments.
When all three methodologies point in the same direction, you can invest with confidence. When they diverge, you have identified an area that requires deeper investigation before committing budget. This triangulated approach is more expensive and more complex than trusting a single model. It is also dramatically more accurate.
The marketing measurement landscape is evolving rapidly. Privacy regulations are shrinking the data available to user-level attribution. Platform-reported metrics are increasingly self-serving. In this environment, the organizations that will allocate capital most efficiently are not the ones with the fanciest model—they are the ones with the most rigorous process for questioning every number that lands on the CMO's desk.
Stop Quoting. Start Questioning.
The next time someone presents you with a waterfall chart showing precise channel ROI figures, resist the urge to write those numbers on a whiteboard. Instead, ask five questions: What assumptions produced these numbers? What happens if those assumptions are wrong? What data was excluded? Has this been validated experimentally? And does this match what we see in the real world?
A model that cannot withstand these questions was never trustworthy in the first place. A model that can is a genuine strategic asset. The difference between the two is not the sophistication of the algorithm—it is the rigor of the people using it.
Your marketing model is not lying to you on purpose. It is doing exactly what it was designed to do: produce an output based on the inputs and constraints it was given. The question is whether anyone bothered to check if those inputs and constraints reflect reality. In most organizations, the answer is no. And that is the most expensive oversight in modern marketing.
FAQ: Marketing Mix Models and Attribution
What is marketing mix modelling and why do marketers use it?
Marketing mix modelling is a statistical technique that uses historical data—advertising spend, pricing, seasonal factors, and external variables—to estimate each marketing channel's contribution to business outcomes like revenue or conversions. Marketers use it because it provides a top-down, privacy-compliant view of channel effectiveness without relying on individual user tracking. However, its outputs are estimates shaped by the assumptions built into the model, not absolute measurements of channel value.
How can I tell if my MMM outputs are unreliable?
Several red flags indicate unreliable model outputs. If your vendor cannot produce a written list of the assumptions used to build the model, that is a problem. If small changes to decay rates or saturation parameters produce dramatically different recommendations, the model is fragile. If the model's conclusions contradict what your sales team observes qualitatively, or if the model has never been validated against controlled experiments, treat its outputs as directional hypotheses rather than actionable facts.
What is the difference between MMM and multi-touch attribution?
MMM is a top-down, aggregate approach that uses historical data to estimate channel-level contribution without tracking individual users. Multi-touch attribution (MTA) is a bottom-up, user-level approach that traces individual customer journeys across touchpoints. MMM is better for understanding macro channel allocation, while MTA is better for optimizing within-channel tactics. Both have significant limitations—MMM relies on assumptions and aggregation, while MTA is constrained by cookie deprecation, walled gardens, and cross-device tracking gaps. The strongest measurement strategies use both alongside incrementality experiments.
How often should a marketing mix model be recalibrated?
At minimum, models should be recalibrated quarterly against fresh data and validated annually against controlled incrementality experiments. Major business changes—new product launches, significant competitor moves, market expansions, or platform algorithm shifts—should trigger immediate recalibration. A model built on 2024 data may produce misleading recommendations in 2026 if the competitive landscape or consumer behavior has shifted materially.
Can small and mid-size businesses benefit from marketing mix modelling?
Traditional MMMs require large datasets and significant investment, which can make them impractical for smaller businesses. However, lightweight alternatives exist. Open-source frameworks such as Google's Meridian (a Bayesian MMM) or Meta's Robyn can produce directional insights with smaller datasets. For businesses spending under $500K annually on media, simpler approaches—incrementality tests, platform-level conversion lift studies, and disciplined A/B testing—often provide more actionable insights at a fraction of the cost. The principle remains the same regardless of budget: never rely on a single measurement source to drive allocation decisions.
What role does brand marketing play in marketing mix model accuracy?
Brand marketing is the single largest source of misattribution in most marketing mix models. Brand effects—awareness, consideration, trust, mental availability—are diffuse, delayed, and difficult to isolate statistically. Because MMMs struggle to capture these effects, they systematically under-credit brand channels like television, sponsorships, and content marketing while over-crediting lower-funnel channels that benefit from brand-driven demand. Organizations that recognize this bias and supplement their MMM with brand tracking studies and incrementality tests make significantly better allocation decisions.