The Martech Trap: Sutherland's Critical Warning

Why marketing technology stacks hurt performance when used without judgment

Author: Ara Ohanian
Published: October 16, 2025
Updated: April 16, 2026

The Dashboard Doesn't Know What It's Looking At

Here's the uncomfortable part of Rory Sutherland's argument about martech, the part that most marketers nod along to and then completely ignore in their own operations: the tools do not understand the business they are measuring. They measure what they can measure. They optimize what they can optimize. And the boundary between "what the tool can see" and "what actually matters" is a gap most marketing departments have stopped noticing exists.

Sutherland's warning — that powerful marketing tools in the wrong hands become dangerous — is not new. But it is more relevant in 2026 than it was when he first made it, because martech stacks have grown, attribution platforms have become more sophisticated, AI-driven analytics have entered the workflow, and the space between "data" and "truth" has widened rather than narrowed. More measurement has produced more confidence in less accurate conclusions.

This article is not a rejection of marketing technology. Our team at Aragil uses it extensively — across performance marketing, CRO, and analytics. But the way we use it, and the way most organizations use it, are different in specific ways that determine whether the tools become force multipliers or expensive traps.

The Measurability Bias: When Your Tools Redefine Your Strategy

Every marketing tool has a built-in worldview. Google Analytics sees the world as sessions and conversion events. Meta's Ads Manager sees it as impressions, clicks, and attributed sales within a specific window. Attribution platforms see it as touchpoints along a path. Each tool captures a narrow slice of what's actually happening in the market, and presents that slice as if it were the complete picture.

Over time — and this is the specific failure mode Sutherland identifies — the tool's worldview becomes the organization's worldview. The things the tool can measure become the things the organization values. The things the tool cannot measure become invisible. This is not a conscious decision. It happens gradually, through the accumulation of weekly reports, quarterly reviews, and performance metrics that all reference the same narrow set of indicators.

The result is a specific pattern we see repeatedly when we audit client marketing operations. A brand has spent years optimizing for click-through rate, conversion rate, and cost per acquisition — the metrics their tools measure well. Meanwhile, they have no idea whether their brand is becoming more or less trusted, whether customers recommend them to others, whether their positioning resonates more strongly than their competitors', or whether their campaigns are building long-term pricing power. These unmeasured dimensions are where most of the real marketing value lives, and they have been systematically ignored because no tool in the stack reports on them.

The fix is not to buy more tools. It is to explicitly maintain a parallel track of judgment-based assessment alongside tool-based measurement. Someone on the team needs to be asking questions the dashboard cannot answer, and those questions need to carry weight in strategic decisions.

Attribution Theater: The Most Expensive Fiction in Marketing

Attribution platforms are the most technologically sophisticated part of most martech stacks. They are also, in many cases, the most misleading. The attribution models that drive billions of dollars in budget allocation decisions are built on assumptions that every practitioner knows are wrong and most stakeholders pretend are right.

Last-click attribution credits the final touchpoint before conversion — which means it systematically undervalues every earlier interaction that built awareness, trust, or consideration. First-click attribution does the opposite — crediting the initial touchpoint while ignoring everything that actually converted the customer. Data-driven attribution uses algorithms to distribute credit across touchpoints, but those algorithms are calibrated on the same incomplete dataset that caused the problem in the first place.

None of these models capture what actually happens in customer decision-making. A customer sees your brand on LinkedIn, forgets about it, sees a friend mention it three weeks later, reads a comparison article, searches your brand name, and converts. Your attribution platform will credit the paid search click at the end — and your team will respond by shifting more budget to paid search, away from the LinkedIn presence and the content marketing that actually drove the consideration.
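The distortion is easy to see in code. Below is a minimal sketch (not any platform's actual algorithm) of how last-click, first-click, and a linear baseline split credit across the hypothetical journey above; the touchpoint names are illustrative.

```python
# One hypothetical customer journey, in order of occurrence.
journey = ["linkedin_ad", "word_of_mouth", "comparison_article", "paid_search"]

def last_click(touchpoints):
    """All credit goes to the final touchpoint before conversion."""
    return {t: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, t in enumerate(touchpoints)}

def first_click(touchpoints):
    """All credit goes to the initial touchpoint."""
    return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(touchpoints)}

def linear(touchpoints):
    """Equal credit to every touchpoint -- one simple multi-touch baseline."""
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

print(last_click(journey))   # paid_search gets 100% of the credit
print(first_click(journey))  # linkedin_ad gets 100%
print(linear(journey))       # 25% each
```

Note the deeper problem: in a real platform, the word-of-mouth touchpoint never appears in the data at all, so no credit rule, however clever, can allocate anything to it.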

This is not a hypothetical scenario. It is the observed pattern across many client engagements where we've done post-hoc analysis. Teams that trust their attribution platforms systematically over-invest in bottom-funnel channels and under-invest in the top-funnel activity that creates the demand those bottom-funnel channels capture. The tool is working as designed. The problem is that the design doesn't match the reality.

The practical response we've developed: treat attribution data as directional, not definitive. Use it to identify patterns worth investigating, not to make final budget allocation decisions. Maintain skepticism about the model's assumptions. Cross-reference with incrementality testing, brand tracking studies, and qualitative customer research whenever possible.
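Incrementality testing is the simplest of those cross-references: randomly hold out part of the audience and compare conversion rates. A sketch, with invented numbers, of the core calculation:

```python
def incremental_lift(conv_exposed, n_exposed, conv_holdout, n_holdout):
    """Relative lift of the exposed group over a randomized holdout.

    Returns (exposed_rate, holdout_rate, relative_lift)."""
    rate_e = conv_exposed / n_exposed
    rate_h = conv_holdout / n_holdout
    return rate_e, rate_h, (rate_e - rate_h) / rate_h

# Hypothetical campaign: 50,000 users exposed, 50,000 randomly held out.
rate_e, rate_h, lift = incremental_lift(1700, 50_000, 1500, 50_000)
print(f"exposed {rate_e:.2%}, holdout {rate_h:.2%}, lift {lift:.1%}")
```

A channel the attribution model credits heavily can still show near-zero lift in a test like this, which is exactly the signal that the attributed credit was capturing demand rather than creating it.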

The Optimization Trap: Improving Something Into Irrelevance

A specific, almost comical failure mode emerges when marketing teams become too good at using their tools. They optimize with increasing precision toward metrics that no longer matter, producing improvements that look impressive in reports and generate no corresponding business value.

A real example: an eCommerce client came to us with a landing page that had been A/B tested extensively over 18 months. Conversion rate had improved from 2.1% to 3.4% — a 62% relative increase, a marketing team's dream. But overall revenue had been flat for the entire period, and customer lifetime value had declined. What happened?

The optimization had systematically selected for a specific type of customer — the one who converted quickly on discount-driven messaging, with low commitment to the brand and high price sensitivity. The landing page variants that converted at higher rates were the ones that promised immediate savings and minimized commitment. The team had optimized their way toward a customer segment that was cheaper to acquire and vastly less valuable over time. The tool showed success. The business was quietly degrading.
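The arithmetic behind that quiet degradation is worth making explicit. The conversion rates below come from the case; the dollar figures are invented to illustrate how a 62% conversion lift can leave revenue flat when the optimization selects for lower-value customers.

```python
visitors = 100_000  # hypothetical monthly traffic, held constant

before = {"cvr": 0.021, "ltv": 180.0}   # pre-optimization
after  = {"cvr": 0.034, "ltv": 111.0}   # post-optimization: discount-driven buyers

rev_before = visitors * before["cvr"] * before["ltv"]
rev_after  = visitors * after["cvr"]  * after["ltv"]

print(f"CVR lift: {(after['cvr'] / before['cvr'] - 1):.0%}")   # ~62%
print(f"Revenue before: ${rev_before:,.0f}")
print(f"Revenue after:  ${rev_after:,.0f}")
```

The dashboard reports the first line and celebrates. The business lives on the last two, which barely move.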

This is the pattern Sutherland warns about, expressed concretely. When you optimize against a proxy metric, the optimization always works — you will improve that metric. But the metric is a proxy, and the relationship between the proxy and the actual business outcome is never as stable as the tool assumes. Optimize aggressively enough, and you break the relationship entirely.

The defense against this is deceptively simple: regularly audit whether the metrics you're optimizing still correlate with the outcomes you care about. If conversion rate has improved but revenue hasn't, the metric has decoupled from the outcome. If engagement has increased but retention has dropped, same story. The tools will never tell you this, because the tools don't know what the business actually needs. Only humans with context can make that call.
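That audit can start as a ten-line script: correlate the proxy metric against the outcome over recent periods. A sketch with hypothetical weekly data, where conversion rate keeps climbing while revenue stays flat:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly data: the proxy rises, the outcome doesn't.
weekly_cvr     = [0.021, 0.024, 0.027, 0.029, 0.031, 0.034]
weekly_revenue = [380, 377, 383, 375, 379, 377]  # in $k, essentially flat

r = pearson(weekly_cvr, weekly_revenue)
print(f"metric-outcome correlation: r = {r:.2f}")
# A weak or negative r says the proxy has decoupled from the outcome.
```

This is deliberately crude (no seasonality, no lag), but even the crude version surfaces the decoupling long before a quarterly review does.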

The Automation Fallacy: When Efficiency Replaces Thinking

Marketing automation is the single most overhyped capability in the martech space. Not because the tools are bad — they work as advertised — but because the category has been sold as a substitute for marketing thinking rather than a complement to it.

The pitch goes like this: automation handles the repetitive work, freeing your team to focus on strategy. In practice, what usually happens is that automation handles the repetitive work, and the team uses the freed time to configure more automation. The strategic thinking that was supposed to emerge from the efficiency gain never materializes, because the tools have replaced both the tasks AND the questions the tasks used to force.

Consider email marketing workflows. Before automation, a team had to manually decide who got which message when. That friction forced uncomfortable questions: Why are we sending this? Is this actually useful to the recipient? What would happen if we didn't send it? With automation, the workflow runs continuously, and nobody revisits those questions. The email goes out because the automation is configured to send it. Whether it should still be sent — whether the audience has changed, whether the message has aged, whether the entire program is still productive — is a question that often goes unasked for years.

The antidote is deliberate friction. Schedule regular reviews of every automated process — quarterly at minimum — where the team is required to justify its continued existence based on current evidence. Automations that can't be justified get turned off. This sounds obvious. It is rarely practiced, because automation's main promise is that you don't have to think about it, and the organizations that use it most are the ones that have most fully accepted that promise.

The AI Layer: More Power, Same Problem

The integration of AI into marketing tools has not solved Sutherland's critique — it has intensified it. AI-powered analytics platforms, predictive attribution models, and automated optimization engines are faster, more sophisticated, and more confident in their outputs than the previous generation of tools. But they are still optimizing against the same flawed proxies, still blind to the same unmeasured dimensions, still incapable of understanding the business context within which their outputs will be applied.

The specific risk with AI-layered martech is that the outputs look more authoritative. A dashboard reporting "our model predicts a 23% lift from this audience segment" feels more trustworthy than a human analyst saying "this segment looks promising, here are some caveats." The confidence is often misplaced. The underlying data limitations haven't changed. The model is just better at hiding them.

The organizations getting the most value from AI-enhanced martech are the ones that have reversed the typical relationship: they use AI to generate hypotheses that humans then evaluate, rather than using AI to make decisions that humans then execute. The human stays in the critical position — deciding what's worth pursuing, what's worth ignoring, and when the tool's output contradicts business reality.

This approach is slower than full automation. It produces better decisions. The difference compounds over time, because the organizations that maintain human critical thinking as a core competency can adapt when the tools mislead them. The organizations that have outsourced judgment to their stack cannot.

What a Healthy Relationship With Martech Looks Like

A few principles we've developed over 15+ years of client work that separate healthy martech usage from the trap:

  • Every tool must serve a specific, stated purpose. If you can't articulate what question a tool answers and what decision it enables, the tool doesn't belong in your stack. Complexity without purpose is a liability.
  • Maintain parallel tracks of tool-based and judgment-based assessment. The tools measure what they can measure. Humans need to be responsible for tracking the dimensions the tools miss — brand perception, positioning strength, competitive dynamics, customer sentiment. These parallel tracks inform each other but neither replaces the other.
  • Audit the relationship between proxy metrics and business outcomes quarterly. If the metrics you're optimizing have stopped correlating with the results you care about, the optimization is making you worse, not better.
  • Use attribution data as directional, not definitive. The models are always wrong in specific ways. Cross-reference with incrementality testing and qualitative research before making major budget shifts.
  • Subject automations to regular mandatory review. Automated processes must earn their continued existence by demonstrating current value, not by having been valuable once.
  • Keep humans in the critical path for strategic decisions. AI and automation can generate options, surface patterns, and execute at scale. The judgment about which options matter and why must remain human, because only humans have the full business context required to make it well.

Frequently Asked Questions

What is the biggest problem with marketing technology stacks today?

The biggest problem is measurability bias — the tendency for what tools can measure to become what organizations optimize for, while the unmeasured dimensions of marketing performance (brand trust, positioning strength, long-term customer value) get systematically ignored. This isn't a tool problem. It's an organizational discipline problem. The fix is maintaining explicit parallel tracks of tool-based and judgment-based assessment.

Is Rory Sutherland saying marketers should stop using martech tools?

No. Sutherland's argument is about misuse, not use. Marketing technology in the hands of critical thinkers who understand its limitations is powerful and valuable. Marketing technology in the hands of teams that treat it as an oracle, that optimize against its proxies without questioning them, that let the tool's worldview replace strategic thinking — that is the dangerous scenario. The distinction is operational, not technological.

Why is attribution data misleading?

Attribution platforms are built on assumptions about how customers make decisions that don't match how customers actually make decisions. No attribution model can fully capture cross-channel influence, offline touchpoints, word-of-mouth effects, or the long lead times of consideration in most markets. The data is useful for identifying patterns worth investigating but unreliable for making final budget decisions. Incrementality testing and brand tracking provide the triangulation attribution alone cannot.

How do I know if my team has fallen into the martech trap?

Warning signs include: optimizing a metric aggressively for months without corresponding business improvement, inability to articulate what your brand is becoming beyond the performance metrics you track, growing attribution certainty combined with shrinking understanding of why customers actually buy, and automation workflows that nobody can defend on current evidence. If several of these apply, your stack is shaping your strategy more than your strategy is shaping your stack.

How does AI change the martech trap problem?

AI intensifies rather than solves the problem. AI-enhanced tools produce more confident-looking outputs based on the same flawed proxies and incomplete data the previous generation used. The outputs are faster and more sophisticated, but they carry the same fundamental limitations, now hidden behind more authoritative presentation. The organizations using AI well treat it as a hypothesis generator for human evaluation, not as a decision-maker for human execution.

What marketing activities should not be automated?

Strategic positioning decisions, creative direction, brand voice development, customer relationship judgment in high-stakes situations, and any activity where the cost of a wrong decision significantly exceeds the time cost of human thought. The general principle: automate what is repetitive, predictable, and reversible. Keep humans in the critical path for decisions that are novel, high-stakes, or shape the organization's identity.

How should a marketing team start fixing an over-reliance on martech?

Start with one audit: identify the top five metrics your team reports on and trace each one back to the business outcome it's supposed to predict. For each metric, ask whether the correlation between the metric and the outcome is still strong. Any metric where the correlation has weakened is a sign that optimization has decoupled from value. This exercise typically reveals two to three places where tool-driven work has been destroying rather than creating value, and those are the highest-priority areas to restructure.