
Data Has Its Limits: Some Thoughts on Long-Term Brand Trust

By Abram Gregory · Business Analysis

Data-driven marketing is not the problem. Most of the time, the issue is simpler: companies confuse what their data can measure with what their strategy actually needs to protect.

Dashboards are excellent at showing demand capture—clicks, conversions, CAC, ROAS, short-term revenue. They struggle to show demand creation—trust, loyalty, perceived quality, mental availability, and pricing power.

That distinction matters. The easiest things to measure tend to be the easiest to over-optimize. A company can get very good at harvesting people who were already close to buying while quietly getting worse at creating future customers.

The Dashboard Is Not the Business

The evidence does not suggest performance marketing is inherently flawed. It suggests imbalance is costly. WARC and Analytic Partners find that over-investing in performance channels can reduce ROI by 20% to 50%, while shifting toward a more balanced mix can increase revenue ROI by 25% to 100%, with a median lift around 90%.

That gap points to a common mistake. Companies build dashboards around the channels closest to purchase, then assume those channels created the demand. In reality, short-term activation and long-term brand building are doing different jobs. One captures demand. The other determines whether demand exists in the first place.

Short-Term Metrics Miss Slow-Moving Assets

Trust does not collapse overnight, and brand equity does not compound in a single reporting window. That is exactly why dashboards miss them. Thinkbox’s analysis of 141 brands and £1.8 billion in spend found that most advertising profit comes from long-term effects, meaning short-window ROAS often undercounts what actually drives returns.

Kantar’s trust research reaches a similar conclusion. High-trust companies grew 115% more than low-trust peers over a decade. Its work on pricing power shows that strong brands rely less on discounting and can sustain higher margins. None of that shows up cleanly in CTR or conversion rate, but it shows up later in retention, resilience, and willingness to pay.

Attribution Can Be Deeply Misleading

One of the biggest failures in data-driven marketing is treating attribution as causation. A channel can appear to drive sales simply because it sits near the purchase. The eBay paid-search experiment is a clean example: measured returns were only a fraction of standard attribution estimates, and brand-keyword ads showed no measurable short-term benefit.

Groupon’s deindexing experiment showed the same issue from a different angle. When the company temporarily removed itself from Google, direct traffic fell by roughly 60%. A large portion of what analytics labeled “direct” was actually organic search. If the inputs are misclassified, the strategy built on top of them will be too.
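The gap between attributed and incremental sales is easy to demonstrate with simulated data. The sketch below is purely illustrative (the base rate, ad lift, and audience size are invented numbers, not figures from the eBay or Groupon studies): last-touch attribution credits an ad for every converting user who saw it, while a holdout comparison isolates the conversions the ad actually caused.

```python
import random

random.seed(0)

N = 100_000
BASE_RATE = 0.04   # hypothetical: share of users likely to buy anyway
AD_LIFT = 0.005    # hypothetical: true incremental effect of the ad

# Randomly hold out half the audience, as in a user-level experiment.
exposed = holdout = exposed_conv = holdout_conv = attributed = 0

for _ in range(N):
    sees_ad = random.random() < 0.5
    p = BASE_RATE + (AD_LIFT if sees_ad else 0.0)
    converted = random.random() < p
    if sees_ad:
        exposed += 1
        if converted:
            exposed_conv += 1
            attributed += 1  # last-touch: the ad gets full credit
    else:
        holdout += 1
        if converted:
            holdout_conv += 1

# Incremental conversions: exposed conversions minus what the holdout
# group implies would have happened without the ad.
incremental = exposed_conv - holdout_conv * (exposed / holdout)
print(f"attributed conversions:  {attributed}")
print(f"incremental conversions: {incremental:.0f}")
print(f"attribution overstates the ad's effect by ~{attributed / max(incremental, 1):.0f}x")
```

Because most converters would have bought anyway, the attributed number dwarfs the incremental one. That is the eBay result in miniature: proximity to the purchase is not the same as causing it.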

CRO Can Win the Test and Lose the Customer

Conversion optimization runs into the same problem when it is judged too narrowly. A test can lift conversion rate while increasing frustration, refunds, or distrust. If the metric only asks whether more people clicked today, it misses whether the experience made people less willing to come back tomorrow.

The extreme cases make the risk obvious. The FTC ordered Epic Games to pay $245 million over deceptive design practices, and by late 2024 it had already distributed more than $72 million of that in refunds. The monetization worked in the short term. The cost came later.

The Amazon Prime case reflects the same pattern at a larger scale, with a $2.5 billion settlement tied to enrollment and cancellation flows. A funnel can look efficient while quietly building legal risk and customer resentment.

Trust Is a Business Asset, Not a Soft Metric

Wells Fargo’s sales practices scandal is a classic example of metric failure. Cross-sell targets drove behavior that produced millions of unauthorized accounts. The company ultimately paid $3 billion, and after the scandal broke, new checking accounts dropped 44% while credit card applications fell 50%.

The issue was not marketing execution. It was measurement design. The organization optimized a visible metric while eroding the underlying asset that made the business viable.

Even Simple Tests Are Less Stable Than They Look

A/B testing does not fully solve this problem. It still depends on what you choose to optimize. Ron Kohavi’s work on experimentation warns that focusing only on revenue can improve short-term results while degrading long-term user experience.
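Kohavi's practical remedy is to score experiments on a combined criterion (an "overall evaluation criterion," or OEC) rather than revenue alone. The toy sketch below illustrates the idea; the metrics, weights, and numbers are hypothetical, not a recommended formula.

```python
from dataclasses import dataclass

@dataclass
class VariantResult:
    revenue_per_user: float   # short-term metric, in dollars
    day30_retention: float    # long-term proxy, 0..1
    complaint_rate: float     # guardrail, 0..1

def oec(r: VariantResult) -> float:
    """Toy overall evaluation criterion: rewards revenue only when
    retention holds up, and penalizes complaints heavily.
    Weights are illustrative, not prescriptive."""
    return (1.0 * r.revenue_per_user
            + 40.0 * r.day30_retention
            - 200.0 * r.complaint_rate)

control = VariantResult(revenue_per_user=2.00,
                        day30_retention=0.30, complaint_rate=0.010)
# The treatment lifts revenue 10% but hurts retention and drives complaints.
treatment = VariantResult(revenue_per_user=2.20,
                          day30_retention=0.27, complaint_rate=0.018)

print(f"control OEC:   {oec(control):.2f}")
print(f"treatment OEC: {oec(treatment):.2f}")
# Judged on revenue alone, the treatment wins; judged on the OEC, it loses.
```

The point is not the specific weights. It is that the choice of evaluation criterion, made before the test runs, determines which tradeoffs the experiment can even see.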

Even basic tactics are less reliable than they seem. A meta-analysis of 8,977 headline experiments found that increasing concreteness reduced click-through in more than half of cases. What works in one context often fails in another.

SEO Has the Same Blind Spot

SEO and content strategy are especially vulnerable to proxy metrics. Teams can chase clicks and rankings while gradually weakening credibility. Traffic can look strong even as trust erodes.

Google’s own guidance reflects this tension. Its systems emphasize helpful, reliable, people-first content and penalize scaled content abuse. The takeaway is straightforward: traffic is a signal, not proof of value.

A Better Way to Use Data

The solution is not less data. It is using data more precisely. Performance metrics are useful for understanding demand capture. Brand tracking, retention, pricing power, complaint rates, and repeat behavior are needed to understand whether demand is being created.

The most effective organizations track both. Short-term metrics for immediate efficiency. Long-term indicators for durability. When those two start to diverge, that is where the real strategic signal lives.
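That divergence check can be made mechanical. The sketch below is a minimal illustration, with invented series and a deliberately crude trend measure: it flags when a short-term metric and a long-term indicator are moving in opposite directions.

```python
def trend(series):
    """Average period-over-period change across the series."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

def diverging(short_term, long_term):
    """True when one metric trends up while the other trends down."""
    return trend(short_term) * trend(long_term) < 0

# Hypothetical quarterly data: efficiency improving, trust eroding.
roas = [3.1, 3.3, 3.4, 3.6, 3.8, 4.0]    # short-term: ROAS
trust_index = [72, 71, 70, 68, 66, 63]   # long-term: brand tracker score

if diverging(roas, trust_index):
    print("Demand capture is improving while demand creation weakens.")
```

A real version would smooth noise and test significance, but the shape of the signal is the same: when the dashboard metric and the slow-moving asset pull apart, that is the moment to investigate, not to celebrate.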

The Real Tradeoff

Data forces you to decide what matters enough to measure. If a tactic improves the funnel this month while weakening trust or teaching customers to expect manipulation, the spreadsheet will show a win even as the business takes on risk.

Over-optimization creates a specific failure mode. A company becomes extremely good at extracting value from existing demand while becoming worse at earning future demand.

Final Thought

Data has limits because businesses have memory. Customers remember whether a brand helped them, respected them, or tried to extract from them. Those impressions accumulate slowly, and they rarely show up cleanly in dashboards.

The goal is to use data without letting the most measurable signals crowd out the most important ones. Trust compounds quietly, but once it breaks, it is expensive to rebuild.

