Trust is the New Oil
From the Attention Economy to the Action Economy
A few weeks ago, I floated a half-baked idea during a presentation: “in the age of AI, trust is the new oil.” The more I bake it, the more I believe this phrase captures the possibility that AI will re-order the media economy as we know it. In many ways, it also points to a hopeful future, even for traditional media.
In this post, I’ll explain why.
Tl;dr:
For the last 25 years, the foundation of the web economy has been the monetization of attention.
That has funded a vast amount of content, but the zero-sum competition for our finite attention has a dark side too: sponsored links that are a diversion or a tax; apps and sites that manipulate our neurophysiology to hook us; content that is clickbait, ragebait, polarizing, and false; and unwelcome surveillance online.
There are also signs we are in the late stages of strip-mining attention and nearing a breaking point.
Here comes AI. It will likely devalue raw attention for two reasons: as creation costs fall, increasing noise will drown out signal on open platforms, such as the open web and social networks (despite platforms’ attempts to counteract it); and consumers will increasingly use agents and bots to manage it.
In turn, this will likely weigh on digital advertising. All that noise will degrade the value of impressions and the quality of data, especially at the top of the funnel. The rise of agents and bots may render many ads worthless—and not just on the open web, but even some social impressions and retail media traffic.
The big question: how does the web economy adjust? It will likely shift from an attention economy to an action economy, from one built on potential energy to kinetic energy.
The declining value of attention will be accompanied by increasing value of trust. Consumers are far more likely to act—or, eventually, empower their agents to act—when they trust the source. Just as data was the oil that fueled the attention economy, trust will be the oil that fuels the action economy.
There are two kinds of trust online: signal-based trust and earned trust. The former is easy to game, but the latter must be established and reinforced over time.
A web economy built on earned trust would look very different from what exists today. Among the implications: the value of human curation would rise as algorithms struggle to manage the noise; discovery would shift back from algorithmically-curated feeds to trusted follows; similarly, consumers would lean more on institutions with trusted brands; it would be harder for new creators and brands to establish trust; and advertisers would shift from paying for attention to paying for outcomes.
The fight for attention won’t ever fully go away and, with it, its negative consequences. But, on the margin, attention alone will become less profitable. That could shift both incentives and the balance of power online.
Why Attention Became the Primary Currency
We’re so accustomed to paying for content with our attention that we don’t think about it. But it didn’t have to be this way.
The web was built on open protocols: TCP/IP for routing and delivering packets, HTML for structuring web pages (including the links between them), HTTP for transferring pages between servers and browsers, SMTP for delivering mail between servers, etc. For a variety of reasons, however, there weren't protocols for things that, in hindsight, would've been very useful, like payments or verified identity. Perhaps that's because the web wasn't initially intended to support commercial activity (the National Science Foundation's NSFNET backbone initially prohibited it) and the early web community prioritized privacy and anonymity over verified identity. The reasons don't matter much.
In the absence of “rails” for native payments and identity—what Ethan Zuckerman called the “original sin of the internet” over a decade ago—the only viable model to support content creation was advertising: capture consumers’ attention and charge advertisers for the right to access some of it.
Today, while the internet supports subscriptions and commerce, almost every site and app is either partially or entirely advertising supported. Globally, digital advertising is a ~$750 billion business, representing 70% of all advertising spend—which is to say that it is more than twice as large as TV, print, radio, outdoor, and every other type of advertising combined (Figure 1).
Figure 1. Digital Dominates Global Advertising
Source: WPP, The Mediator.
The Problem With Attention
In some ways, advertising serves the public interest. It finances a vast amount of content that is available for “free.” When video and audio streaming services, games, and podcasts offer consumers the option of receiving ads in exchange for paying less or nothing (i.e., by offering ad-supported and more expensive ad-free tiers), many happily take the ad-supported choice.
It is increasingly clear, however, that this zero-sum competition to capture a finite amount of consumer attention—the so-called “attention economy”—puts the interests of ad-supported platforms and content creators at odds with public welfare.
Sponsored results are either a diversion or a tax. Consumers go to Google Search to find the best sources of information. Google’s interests are misaligned with that goal. It makes money by inserting sponsored search ads that often displace the most relevant organic results. When these sponsored results are the best source, that’s because the advertiser felt compelled to purchase the relevant keyword(s) to pre-empt competitors, which effectively raises costs for consumers. The same principle applies to sponsored results on Amazon or any retail media.
Product design is often manipulative. Social platforms hack our neurophysiology to increase time spent, often at the expense of consumers’ well-being. They use tactics like variable reward schedules (so-called “dopamine loops”), gamification, notifications to draw consumers back in, infinite scroll, etc.
The monetization of attention rewards sensational, enraging, polarizing, and false content. The battle for attention encourages creators (professionals and otherwise) to produce clickbait and rage-bait. And since this kind of emotion-baiting content tends to draw more attention, the platforms’ algorithms amplify it.
Privacy is collateral damage. To facilitate all this advertising, platforms and publishers collect all kinds of data about users: clickstream, location, behavioral, demographic, and economic—giving rise to another pejorative description of the web, “the surveillance economy.” Technically, users usually consent to this data collection, but it is often buried in massive terms and conditions that are onerous to read. Many feel that this “surveillance” is a violation of privacy that they had little practical ability to refuse.
Whether you call it the attention economy or the surveillance economy, many agree that this state of affairs is not good—for mental health (especially among young people), public civility, political discourse, or trust in information ecosystems.
It may also be unsustainable. Culturally, there are already signs that we are being pushed to the brink. You can see evidence of backlash everywhere: “digital detox” movements; the rise of minimalist phones and digital hacks to limit screen time; memes about digital burnout and doom-scrolling; subtle rejection of modernity by younger consumers (who are embracing vinyl, film cameras, and vintage shopping); and, thanks to efforts by people like social psychologist Jonathan Haidt, author of The Anxious Generation, a broader push to ban phones in school.
How AI Will Devalue Attention
With this as backdrop, enter GenAI. It promises (or, depending on your perspective, threatens) to break the back of the attention economy, for two reasons: content hyperinflation and, in response, the rise of AI intermediaries—bots and agents.
Content Hyperinflation Will Flood the System and Drown Out Signal
The tragedy of the commons occurs when individuals, acting in their own interests, overuse and deplete a cheap or free shared resource. There’s no accountability for taking more than your fair share, so everyone does it. The cows overgraze the commons, ruining the pasture for everyone.
As described above, we are already in the late stages of strip-mining our collective attention, a tragedy of the commons. AI is now exacerbating this. It will probably get much, much worse.
The Cost of Creation is Plummeting
Today, the cost of content creation is still relatively high. Although cheap consumer hardware (like the ubiquity of smartphones) and software tools have democratized the ability to create content, it still takes time and human effort. (Even what are colloquially referred to as “bot farms” are really often human farms—real people, often in low-wage countries, creating content at scale.)
GenAI will increasingly reduce those costs. Relatively soon, the open web and social feeds will likely be overrun by entirely synthetic content: articles, videos, songs, posts, podcasts, and entire websites. It is already feasible to deploy AI agents that scan platforms for trending topics, automatically create adjacent content, push it out on the network, monitor results, and amplify the most promising signals.
According to an article in The Times from a few months ago, of the top 20 most-viewed posts on Facebook in the U.S., four were “obviously AI.” Soon, it won’t be as obvious. As AI pushes production costs toward zero, even minuscule monetization will provide enough incentive to publish virtually infinite content.
The so-called “Dead Internet Theory” is a conspiracy theory that most traffic on the web is generated by bots and represents a coordinated effort by nefarious state actors to manipulate the public. While I’m not so sure about the conspiracy part, the idea that a growing proportion of the web will be entirely synthetically generated is increasingly feasible technologically.
The Email Analogy
As an analogy, think about what’s happened to email over the last 20-30 years. In the late 1990s and early 2000s, it was still relatively novel to receive email, and email was therefore high signal. Today, the noise has drowned out the signal. We are overwhelmed by email spam, with spammers and email filters engaged in a never-ending cat-and-mouse game. Those filters are pretty good, but the costs to send bulk mail are so low that spammers still have an economic incentive to try. Even with the best filters, spam slips through and, just as bad, legitimate emails get filtered out. (I recently found a trove of responses to the emails I send out from this Substack buried in my spam folder.)
The result is that email has been devalued as a communications medium. Click-through rates on legitimate emails have declined from 4-6% in 2010 to about 2% today. While it is still used at work and for a lot of marketing, its prior role has splintered into numerous apps:
Personal communications - WhatsApp, iMessage, Signal
Alerts - Push notifications, SMS
Group discussions - Discord, group chat
Information sharing - Slack, Facebook groups
There is a clear risk that, in the same way, a flood of content will drown out the signal online. This is especially true for any open platform. The open web is probably a goner: AI will flood the internet with low-quality web pages and make it easier to automate the SEO hacking that surfaces those pages.
In the case of social, platforms will adjust their algorithms in an attempt to weed out this content, but it may be difficult to determine what “this content” is. At Google I/O 2025 last month, Google announced SynthID Detector, which uses watermarking to identify AI content. But this only works for AI content created using Google tools. Plus, over time, it is likely that almost all human-generated content will include some synthetic elements—so, it is not clear this will help. How much AI is too much AI? What exactly are we weeding out?
At best, the efforts to filter out “AI slop” will prove to be another cat-and-mouse game, as the search engines, platforms, and the sloppers jostle to stay one step ahead. Trust and usage of both the open web and social feeds may be the collateral damage.
The Rise of Agents
Even more significant than the glut of content is the potential disintermediating effects of bots and agents.
There’s a running gag in the 1985 movie Real Genius, starring Val Kilmer as a wisecracking college student. The film occasionally cuts to an impossibly boring science class. Over the semester, one by one, the students gradually replace themselves with tape recorders, left on their desks to record the lecture. By the end of the term, the entire class is full of tape recorders—including the professor’s lectern. One machine gives the lecture to the other machines.
Source: Sony Pictures.
It was an accidentally prescient joke. As AI-produced content proliferates, consumers will probably turn increasingly to chatbots and AI agents to manage the tsunami—filtering and prioritizing information and verifying authenticity.
The major platforms are already building the infrastructure: Meta is building AI agents across Instagram and WhatsApp; Apple has Apple Intelligence; ChatGPT has web browsing and plug-ins; Perplexity, etc. Late last year, Anthropic introduced the Model Context Protocol (MCP), an open-source framework that enables AI models to interoperate with any tool, site, app, or data layer. MCP also supports persistent memory, identity, and secure delegation. Assuming wide adoption, users will be able to delegate authority to agents to work with any site or app, have them remember preferences and decisions over time, and have them talk with other agents (kind of like the tape recorders).
Agents are the buzzword du jour, so it’s easy to get carried away and fall prey to what I call naive technological determinism—thinking that just because something can happen, it will. Agents will make sense for some use cases, but not others. A reasonable guiding principle is that consumers will delegate where the use case is functional, not experiential (i.e., those in which the user is just trying to accomplish a task vs. those for which there is pleasure in the task itself); and the evaluation criteria are objective, not subjective.
So, use cases that are functional and objective are more likely to be mediated through agents and bots, like pulling up the best flight options, recommending the best vacuum cleaner and comparison shopping across sites, searching for auto insurance, finding restaurants near the movie theater, or curating posts from information-dense social sites, like LinkedIn, Twitter, or Reddit. It is less likely that we will use agents to shop for a dress, make interior design choices, or scroll Instagram, TikTok, or Pinterest.
Potential Effects on Advertising
Rather than get bogged down in the specifics, here’s the basic idea: AI will probably make the web a lot noisier and mediate some actions on the web that consumers currently conduct directly. Both will devalue attention.
Let’s go slow here and think through the effects on advertising.
Most digital advertising is performance advertising, meaning that the advertiser is encouraging a specific action, such as buy, subscribe, request more information, etc. Even though the advertiser is seeking an outcome, it pays for attention, not outcomes. (An exception is affiliate marketing, which pays a bounty or commission on completed transactions.) In other words, the advertiser bears the risk that this attention will yield conversion. To use a metaphor from physics, advertisers are funding a system built on potential energy, hoping it will convert into kinetic energy.
It follows that the value and pricing of different types of impressions depends on the expected likelihood of conversion. A helpful framework for thinking about this is the advertising “funnel.” There are a few versions of advertising or marketing funnels, but the basic concept is that the top of funnel is awareness, the middle is consideration, and the bottom is intent (Figure 2).
Figure 2. Value of Impressions Increases as You Move Down Funnel
Source: The Mediator.
Figure 2 illustrates the idea that, to advertisers, an impression’s value and the consumer’s intent are directly correlated. Starting from the top: run-of-site and run-of-network display ads have the least intent and, therefore, the lowest value; contextual advertising is a little better, because at least the ads appear near relevant content; behavioral targeting is a little better still, since it enables advertisers to target the types of people who tend to transact; retargeting is even better, because it targets specific people (not types of people) who have already explored a brand or product or transacted in the past; retail media and search are the highest value, because these consumers are actively doing research or poised to purchase.
So, if there is hyperinflation of content and the use of bots and agents become commonplace, what happens to the value of inventory along this funnel?
A Content Glut Has Different Effects Up and Down Funnel
A glut of content would affect some types of inventory more than others.
Figure 3. Hyperinflation of Content Destroys Top of Funnel
Source: The Mediator.
As shown in Figure 3, a vast increase in content supply will probably further devalue programmatic advertising on the open web. It may also pressure contextual advertising, especially on unknown sites, because brands will be more concerned about content quality and brand safety. It will likely weigh on the value of some behavioral targeting, since the behavioral inputs may become less meaningful as AI noise pollutes the signal. Retargeting should be more insulated, since these users will have engaged in a specific behavior. Retail media and search will probably retain value or become relatively more valuable, since they enable advertisers to reach consumers who are not just passively consuming, but have high intent.
Agents Obsolete Advertising Altogether
For the reasons described above, consumers won’t delegate all their online activity to agents. But wherever they do, the value of ads will collapse—for the simple reason that agents are oblivious to them. Like the proverbial tree falling in the forest, ads will have no value if consumers don’t see them.
Concern that AI will disintermediate sites and platforms has been bubbling up over the last year or two. News publishers are deathly afraid that consumers will rely on AI news summaries and that even when the chatbot surfaces links, they won’t feel compelled to click through. This is partially why The New York Times was the first major media company to sue OpenAI. It is also why many publishers, such as Axel Springer, Conde Nast, Dotdash Meredith, Financial Times, and News Corp., among others, have struck licensing deals with OpenAI.
Last month, Apple SVP Services Eddy Cue made headlines when he said at an antitrust trial that Google searches fell in the Safari browser in April, for the first time. Why this happened isn’t clear, but it raised speculation that Google is effectively losing search share to LLMs with surfing capabilities, like ChatGPT, Claude and Perplexity.
But the open web and search aren’t the only destinations at risk. Let’s go back to the criteria for using agents that I described above. Tasks that are functional with objective evaluation measures are more likely to be disintermediated. That implies that retail media will also likely take a hit, especially in some product categories. And, as mentioned above, even some social traffic—functional platforms like Reddit, LinkedIn, and X/Twitter—may be susceptible. (Personally, I think there is very high value on a very small proportion of posts on X. I would love an agent that could surface just those.)
From An Attention Economy to an Action Economy
To sum up: the foundation of the web economy is advertisers paying for consumer attention in the expectation that a small percentage of consumers will see an ad and take action. If that percentage declines markedly because noise drowns out the signal online—or, worse, if people stop seeing ads because most of their information is mediated for them—the foundation is undermined.
How would the web economy adjust? This is where things get a little speculative, but let’s try anyway. It would likely need to shift from one based on passive consumption to outcomes, from the attention economy to the action economy. That means publishers, platforms, and ecommerce sites would need to pivot from a focus on generating value from impressions, clicks, likes, and shares to persuading consumers to subscribe, purchase, or patronize—or empower their agents to do so.
Trust is the New Oil
This all brings us back to the title of this post and what I mean by “trust is the new oil.”
As the attention economy took shape in the 2010s, the expression “data is the new oil” gained prominence. The point of the analogy is that just as oil fueled the second industrial revolution, data fuels the attention economy, because behavioral, contextual, and demographic data powers the systems that optimize both attracting and monetizing attention.
Action creates a higher bar than attention. Consumers are far more likely to take action, or empower their agents to take action, when they trust the source.
The currency of the attention economy is data; the currency of the action economy is trust.
Two Types of Trust
So, how will publishers, platforms, brands, creators, and merchants generate trust? There are two types of trust online: signal-based trust and earned trust.
Signal-Based Trust
Signal-based trust is derived from third-party markers of trust: verification badges, verified identities, follower counts, and positive reviews and ratings.
The benefit of signal-based trust is that it scales. It encourages trust even when the consumer has no direct experience with the site, app, brand, or product. The drawback is that many try to game the system by manipulating these trust signals: fake product reviews, purchased or manufactured followers, SEO hacking, comment farms, etc. AI will make it even easier to rig these signals.
This is the part where crypto enthusiasts will point out that “crypto solves this.” It might. On-chain reputation systems, verifiable credentials, and incentive alignment—such as through tokens or staking—could all address the problem of trust manipulation, if there is broad adoption. So far, that has proven a big if. We’ll see.
Earned Trust
Unlike signal-based trust, earned trust can’t be gamed. It has to be, well, earned, by implicitly (or explicitly) making a promise to deliver value in a consistent and authentic way and keeping that promise over time. There’s no other way.
The Implications of a Trust-Based Economy
Here’s an interesting thought exercise. We already know what the web looks like when everyone is incented to attract attention. Not good. But what would the implications be if, instead, everyone were trying to earn trust? That is, obviously, a very different web. For one thing, attention is measured in time and bounded, so competition for it is zero-sum, cutthroat, and often somewhat shrill and desperate. Trust is not bounded. It is earned through sustained, consistent effort. If the most prevalent basis of competition were trust, not noise, the web would be a kinder, gentler place. Here are some more practical implications:
Rising value of trusted human curation, declining value of the algorithm. As algorithmic feeds become oversaturated with synthetic content, trusted human curators—creators, influencers, and editors—become even more essential to cutting through the noise.
Discovery shifts back from feed to follow. At SXSW last year, Patreon founder Jack Conte delivered a keynote called “Death of the Follower & the Future of Creativity.” His point was that the rise of feeds—first on TikTok, but now prevalent on Instagram, YouTube, and X/Twitter—has shifted the gravity online from subscribed relationships to algorithmic feeds. If those feeds get swamped with crappy, synthetic content, we could see a shift back.
A swing back toward trusted brands. Similarly, if open platforms get overwhelmed with synthetic content and algorithms become less reliable, consumers will lean more on institutions with trusted reputations built over time, both media outlets and consumer brands. That’s good news for traditional media companies in an environment short on it.
More challenges for the tail. With a growing amount of noise, it will be harder for new creators and brands to earn trust.
Ad models shift to outcomes. If raw attention loses value and consumers increasingly use AI to mediate their interactions online, advertisers may shift their emphasis from potential energy to kinetic energy—from the possibility of outcomes to outcomes themselves.
Even before the effects of AI are widely felt, a lot of this is already starting to play out.
Wall Street research firm MoffettNathanson recently published a report called “U.S. Advertising: The Funnel Has Collapsed Into One As Ad Growth Continues to Surprise To The Upside.” It made the point that advertising spend continues to gravitate down funnel as the top of the funnel loses value (something that’s illustrated in Figure 3 above).
In a recent interview with Ben Thompson, author of Stratechery, Mark Zuckerberg floated the idea of an ad model entirely predicated on outcomes, a tacit acknowledgement that the attention-based model is taking on water:
I basically think that there are four major product and business opportunities [from AI] that are the things that we’re looking at and I’ll start from the most simplest…Use AI to make it so that the ads business goes a lot better…Improve recommendations, make it so that any business that basically wants to achieve some business outcome can just come to us, not have to produce any content, not have to know anything about their customers. Can just say, “Here’s the business outcome that I want, here’s what I’m willing to pay, I’m going to connect you to my bank account, I will pay you for as many business outcomes as you can achieve”.
Over the last several years, the most successful creators have increasingly realized that there is much greater value in monetizing trust than attention. They make a lot more by being trusted curators—of information, culture, products, services, other creators, and even investment opportunities—than just attracting attention and selling ads. This is perhaps best captured through a recent LinkedIn post by Steven Bartlett, host of The Diary of a CEO podcast, in reference to being named to the Forbes list of Top 25 creators. Here’s an edited version:
it's crazy to be on this list, it's crazy to be the only non-American name on this list... the part people aren't talking about is:
For this list Forbes essentially measured *revenue from your creator business* - not equity or revenues from companies you own.
The list says:
✅ MrBeast made $85m± last year from his creator business.
🚀 But he's leveraged that audience to build a company called Beast Holdings worth $5billion± that may generate nearly $1b± in revenue this year. (*I'm an investor in his company).
✅ Jake Paul made $50m± last year from his creator business.
🚀 But he's leveraged that audience to build a boxing promotions company, a venture fund, an energy drink, a betting app and a mens care brand likely worth $500m±.
✅ Alex Cooper made $32m± revenue last year from her creator business.
🚀 But she's leveraged that audience to build a media network, a hydration drink, and a radio network likely worth $200m±.
Here are 10 important ideas to think about...
❌ Creators are no longer advertising space
✅ The top creators are now fully fledged entrepreneurs, building billion dollar companies
✅ Creators are turning into venture capital investors
✅ A community is harder to copy than a product
✅ AI will slash production costs and put a premium on trust and connection at scale
✅ The new moat is relationships, not physical products
✅ Creators are diversifying like hedge funds - sports, beverages, software
✅ Sponsor money is just the underwriting fee for bigger bets, cash is funding equity plays
✅ Silicon Valley is starting to scout TikTok before Stanford
The creator economy, how companies are built and investing is changing before our very eyes
Cautiously Hopeful
I don’t want to be naive, simplistic, or dogmatic about this. Neither the fight for attention nor its negative consequences are going away. Social networks will still be incented to keep us hooked for far longer than our better judgment would like. Outrageous and polarizing content will still be rewarded. Sites and apps will still obfuscate the ways they track us online. But, on the margin, the rewards of all this attention seeking will likely diminish.
In that wake, the value of sustained, trusted relationships will grow. That will shift the incentive structures and the balance of power online. That’s got to be a good thing, right?