The Mediator

Infinite Content: Chapter 9

GenAI: The Next Great Disruption of Media

Doug Shapiro
Oct 21, 2025

This is the draft ninth chapter of my book, Infinite Content: AI, The Next Great Disruption of Media, and How to Navigate What’s Coming, due to be published by The MIT Press in 2026. The introductory chapter is available for free here. Subsequent draft chapters will be serialized for paid subscribers to The Mediator and can be found here.


Where to even start with AI? It has inspired awe and fear like few technologies in history, perhaps second only to nuclear fission. It is highly controversial, and for good reason: it challenges what many believe to be the essence of being human.

W. Brian Arthur captured this unease most clearly in The Nature of Technology: What It Is and How It Evolves:

Our deepest hope as humans lies in technology; but our deepest trust lies in nature. These forces are like tectonic plates grinding inexorably into each other in one long, slow collision… We are moving from an era where machines enhanced the natural—speeded our movements, saved our sweat, stitched our clothing—to one that brings in technologies that resemble or replace the natural…we are moving from using nature to intervening directly within nature.

So, AI is unsettling. Policy can shape it, but there are few precedents for stopping a technology outright. It’s coming, for better or worse.

Part of the reason for this inevitability is that it is attracting vast amounts of capital and brain power. Consider this: according to Pitchbook, AI-related companies raised $132 billion in 2024, more than one-third of all venture capital invested globally. Alphabet, Amazon, Meta, and Microsoft spent $236 billion in capital expenditures in 2024, up more than 50% from 2023—with the increase almost entirely attributable to AI spending. In fiscal 2025, Nvidia, the company whose chips train and run state-of-the-art AI models, generated $130 billion in revenue, having grown more than 60% annually over the previous five years. Between the end of 2020 and October 2025, its market capitalization increased 13-fold to $4.5 trillion, making it the most valuable company in the world.

AI is moving too fast for a blog to keep up, let alone something meant to endure as long as a book. Plus, even the top experts, who have devoted their professional lives to AI, cannot agree on some of the most fundamental questions about its future, such as whether:

  • the ongoing development of large language models (LLMs) puts us on a path to artificial general intelligence (AGI) or, as Meta Chief AI Scientist Yann LeCun says, LLMs are just an “off ramp,” with fundamental constraints;

  • the benefits of scale in data and compute will continue indefinitely or, as Bill Gates argues, we’ll get only “two more turns of the crank”;

  • small-scale models that can run on a laptop or even a phone will suffice for most applications;

  • consumers and enterprises are really using it or just trying it out;

  • current AI valuations are a bubble;

  • value will flow to the closed-source frontier models (such as those from Google, OpenAI, and Anthropic) or open-source models will commoditize the foundational model layer;

  • it will replace human labor to such a degree that we will need universal basic income (UBI) to maintain a functioning society; and

  • it will or won’t kill us all.

There is much we don’t know.


In this chapter, let’s focus on what we do know. Like the internet before it, GenAI is another general purpose technology (GPT)1 and it will have profound implications across the entire economy, well beyond the media industry. How profound is an open question, but even if it stopped advancing today—which it won’t—it would fundamentally change the nature of knowledge work.

The internet unbundled information from infrastructure and, as described in Chapter 3, catalyzed a series of related technologies that set the cost to move bits around on a path toward zero. In one regard, this makes the internet like most other disruptive technologies throughout the history of media. It lowered the cost of distribution. GenAI is different, because it is squarely focused on the cost of creation. It is poised to set the cost to make bits on a path toward zero. We will return to this distinction repeatedly, because almost every downstream effect of GenAI in media flows from that difference.

How does it reduce the cost to make bits? Good question. If you’re looking to educate yourself, there are countless books on the topic or, even better, a few dozen key technical papers. If you’ve read those, I suggest you skip ahead.

If not, let’s roll up our sleeves. In the introductory chapter, I warned that this book will get a little wonky in parts. Some of the material in this chapter is a bit technical. But my goal is to provide enough background—as accessibly and intuitively as I can—about the capabilities, limitations, and potential of GenAI to ground our discussion throughout the rest of the book. I use myself as a proxy: what do I need to know to gauge the potential implications of GenAI?

Here are some of the topics we’ll cover:

  • How GenAI fits in with the broader evolution of artificial intelligence.

  • The difference between a discriminative and a generative AI model and why they (kind of) do opposite things.

  • Why generative AI is so revolutionary.

  • How generative models (actually) make new stuff, from first principles.

  • Why GenAI is not an isolated innovation in media, but builds on the breakthroughs of digitization and introduces a new symbolic layer.

  • Why, from a technical standpoint, accusations that GenAI models “steal” intellectual property are not clear cut.

  • Why the greatest strength of GenAI models—their ability to create novel information—also causes their biggest weaknesses, like the tendency to make things up, lack of controllability, and opacity.

  • Why so many of the initial use cases for GenAI cluster in media.

  • Why GenAI will invariably exacerbate the tectonic trends we explored in the first half of the book.

In subsequent chapters, we’ll turn to the most important “known unknowns” about GenAI in media, how we can still mold that uncertainty into some potential scenarios, the likely effect of GenAI on the flow of value across the media value chain, and the cultural and societal implications of “infinite content.”

A Brief Primer on AI

It is hard to pinpoint the conceptual birth of artificial intelligence. In the 13th century, Franciscan mystic Ramon Llull created the “Ars generalis ultima,” a system of rotating paper disks meant to generate new ideas. Four hundred years later, Gottfried Wilhelm Leibniz wrote about reasoning as a mechanical calculation and envisioned a reasoning device, called the “calculus ratiocinator” (which sounds a little like something designed by the evil genius Dr. Doofenshmirtz in Phineas and Ferb).

In the 1940s, Warren McCulloch and Walter Pitts created the first artificial neural network model and in 1950, Alan Turing conceived of the Turing Test, a test of whether a machine could exhibit intelligence indistinguishable from a human. We’ll come back to Turing and his test in the next chapter.
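The McCulloch-Pitts model is simple enough to capture in a few lines. Each artificial neuron takes binary inputs, sums them according to fixed weights, and “fires” only if the sum reaches a threshold. A minimal sketch (the weights and thresholds below are illustrative choices, not from their paper):

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: fire (return 1) if the weighted sum
    of binary inputs meets or exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and threshold, one unit computes a logic gate:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

The striking idea was that networks of such trivially simple units could, in principle, compute any logical function—an early hint that intelligence might be built from sub-symbolic parts.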

In 1956, artificial intelligence pioneers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon (yes, Shannon again) gathered at the so-called Dartmouth Conference, where the term “artificial intelligence” was born. (Interestingly, McCarthy coined the term to avoid association with cybernetics and its founder, Norbert Wiener.) Famously (or maybe infamously), the idea behind the conference was that ten people working together for two months could make significant progress in enabling a machine to simulate “every aspect of learning or any other feature of intelligence…” It proved a tad more complicated than that, but the field was born.

Let’s try to understand modern AI on a conceptual level.

Symbolic Systems

A critical distinction in AI is the difference between symbolic and sub-symbolic systems. The first efforts at AI, such as those at the Dartmouth Conference, were based on symbolic systems. The central premise of symbolic systems is that human knowledge is fundamentally about manipulating symbols according to logical rules. So, if you could identify and encode those rules in a machine, the machine could combine these symbols in new ways and come up with new ideas.

The first AI systems were based on symbolic systems and the idea that human knowledge could be hard coded.

Importantly, since people encode these rules, it follows that people can understand them. This is obvious, but I point it out to draw a distinction with sub-symbolic systems, which we’ll discuss in a moment.

Among the first symbolic applications were so-called expert systems, deployed in the 1970s, which paired domain-specific knowledge bases with inference engines for tasks such as configuring computer systems or diagnosing disease. They were basically sophisticated “IF-THEN” systems.
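To make the IF-THEN idea concrete, here is a toy sketch of a rule-based diagnostic system in the 1970s spirit. The rules and facts are invented for illustration; real expert systems like MYCIN had hundreds of rules and more elaborate inference:

```python
# Knowledge is hard-coded as human-readable IF-THEN rules:
# IF all conditions are among the known facts, THEN conclude.
RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "stiff neck"}, "refer to specialist"),
    ({"sneezing", "itchy eyes"}, "suspect allergies"),
]

def diagnose(facts):
    """Return the conclusion of every rule whose IF-part is satisfied."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]  # rule fires if all conditions are facts

print(diagnose({"fever", "cough"}))  # ['suspect flu']
print(diagnose({"headache"}))        # [] -- no rule matches, system is silent
```

The last line illustrates the brittleness described above: confront the system with anything outside its encoded rules and it simply has nothing to say.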

By the late 1980s and early 1990s, it was becoming clear that purely symbolic approaches couldn’t scale to real-world complexity. The fundamental problem was that it proved a lot tougher than expected to hard-code knowledge, especially common sense and rules of thumb. Also, the systems were brittle and failed when encountering edge cases that fell outside the rules—of which real life has more than a few. They only worked when constrained to narrow tasks.

The general philosophical approach of symbolic systems—that knowledge could be pre-programmed—arguably reached its pinnacle2 when IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997. Strictly speaking, Deep Blue was not a symbolic system. It didn’t reason using human-readable logic; it searched massive numbers of future positions (up to 200 million per second), scoring them with evaluation functions created in consultation with chess experts. While it succeeded in a narrow domain, this brute-force approach bore little resemblance to how humans think.
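The core of that brute-force approach is game-tree search: look ahead through possible future positions, score the leaves with a hand-crafted evaluation function, and assume each side plays its best move. A vastly simplified sketch, using a toy game (a position is just a number, and each move adds 1 or 2) rather than chess:

```python
def minimax(position, depth, maximizing, moves, evaluate):
    """Search `depth` plies ahead; return the best achievable score
    assuming both sides play optimally."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)  # expert-tuned scoring at the leaves
    scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(scores) if maximizing else min(scores)

# Toy game: the "evaluation function" is just the position's value.
best = minimax(0, depth=3, maximizing=True,
               moves=lambda p: [p + 1, p + 2],
               evaluate=lambda p: p)
print(best)  # 5: max plays +2, min plays +1, max plays +2
```

Deep Blue’s edge came not from cleverer reasoning but from doing this at staggering scale, with chess-specific evaluation functions tuned by hand—exactly why its success told us little about general intelligence.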
