Infinite Content: Chapter 13
Culture and Society Under Content Abundance: The Good, the Bad, and the Ugly
This is the draft thirteenth chapter of my book, Infinite Content: AI, The Next Great Disruption of Media, and How to Navigate What’s Coming, due to be published by The MIT Press in 2026. The introductory chapter is available for free here. Subsequent draft chapters will be serialized for paid subscribers to The Mediator and can be found here.
Recently, a neighbor came over. She’s an artist who is staunchly “anti-AI.” She started to bring up the topic, then stopped herself, looked at me sheepishly, and put up her hands. “Well,” she said, as though she had impulsively raised religion or politics at Thanksgiving dinner and now wanted to avoid a confrontation, “let’s talk about something else.”
Her caution was unnecessary. I wouldn’t have been offended. I might not even have disagreed. Like many, I am amazed by what AI can do and how quickly it is advancing. Like many, I am also wary of what it will cost us.
The goal of this chapter is to pose one overarching question: If content becomes infinite, fluid, personalized, and increasingly synthetic over the next decade, what happens to culture and the social fabric?
To be clear, this chapter won’t address the effects of AI on culture and society in general. We won’t explore whether AI will cause mass unemployment, exacerbate income inequality, or reduce humans’ cognitive capacities as people outsource more thinking to machines. Nor is the goal to answer the question definitively. How the advent of GenAI in media will affect culture and society is both a laughably broad topic and one it is far too early to settle. Even though I framed the question in terms of “the next decade,” it will play out over a much longer horizon. But it’s important that we ask it now.
There is a techno-utopian answer: AI will unambiguously improve human welfare. There is the opposing, neo-Luddite, doomer answer: we’re cooked. Both have some merit.
The good: GenAI will have some important positive cultural and social effects. It will increase representation from people who would never otherwise have the resources to create content. It will enable creatives and creators to take risks and explore stories, formats, and experiences that would otherwise be technically and financially impossible. And it will democratize creative expression for everyone.
The bad: It is easier to be pessimistic. Even if you grant those obvious societal benefits, one could argue persuasively that the downside risk is far worse. There is at least strong circumstantial evidence that social media has increased political polarization, degraded public discourse, and probably damaged our collective mental health. By enabling infinite, personalized, fluid, synthetic content, GenAI threatens to pull us even further apart and arguably weaken the very fabric of society. In particular, it exposes four fault lines that we’ll explore in depth: 1) what happens to social cohesion if we share fewer stories; 2) whether an increasingly personalized, synthetic world will deepen social isolation; 3) what the consequences are if we can’t agree on what constitutes truth; and 4) how people will function under information overload.
The ugly: I will stake out a more nuanced middle ground. Culture and society are highly complex adaptive systems. The words “complex” and “adaptive” are key. The agents in these systems (that is, people) are driven by simple needs, but those needs are often in opposition and don’t all pull us in the same nihilistic direction. These systems are also path dependent, by which I mean that changes within them, like incentives, regulations, consumer preferences, and societal norms, will affect how they evolve. And social systems are often self-correcting. It may take a while (sometimes measured in decades) and things often have to get pretty bad first, but they do self-correct. So, it will be ugly, or at least messy and complex. But the worst-case scenarios are not necessarily the most likely. There is cause for hope.
A good place to start is to be clear about what “infinite, fluid, personalized, synthetic” media really means.
What “Infinite, Fluid, Personalized, and Synthetic” (IFPS) Actually Means
Let’s paint a picture of what it means if content becomes infinite, fluid, personalized, and increasingly synthetic over the next decade (or what I’ll call IFPS for the rest of the chapter). I’ll take them out of order.
Fluid. In Chapter 11, I explained what I mean by fluid. When there is a sufficiently high-dimensional multimodal latent space, it will be possible to express any content along any number of modalities. A book could be a movie, a serialized vertical video, a game, an album, a podcast, or adapted for any platform (re-cut and formatted for YouTube, TikTok, Twitter, LinkedIn, Instagram, etc.). The format and form will no longer be fixed. Similarly, when content can be produced in real time and becomes contextual, emergent, and interactive, provenance will become fluid, as the line blurs between who created something and who consumed it. For the same reason, state will become fluid, as the line blurs between finished and unfinished content. So, fluid means fluid along every dimension: plot elements, structure, characters, style, tone, modality, format, state, provenance, you name it. Not only will there be an infinite amount of content, but every single piece of content may have infinite possible permutations.
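For readers who think in code, here is a deliberately toy sketch of the shape a fluid-content pipeline might take: a single, modality-agnostic representation of a story, decoded on demand into different formats and lengths. Every name in it (encode, decode, Modality, LatentContent) is hypothetical; this illustrates the concept, not any real system’s API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Modality(Enum):
    BOOK = auto()
    FILM = auto()
    VERTICAL_VIDEO = auto()
    PODCAST = auto()
    GAME = auto()

@dataclass
class LatentContent:
    """A modality-agnostic representation of a story: the essence
    (characters, plot, tone) lives here, independent of any one format."""
    embedding: list[float]     # stand-in for a high-dimensional vector
    metadata: dict[str, str]   # e.g. {"source": "book"}

def encode(source_text: str) -> LatentContent:
    # A real system would use a multimodal encoder model here;
    # this stub fakes a vector so the sketch runs.
    vec = [float(ord(c)) for c in source_text[:8]]
    return LatentContent(embedding=vec, metadata={"source": "book"})

def decode(latent: LatentContent, target: Modality,
           max_minutes: int | None = None) -> str:
    # A real decoder would generate actual media; this stub just
    # describes the requested rendering, including a length constraint.
    constraint = f" (cut to {max_minutes} min)" if max_minutes else ""
    return f"Rendered {latent.metadata['source']} as {target.name}{constraint}"

latent = encode("Once upon a time...")
print(decode(latent, Modality.PODCAST))                         # same story, audio form
print(decode(latent, Modality.VERTICAL_VIDEO, max_minutes=12))  # "I only have 12 minutes"
```

The point of the sketch is the interface, not the stubs: once the essence of a piece of content is stored independently of any one format, “a book could be a movie” becomes a decoding choice rather than a production project.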
Personalized. Personalized follows from fluid. While it is often reduced to “people will insert themselves as Luke Skywalker,” personalization means a lot more than that. Consumers will have the ability to tailor content into any version they want (or have an agent or algorithm tailor it for them) along all those dimensions. It will also be possible to personalize content based on context, need state, preferences, or current constraints (“I only have 12 minutes”). That doesn’t mean people will always personalize content, of course. Many will want to just consume the canonical version of the thing, maybe most of the time. But they could.
Synthetic. There will be a continuum of human involvement in content creation. It will probably range from “entirely” human (although even “entirely” human production may use AI for help with ideation, editing, or other finishing touches); to hybrid productions that involve significant human oversight and judgment but delegate many production decisions to AI; to fully synthetic. By increasingly synthetic, I mean two things. First, as the capabilities of GenAI models improve, it will probably make sense even for largely human and hybrid productions to include more synthetic elements. Second, since synthetic content systems will be capable of creating at the speed of computation, the proportion of all content that is entirely synthetic will likely increase substantially over time.
Infinite. Infinite follows from all the above. Consumers will have essentially infinite choice: not just an infinite supply of content, but a bewildering (and effectively infinite) array of options.
That describes a very different media environment than the one in which we live today. It will change a lot. And a lot is at stake.
Media Theory in Two Words: Media Matters
It may seem self-evident today that the form of media has enormous effects on culture and society, but that wasn’t always the case. Prior to the 1940s or so, the prevailing view was that different forms of media were just different conduits for information. From the 1950s through the 1970s, a new school of thought emerged: media not just as dumb pipes, but as environments that shape human thought, public discourse, and what society optimizes.
The roots of this idea can be traced to Harold Innis, an economic historian. Writing in the late 1940s and early 1950s, Innis argued that different media have structural biases: durable, “time-biased” media like stone and clay favor tradition and continuity, while portable, “space-biased” media like papyrus and paper favor administration across distance and empire. His main point was that the inherent properties of communications technologies are causal forces in history.
In the late 1950s and early ’60s, Marshall McLuhan built on Innis’ ideas to propose a radical theory of culture (and, in the process, become something of an academic celebrity): the largest effects of media occur independently of the content it carries. Or, as he famously wrote in Understanding Media: The Extensions of Man, published in 1964, “the medium is the message.” McLuhan wasn’t arguing that the content didn’t matter, but that the form of a medium shapes how its content is perceived. Print promotes linear, sequential thinking; television reinforces immediacy, emotion, and image; and electronic media collapse distance and time.
Around the same time, Walter Ong, a cultural historian, extended this insight by focusing on how communication technologies shape consciousness and cognition. Media do not merely influence what we communicate; they affect how we think. He argued that oral societies favor memory, repetition, and communal knowledge, while literate and print cultures enable abstraction and analytic reasoning.
Another of McLuhan’s contemporaries was Neil Postman. Postman formalized and extended McLuhan’s ideas into a theory of media ecology: media as environments that shape thinking, public discourse, and society itself. While McLuhan didn’t (really) pass judgment on the structural effects of a medium, Postman sure did. In Amusing Ourselves to Death, published in 1985, he was especially concerned with the transition from a “typographic” culture, dominated by print, to a visual culture, dominated by TV. His central claim was that while print culture favored sustained argument and the hierarchical organization of ideas, TV turned everything, especially politics and news, into entertainment, making sustained argument, seriousness, and rational deliberation increasingly difficult. He also emphasized that incentives shape outcomes, which is where the media ecology concept kicks in: institutions evolve to survive within their media environment, so the media environment affects what society optimizes and, as a result, how it organizes itself.
Today, it seems obvious that these guys were onto something: media shapes culture and society, not the other way around. In recent years, a phrase commonly associated with Postman is “eerily prescient.” Amusing Ourselves to Death was written more than 40 years ago, but he predicted a lot about our current media environment. Now the debate is no longer whether technologies like social media and smartphones affect us, but how, and what, if anything, we should do about it.
Social media has had some positive societal effects. It makes it easier to connect with far-flung friends and family; it provides community for those who would otherwise be socially isolated; it has birthed an entire class of creators who can make a living or at least supplement their income through content creation; it enables civic engagement; and it helps people manage emergencies and natural disasters.
There has, however, been a lot more focus on the costs. Psychologist Jonathan Haidt, for instance, is a vocal critic of smartphone use by children. Although some of his claims are controversial, in books like The Anxious Generation he argues that smartphones and social media adversely affect children’s cognition and mental health. As another example, Tristan Harris, a former Google design ethicist, co-founder of the Center for Humane Technology, and the main subject of the documentary The Social Dilemma, was one of the first from inside the technosphere to sound the alarm about the adverse effects of social media.
Like social media, GenAI in media has the potential for both social benefits and social harms.




